earlz wrote: If I turn my eyes slightly out of focus, the image looks exactly like the 32bit one.
Heh, yeah, it does!
RGB to VGA
- Troy Martin
- Member
- Posts: 1686
- Joined: Fri Apr 18, 2008 4:40 pm
- Location: Langley, Vancouver, BC, Canada
- Contact:
Re: RGB to VGA
Brendan: Wow. Just freaking. Wow.
- Firestryke31
- Member
- Posts: 550
- Joined: Sat Nov 29, 2008 1:07 pm
- Location: Throw a dart at central Texas
- Contact:
Re: RGB to VGA
With the latest gradient picture I noticed that there are some almost sharp transitions, mostly at the midway point between the color gradients and about 1/3 of the way with the grayscale gradient on the right and to a lesser extent on the left. With the pixels->triangles->pixels image, how many triangles are there, 2 per pixel? How does it look scaled? I do a bit of 3D graphics work (not much, but still) and am a bit interested in the performance of this system. It probably can't be used for real time rendering, right?
Owner of Fawkes Software.
Wierd Al wrote: You think your Commodore 64 is really neato,
What kind of chip you got in there, a Dorito?
Re: RGB to VGA
Hi,
Firestryke31 wrote: With the latest gradient picture I noticed that there are some almost sharp transitions, mostly at the midway point between the color gradients and about 1/3 of the way with the grayscale gradient on the right and to a lesser extent on the left.
I noticed that too, and I'd like to blame it on gamma correction. If I take gamma correction into account (which was easy enough to do for 4-bpp, as it's almost purely table look-ups anyway, with small tables) the transition is smoother, but the gradient doesn't seem right (the halfway point looks like it's at about 75% brightness rather than 50% brightness).
Firestryke31 wrote: With the pixels->triangles->pixels image, how many triangles are there, 2 per pixel? How does it look scaled?
So far I've put a lot of work into getting the conversion from triangles back into pixels right (especially improving sub-pixel accuracy), but not so much work into converting pixels into triangles.
For scaling, here's the original picture (which was 80 * 80 pixels) scaled down to 25 * 25 pixels (without any rotation): And here's a very small part of the picture (the top of the 'S' in "HAS"), scaled up to 8000 * 8000 pixels (with 10 degrees clockwise rotation): Note: For this picture, the browser I'm using scales it down, and the browser's scaling algorithm causes anti-aliasing issues (the diagonal lines look lumpy). If your browser is doing the same thing then click on the image to see the unscaled version.
The triangles themselves are stored using 32-bit coordinates, where the colours are encoded as (my version of) CIE LAB (and where the intensity is a simplified floating point value that ranges up to several times brighter than the sun) and there's also alpha. The colours are converted into CIE XYZ for all processing (actually "XYZA"), and when everything is done the pixel data is converted from CIE XYZ into RGB and then gamma correction is applied (before spitting out the data as a bitmap).
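A purely hypothetical sketch (not Brendan's actual format) of how such a colour might be packed, assuming 16-bit A and B plus a 13-bit significand and 5-bit exponent for L (the widths given later in the thread); the field order, exponent bias and names here are guesses:

```c
#include <stdint.h>
#include <math.h>

/* Guessed layout: 13-bit significand in bits 0..12, 5-bit exponent in bits 13..17. */
struct lab_colour {
    uint32_t l;     /* intensity, simplified floating point */
    uint16_t a, b;  /* A and B as 16-bit unsigned integers  */
};

/* Encode a non-negative intensity into the 13+5 bit form (hypothetical scheme). */
static uint32_t encode_l(double intensity)
{
    int exp, e;
    double frac;
    uint32_t sig;

    if (intensity <= 0.0)
        return 0;
    frac = frexp(intensity, &exp);        /* intensity = frac * 2^exp, frac in [0.5, 1) */
    sig  = (uint32_t)(frac * 8192.0);     /* 13-bit significand */
    e    = exp + 15;                      /* guessed exponent bias */
    if (e < 0)  e = 0;
    if (e > 31) e = 31;                   /* clamp to 5 bits */
    return ((uint32_t)e << 13) | (sig & 0x1FFF);
}
```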
You are of course correct - at the moment the utility that converts pixel data into triangles (mostly) does create 2 solid triangles per pixel. The only exception (at the moment) is if several pixels on the same line are the same colour (in this case the utility will create 2 triangles for the row of pixels).
Now that (I think) I've got the conversion from triangles to pixels correct my next step is to replace the extremely simple "2 triangles per pixel" code with some advanced voodoo; although to be honest there's a pile of ideas floating around in my head about the best way to do this, and none seem all that attractive at the moment.
My current thinking is to split the original picture into 2 arbitrary halves (triangles); and then (recursively) for each triangle, use edge detection to find any edges, then split the triangle into 3 (or 2 in very lucky cases) smaller triangles along the detected edge, until all triangles can be described by a solid or shaded triangle (with an acceptable amount of error) and don't need to be split up further.
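A minimal sketch of what the "acceptable amount of error" test could look like for the solid-triangle case, assuming the pixels covered by a candidate triangle have already been gathered; the edge detection and the actual split are not shown, and none of the names come from Brendan's code:

```c
#include <math.h>
#include <stddef.h>

struct xyz { double x, y, z; };  /* a colour in CIE XYZ */

/* Returns 1 if every covered pixel is within max_error of the average colour
 * (the triangle can stay solid), or 0 if the triangle should be split further. */
static int solid_is_good_enough(const struct xyz *pixels, size_t count, double max_error)
{
    struct xyz avg = { 0.0, 0.0, 0.0 };
    size_t i;

    if (count == 0)
        return 1;
    for (i = 0; i < count; i++) {
        avg.x += pixels[i].x;
        avg.y += pixels[i].y;
        avg.z += pixels[i].z;
    }
    avg.x /= (double)count;
    avg.y /= (double)count;
    avg.z /= (double)count;

    for (i = 0; i < count; i++) {
        double dx = pixels[i].x - avg.x;
        double dy = pixels[i].y - avg.y;
        double dz = pixels[i].z - avg.z;
        if (sqrt(dx * dx + dy * dy + dz * dz) > max_error)
            return 0;  /* too much error: split along a detected edge */
    }
    return 1;
}
```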
Firestryke31 wrote: I do a bit of 3D graphics work (not much, but still) and am a bit interested in the performance of this system. It probably can't be used for real time rendering, right?
At the moment performance depends a lot on how many triangles and what size image. For the image above (encoded as 12346 triangles) it takes my computer (a 2.4 GHz Intel Core 2 Quad) about 23 seconds to generate an 8000 * 8000 bitmap and about 60 ms to generate an 80 * 80 bitmap.
To convert the original 80 * 80 picture into 12346 triangles it takes about 49 ms; but larger images seem to take exponentially longer (e.g. a 1024 * 768 picture can take 10 minutes).
However, none of the code has been optimized in any way (except for GCC's "-O2"). It doesn't use SSE for anything, it's all single-threaded, it consumes at least 40 bytes per pixel (I'm using doubles for almost everything) and it gives the CPU's caches a good thrashing. Mostly it's just a prototype so I can determine if the approach is viable.
Of course if I can get it to work, then fonts, pictures, textures, mouse pointers, icons, polygons, etc. would all be converted to triangles, and the only thing all the video code in my OS will need to worry about is drawing triangles in 3D space.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
- Firestryke31
- Member
- Posts: 550
- Joined: Sat Nov 29, 2008 1:07 pm
- Location: Throw a dart at central Texas
- Contact:
Re: RGB to VGA
So you use software rendering? Have you tried it with OpenGL or DirectX? Also, have you tried combining it with the RGB->VGA system? That would be interesting...
Owner of Fawkes Software.
Wierd Al wrote: You think your Commodore 64 is really neato,
What kind of chip you got in there, a Dorito?
Re: RGB to VGA
Hi,
Firestryke31 wrote: So you use software rendering? Have you tried it with OpenGL or DirectX? Also, have you tried combining it with the RGB->VGA system? That would be interesting...
I haven't tried it with OpenGL or DirectX - at the moment it uses software only, and doesn't actually display anything (it converts files from the bitmap file format to my "triangles" file format and back). Of course it's all entirely device independent (resolution independent and colour depth/colour space independent), and for some cases (e.g. animated video, where each frame is only seen for about 16 ms) the quality of the output image can probably be reduced a lot without any noticeable difference, to improve performance. For example, I could draw the triangles very accurately and convert the resulting data into CMYK for a printer, or I could convert to RGB first and then use a 3D accelerated video driver to draw a (relatively low quality) version of the same image, or I could take a screen shot (of the triangle data rather than of the pixels generated by the video card) and then use it to create a very large extremely high quality poster sized image.
However, what I'm mostly trying to do is get the conversion from pixel data into triangles working well. At 2 triangles per pixel the overhead (in terms of processing time and disk space/RAM usage) is far too high; and if I can't do better than that then the whole idea goes in the trash.
Alternatively, if I can average around 10 pixels per triangle (across a range of pictures) then there'd be a lot fewer triangles to process, and a lot less overhead; and if I can do that then I'll consider the approach viable (and only then start looking into things like optimizing the performance, or doing it with 3D acceleration, etc).
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
- Firestryke31
- Member
- Posts: 550
- Joined: Sat Nov 29, 2008 1:07 pm
- Location: Throw a dart at central Texas
- Contact:
Re: RGB to VGA
Have you considered using one vertex per pixel? DirectX and OpenGL will interpolate the color between the vertexes of the triangle (unless you specifically state otherwise), and it shouldn't be too hard to do in a software renderer. Then it goes from 2 triangles per pixel to 2 triangles per 4 pixels, and you get free bilinear scaling.
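A rough sketch of the grid Firestryke31 is suggesting, with one coloured vertex per pixel and two triangles per cell between neighbouring pixels, so the rasterizer's colour interpolation provides the bilinear-style shading; the struct and function names are made up for illustration:

```c
#include <stdlib.h>

struct vertex { float x, y; float r, g, b; };

/* Build a (w-1) * (h-1) * 2 triangle grid over a w * h image.
 * pixels[] is row-major RGB, 3 floats per pixel; verts[] must hold w * h entries.
 * Returns a malloc'd index array with 6 indices per cell (caller frees it). */
static unsigned *build_grid(const float *pixels, int w, int h, struct vertex *verts)
{
    unsigned *idx = malloc((size_t)(w - 1) * (size_t)(h - 1) * 6 * sizeof *idx);
    unsigned *p = idx;
    int x, y;

    if (idx == NULL)
        return NULL;
    for (y = 0; y < h; y++) {
        for (x = 0; x < w; x++) {
            struct vertex *v = &verts[y * w + x];
            const float *c = &pixels[(y * w + x) * 3];
            v->x = (float)x;  v->y = (float)y;
            v->r = c[0];  v->g = c[1];  v->b = c[2];   /* vertex colour = pixel colour */
        }
    }
    for (y = 0; y < h - 1; y++) {
        for (x = 0; x < w - 1; x++) {
            unsigned i = (unsigned)(y * w + x);
            unsigned below = i + (unsigned)w;
            *p++ = i;      *p++ = i + 1;      *p++ = below;       /* first triangle  */
            *p++ = i + 1;  *p++ = below + 1;  *p++ = below;       /* second triangle */
        }
    }
    return idx;
}
```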
Owner of Fawkes Software.
Wierd Al wrote: You think your Commodore 64 is really neato,
What kind of chip you got in there, a Dorito?
- Owen
- Member
- Posts: 1700
- Joined: Fri Jun 13, 2008 3:21 pm
- Location: Cambridge, United Kingdom
- Contact:
Re: RGB to VGA
If you're doing bilinear scaling, why not just toss it at the graphics card as a texture?! Any system which requires rendering thousands of triangles, particularly ones modified regularly, is gonna kill performance. Graphics card triangle limitations are going up slower than CPU clock speeds!
Seriously, leave the image in a format the graphics card understands: Raw raster textures. In the case of pre-vectorized graphics, you can probably build them into a few vertex buffers and fire them off at the graphics card wrapped in a glPushMatrix/glPopMatrix block.
All of this triangle complexity is never gonna be fast, and it's gonna consume all the GPU power that GPU intensive applications could use anyway...
Re: RGB to VGA
Hi,
Firestryke31 wrote: Have you considered using one vertex per pixel? DirectX and OpenGL will interpolate the color between the vertexes of the triangle (unless you specifically state otherwise), and it shouldn't be too hard to do in a software renderer. Then it goes from 2 triangles per pixel to 2 triangles per 4 pixels, and you get free bilinear scaling.
That would work, but I'm hoping to get better quality results with more complex methods. For example, if there's a 5 * 5 square of black pixels next to a 5 * 5 square of white pixels, then with this approach you may or may not get a strip of grey in the middle (which would be undesirable, especially if the image is scaled up). That's one of the reasons I want to use edge detection as the basis for deciding where triangles should be. For example, in this case I'd want to end up with 2 black triangles and 2 white triangles.
Note: My code to convert triangles back into pixel data already handles "shaded triangles", it's just that the code to convert pixel data into triangles (which is mostly just a temporary hack for testing purposes) doesn't use this feature yet.
Owen wrote: If you're doing bilinear scaling, why not just toss it at the graphics card as a texture?! Any system which requires rendering thousands of triangles, particularly ones modified regularly, is gonna kill performance. Graphics card triangle limitations are going up slower than CPU clock speeds!
Seriously, leave the image in a format the graphics card understands: Raw raster textures. In the case of pre-vectorized graphics, you can probably build them into a few vertex buffers and fire them off at the graphics card wrapped in a glPushMatrix/glPopMatrix block.
All of this triangle complexity is never gonna be fast, and it's gonna consume all the GPU power that GPU intensive applications could use anyway...
I want resolution independence, which basically means that all graphics data will end up being scaled. You never get good results from scaling pixel data. Video cards use oversampling (a low quality method of anti-aliasing, to avoid jagged images) and techniques based on mipmaps (to avoid Moiré patterns), but it's all just compromise, and only really works because for things like 3D games the eye doesn't have enough time to notice the messed up/low quality parts.
I also want colour space independence. A picture should be able to contain colours (like bright cyan) that can't be represented by RGB, and should also be able to contain colours (like bright blue and bright green) that can't be represented by CMYK; and when the picture is displayed on the screen it should be converted into the best possible RGB colours (e.g. with trashed shades of cyan but almost perfect bright blue and bright green), and when the picture is sent to a printer it should be converted to the best possible CMYK colours (e.g. with trashed shades of bright blue and bright green, but almost perfect bright cyan). This includes printing a screen shot (e.g. where the original triangles that make up the screen are converted to CMYK and sent to the printer, and not the RGB pixel data itself, so that colours don't end up trashed twice).
I can render the triangles in software and get perfect results, or (where time is an issue) I can get the video card to draw the triangles and hope that the eye doesn't have enough time to notice the messed up/low quality parts. I can even use a mixture (e.g. software rendered desktop with a few GPU rendered windows on top).
For performance (for low quality GPU rendering), I don't think it'll make much difference. If a video card can handle an average of 10000 textured triangles per second (including messing about uploading textures and creating mipmaps), then it can probably handle an average of 20000 solid or shaded triangles per second (with no need to upload textures or create mipmaps). Also note that most modern video cards are capable of doing over 20 million triangles per second; which works out to less than 2.36 pixels per triangle at 1024 * 768 @ 60 Hz (although I'd assume this is textured triangles in 3D, rather than solid or shaded triangles in 2D).
For high quality software rendering, performance doesn't matter as much (e.g. one page per second is extremely fast for a colour printer); but if my current code (on my computer) can do 205766 triangles per second (12346 triangles in 60 ms), then an optimized (single-threaded) version could probably do 4 times as many (823000 triangles per second). A multi-threaded version would scale very well - for the 4 core CPU I'm using, each core could do 1/4 of the image, and it'd probably do over 3 million triangles per second. For video, this works out to about 16 pixels per triangle for 1024 * 768 @ 60 Hz.
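For anyone who wants to check the arithmetic, the pixels-per-triangle figures above are just pixels per second divided by triangles per second; a trivial sanity check:

```c
#include <stdio.h>

int main(void)
{
    double pixels_per_sec = 1024.0 * 768.0 * 60.0;  /* 1024 * 768 @ 60 Hz */

    printf("%.2f pixels/triangle at 20 million triangles/sec\n", pixels_per_sec / 20000000.0);  /* ~2.36 */
    printf("%.1f pixels/triangle at 3 million triangles/sec\n", pixels_per_sec / 3000000.0);    /* ~15.7 */
    printf("%.0f triangles/sec from 12346 triangles in 60 ms\n", 12346.0 / 0.060);              /* ~205767 */
    return 0;
}
```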
This all comes down to one thing though - reducing the number of triangles. If I can't convert that 80 * 80 "I Has Plant" picture down to 650 or fewer triangles, then...
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
- Firestryke31
- Member
- Posts: 550
- Joined: Sat Nov 29, 2008 1:07 pm
- Location: Throw a dart at central Texas
- Contact:
Re: RGB to VGA
If you really want I can post the original image for you to better test with.
Another question: Do you do the color conversion (your method->RGB) every redraw, or only once at the beginning and use cached results?
Owner of Fawkes Software.
Wierd Al wrote: You think your Commodore 64 is really neato,
What kind of chip you got in there, a Dorito?
Re: RGB to VGA
Hi,
Firestryke31 wrote: If you really want I can post the original image for you to better test with.
Thanks, but I should be OK. Currently I've got a collection of around 80 pictures, ranging from small images (mostly 128 * 64 in different BMP formats that I'm using to test my bitmap file format decoder) to huge images. The largest is a 6000 * 6000 TIF of the Orion Nebula, from the Hubble Space Telescope (which I'm hoping to use as a splash screen during boot, if I get everything working, if I can get the triangle count down, and if I get permission from the copyright holder/s).
Firestryke31 wrote: Another question: Do you do the color conversion (your method->RGB) every redraw, or only once at the beginning and use cached results?
Currently the full story goes like this...
When converting from BMP to triangles:
- For each pixel:
- Normalize the RGB values so that they range from 0 to 1 (e.g. "R = R/Rmax; G = G/Gmax; B = B/Bmax")
- Get rid of gamma correction (assuming original image uses sRGB gamma ramp)
- Convert RGB into CIE XYZ
* Create list of triangles *
- For each vertex (shaded triangles) or triangle (solid triangles):
- Convert colours from CIE XYZ into CIE LAB
- Convert from CIE LAB into my representation of LAB (A and B converted to 16-bit unsigned integers, L converted into a 13-bit significand and 5-bit exponent)
When converting from triangles into BMP:
- For each vertex (shaded triangles) or triangle (solid triangles):
- Convert my representation of LAB into CIE LAB
- Convert CIE LAB into CIE XYZ
* Draw the triangles *
- For each pixel:
- Convert CIE XYZ into RGB
- Apply gamma correction (using sRGB gamma ramp) and clamp out of range values
- Convert to correct colour depth (e.g. "R = R*255; G = G*255; B = B*255")
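The gamma and XYZ steps in the lists above are the standard sRGB conversions; here is a minimal C sketch (illustrative only, not Brendan's code, using the usual D65 sRGB matrices; the custom LAB encoding is not shown):

```c
#include <math.h>

/* Remove sRGB gamma: encoded [0,1] -> linear [0,1] */
static double srgb_to_linear(double c)
{
    return (c <= 0.04045) ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
}

/* Apply sRGB gamma: linear [0,1] -> encoded [0,1], clamping out of range values */
static double linear_to_srgb(double c)
{
    if (c < 0.0) c = 0.0;
    if (c > 1.0) c = 1.0;
    return (c <= 0.0031308) ? c * 12.92 : 1.055 * pow(c, 1.0 / 2.4) - 0.055;
}

/* Linear sRGB -> CIE XYZ (D65 white point) */
static void rgb_to_xyz(double r, double g, double b, double *x, double *y, double *z)
{
    *x = 0.4124 * r + 0.3576 * g + 0.1805 * b;
    *y = 0.2126 * r + 0.7152 * g + 0.0722 * b;
    *z = 0.0193 * r + 0.1192 * g + 0.9505 * b;
}

/* CIE XYZ -> linear sRGB (D65 white point) */
static void xyz_to_rgb(double x, double y, double z, double *r, double *g, double *b)
{
    *r =  3.2406 * x - 1.5372 * y - 0.4986 * z;
    *g = -0.9689 * x + 1.8758 * y + 0.0415 * z;
    *b =  0.0557 * x - 0.2040 * y + 1.0570 * z;
}
```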
Did I mention it hasn't been optimized yet?
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
- Owen
- Member
- Posts: 1700
- Joined: Fri Jun 13, 2008 3:21 pm
- Location: Cambridge, United Kingdom
- Contact:
Re: RGB to VGA
Brendan wrote: I want resolution independence, which basically means that all graphics data will end up being scaled. You never get good results from scaling pixel data. Video cards use oversampling (a low quality method of anti-aliasing, to avoid jagged images) and techniques based on mipmaps (to avoid Moiré patterns), but it's all just compromise, and only really works because for things like 3D games the eye doesn't have enough time to notice the messed up/low quality parts.
For resolution independence, you have vector graphics. Why bother converting bitmaps into triangles? It's never gonna scale well either way.
With 3D graphics, the biggest compromise is low polygon count; the other is lack of full screen anti-aliasing as it's expensive. Most games today use at least 1024x1024 textures; modern graphics cards can go up to 4096x4096. Anisotropic filtering provides very good results. You're never gonna get perfect from scaling anything which was stored as a bitmap. If you want good scaling, again, start with a vector format.
Additionally, graphics cards are rarely better at drawing shaded triangles than at drawing textured ones. Drawing shaded triangles doesn't get optimized because it doesn't get used.
For the colour space independence issue? My method is to store colours in whatever format the image was captured in. When drawing an image, the colour space is converted automatically; the graphics server will probably also cache converted images. Screenshots will continue to be bitmaps because that's the only practical method to provide an exact image of what is displayed on the screen. The important point is that printing and drawing on the screen can be done using exactly the same code.
Re: RGB to VGA
Hi,
Owen wrote:
Brendan wrote: I want resolution independence, which basically means that all graphics data will end up being scaled. You never get good results from scaling pixel data. Video cards use oversampling (a low quality method of anti-aliasing, to avoid jagged images) and techniques based on mipmaps (to avoid Moiré patterns), but it's all just compromise, and only really works because for things like 3D games the eye doesn't have enough time to notice the messed up/low quality parts.
For resolution independence, you have vector graphics. Why bother converting bitmaps into triangles? It's never gonna scale well either way.
Give me 4 weeks to continue my research, and I'll show you small bitmaps that are scaled up to huge sizes that make you drool.
Of course converting pixel data into vector graphics won't be suitable for real time, but that part of it can be done well before the resulting vector graphics are actually used.
Owen wrote: With 3D graphics, the biggest compromise is low polygon count; the other is lack of full screen anti-aliasing as it's expensive. Most games today use at least 1024x1024 textures; modern graphics cards can go up to 4096x4096. Anisotropic filtering provides very good results. You're never gonna get perfect from scaling anything which was stored as a bitmap. If you want good scaling, again, start with a vector format.
In theory I agree, and I do intend to start with a vector graphics format where possible, but I don't think everyone on the internet and all the digital camera manufacturers are willing to shift to triangles just yet; so I still need a way to convert (legacy) pixel data into vector graphics (triangles).
Owen wrote: Additionally, graphics cards are rarely better at drawing shaded triangles than at drawing textured ones. Drawing shaded triangles doesn't get optimized because it doesn't get used.
From a video card's perspective, a shaded triangle would be the same as a textured triangle except the (u, v) texture co-ordinates are used to determine the colour instead of using a texture - the vertex transformations, rasterization and oversampling are all the same (for these stages, they can't optimize textured triangle drawing without also optimizing shaded triangle drawing). The only major difference is that for the "lookup the colour" step there are no cache misses or VRAM bandwidth issues involved.
I wouldn't be so sure that shaded triangles aren't used either - the idea of drawing a nice blue ocean with blue monochromatic textures just seems silly.
Owen wrote: For the colour space independence issue? My method is to store colours in whatever format the image was captured in. When drawing an image, the colour space is converted automatically; the graphics server will probably also cache converted images. Screenshots will continue to be bitmaps because that's the only practical method to provide an exact image of what is displayed on the screen.
That works well, until you think about supporting 30 different graphics file formats (where even something simple like BMP has 6 variations and no sane header to easily decide which variation was used). With only 6 possible colour spaces it'd add up to over a thousand different permutations, that all need to be supported by every application that dares to draw a 2D image?
Good luck with that. I'll be having a standard file format that's used for everything, where the VFS converts legacy graphics file formats into my native graphics file format automatically (so that all applications and drivers never need to care about such an ugly mess)....
Owen wrote: The important point is that printing and drawing on the screen can be done using exactly the same code.
Sure, the same blitting routine will work for 24-bpp integer data for the first video card, and for 32-bit per channel floating point data for the second video card, and will even do half-toning on CMYK for the old colour printer, and handle 6 primary colours for the new photo quality printer; and it'll all work flawlessly regardless of how colours are represented in the source data (just with a quick "rep movsd"?).
Good luck with that too. My device drivers will be converting data from one and only one standard format, into whatever "device specific" format/s the device itself wants, and getting "as perfect as possible" results without unnecessarily messed up colours (or anti-aliasing issues, or scaling problems) every time. At least that's the idea...
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: RGB to VGA
Just ONE format? I agree with you in theory that it would be preferable, but what about compression and animation? I've always been telling myself that at the kernel level it would be necessary to support 2 at minimum. One for the static full-size bitmaps, and one for everything else. And choosing the format for the second was probably going to be even harder than the technical details of the first.
- Combuster
- Member
- Posts: 9301
- Joined: Wed Oct 18, 2006 3:45 am
- Libera.chat IRC: [com]buster
- Location: On the balcony, where I can actually keep 1½m distance
- Contact:
Re: RGB to VGA
Owen wrote: Additionally, graphics cards are rarely better at drawing shaded triangles than at drawing textured ones. Drawing shaded triangles doesn't get optimized because it doesn't get used.
Actually, it *is* used as part of the lighting process, whether done by software calling glColor or by hardware filling out that field for you. Software assigns light values to each corner of triangles, then renders that to give a nice shaded texture. More advanced software would use a second greyscale texture to model the amount of light actually hitting the surface. If it can hardly be optimised, it is only because it is exactly worth one multiplication per color component per pixel. Texture mapping is far more complex and thus far more susceptible to people trying clever tricks on it.
That means software can change the vertex colors to change the properties of the light, without having to care about the occlusion.
Even when using pixelshaders, the same class of operations are performed, be it in a different form. The result is still that a color gets interpolated, then multiplied with the texture, which is exactly what vertex colors did, only with the interpolation being predefined to linear.
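A minimal immediate-mode example of what Combuster describes: a per-vertex light value set through glColor, which the hardware interpolates across the triangle and multiplies with the texel colour (GL_MODULATE is OpenGL's default texture environment). This assumes a texture is already bound and is only an illustration:

```c
#include <GL/gl.h>

static void draw_lit_triangle(void)
{
    glBegin(GL_TRIANGLES);
        glColor3f(1.0f, 1.0f, 1.0f);   /* fully lit corner   */
        glTexCoord2f(0.0f, 0.0f);
        glVertex3f(-1.0f, -1.0f, 0.0f);

        glColor3f(0.5f, 0.5f, 0.5f);   /* half lit corner    */
        glTexCoord2f(1.0f, 0.0f);
        glVertex3f(1.0f, -1.0f, 0.0f);

        glColor3f(0.1f, 0.1f, 0.1f);   /* nearly dark corner */
        glTexCoord2f(0.5f, 1.0f);
        glVertex3f(0.0f, 1.0f, 0.0f);
    glEnd();
}
```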
- Owen
- Member
- Posts: 1700
- Joined: Fri Jun 13, 2008 3:21 pm
- Location: Cambridge, United Kingdom
- Contact:
Re: RGB to VGA
Combuster wrote:
Owen wrote: Additionally, graphics cards are rarely better at drawing shaded triangles than at drawing textured ones. Drawing shaded triangles doesn't get optimized because it doesn't get used.
Actually, it *is* used as part of the lighting process, whether done by software calling glColor or by hardware filling out that field for you. Software assigns light values to each corner of triangles, then renders that to give a nice shaded texture. More advanced software would use a second greyscale texture to model the amount of light actually hitting the surface. If it can hardly be optimised, it is only because it is exactly worth one multiplication per color component per pixel. Texture mapping is far more complex and thus far more susceptible to people trying clever tricks on it.
That means software can change the vertex colors to change the properties of the light, without having to care about the occlusion.
Even when using pixelshaders, the same class of operations are performed, be it in a different form. The result is still that a color gets interpolated, then multiplied with the texture, which is exactly what vertex colors did, only with the interpolation being predefined to linear.
What I should have said is that graphics cards are so optimized at drawing from textures that they rarely have texture cache misses [high predictability helps].
They are not optimized at drawing thousands of triangles each with 96 bits of colour data. The vertex RAM path is smaller than the texture RAM path. That colour data is a whopping 24 times the average size for a texel - 4 bits thanks to S3TC.
And if the rendering involves OpenGL immediate mode rendering (glBegin/glVertex/glColor/glTexCoord), you have far bigger problems already in that it's painfully slow and CPU limited. It's not optimized because nothing modern uses it, it's nigh on impossible to optimize in the first place, and it's being dropped from the latest OpenGL standards.