RGB to VGA

All off topic discussions go here. Everything from the funny thing your cat did to your favorite tv shows. Non-programming computer questions are ok too.
User avatar
Troy Martin
Member
Posts: 1686
Joined: Fri Apr 18, 2008 4:40 pm
Location: Langley, Vancouver, BC, Canada
Contact:

Re: RGB to VGA

Post by Troy Martin »

Brendan: Wow. Just freaking. Wow.
earlz wrote:If I turn my eyes slightly out of focus, the image looks exactly like the 32bit one.
Heh, yeah, it does!
Solar wrote:It keeps stunning me how friendly we - as a community - are towards people who start programming "their first OS" who don't even have a solid understanding of pointers, their compiler, or how an OS is structured.
I wish I could add more tex
User avatar
Firestryke31
Member
Posts: 550
Joined: Sat Nov 29, 2008 1:07 pm
Location: Throw a dart at central Texas
Contact:

Re: RGB to VGA

Post by Firestryke31 »

With the latest gradient picture I noticed that there are some almost sharp transitions, mostly at the midway point between the color gradients and about 1/3 of the way with the grayscale gradient on the right and to a lesser extent on the left. With the pixels->triangles->pixels image, how many triangles are there, 2 per pixel? How does it look scaled? I do a bit of 3D graphics work (not much, but still) and am a bit interested in the performance of this system. It probably can't be used for real time rendering, right?
Owner of Fawkes Software.
Weird Al wrote: You think your Commodore 64 is really neato,
What kind of chip you got in there, a Dorito?
User avatar
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: RGB to VGA

Post by Brendan »

Hi,
Firestryke31 wrote:With the latest gradient picture I noticed that there are some almost sharp transitions, mostly at the midway point between the color gradients and about 1/3 of the way with the grayscale gradient on the right and to a lesser extent on the left.
I noticed that too, and I'd like to blame it on gamma correction. If I take gamma correction into account (which was easy enough to do for 4-bpp, as it's almost purely table look-ups anyway, with small tables) the transition is smoother, but the gradient doesn't seem right (the halfway point looks like it's at about 75% brightness rather than 50% brightness).
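As a rough illustration of why the halfway point might drift upwards: if two colours are averaged in linear light and the result is pushed back through the sRGB gamma ramp, a 50% linear grey encodes to roughly 0.735, which is not far from the "about 75% brightness" I'm seeing. A minimal sketch using the published sRGB transfer functions (not my lookup tables):

Code:
#include <math.h>
#include <stdio.h>

/* Published sRGB transfer functions (my real code uses small tables instead). */
static double srgb_to_linear(double c)
{
    return (c <= 0.04045) ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
}

static double linear_to_srgb(double c)
{
    return (c <= 0.0031308) ? c * 12.92 : 1.055 * pow(c, 1.0 / 2.4) - 0.055;
}

int main(void)
{
    /* Midpoint of a black-to-white gradient, blended in linear light... */
    double mid_linear = (srgb_to_linear(0.0) + srgb_to_linear(1.0)) / 2.0;

    /* ...reads as roughly 73.5% once it's gamma encoded again. */
    printf("linear %.3f -> encoded %.3f\n", mid_linear, linear_to_srgb(mid_linear));
    return 0;
}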
Firestryke31 wrote:With the pixels->triangles->pixels image, how many triangles are there, 2 per pixel? How does it look scaled?
So far I've put a lot of work into getting the conversion from triangles back into pixels right (especially improving sub-pixel accuracy), but not so much work into converting pixels into triangles.

For scaling, here's the original picture (which was 80 * 80 pixels) scaled down to 25 * 25 pixels (without any rotation):
5587_sti_25.png
And here's a very small part of the picture (the top of the 'S' in "HAS"), scaled up to 8000 * 8000 pixels (with 10 degrees clockwise rotation):
5587_sti_100.png
Note: For this picture, the browser I'm using scales it down, and the browser's scaling algorithm causes anti-aliasing issues (the diagonal lines look lumpy). If your browser is doing the same thing then click on the image to see the unscaled version.

The triangles themselves are stored using 32-bit coordinates, where the colours are encoded as (my version of) CIE LAB (and where the intensity is a simplified floating point value that ranges up to several times brighter than the sun) and there's also alpha. The colours are converted into CIE XYZ for all processing (actually "XYZA"), and when everything is done the pixel data is converted from CIE XYZ into RGB and then gamma correction is applied (before spitting out the data as a bitmap).
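To give a rough idea of what a vertex costs, it's something like this (the L/A/B widths are as described above; the coordinate interpretation, the alpha width and the quantisation range shown here are only illustrative, not the final layout):

Code:
#include <stdint.h>

/* One triangle vertex, roughly. */
typedef struct {
    int32_t  x, y;      /* 32-bit coordinates (interpretation not shown here)  */
    uint16_t lab_a;     /* my LAB "A", 16-bit unsigned                         */
    uint16_t lab_b;     /* my LAB "B", 16-bit unsigned                         */
    uint32_t lab_l;     /* "L" as simplified floating point: 13-bit
                           significand + 5-bit exponent (18 bits used)         */
    uint16_t alpha;     /* alpha; the width here is just a placeholder         */
} vertex_t;

/* Quantise a CIE LAB A/B value to 16 bits, assuming it's clamped to some
   fixed range first (the exact range is an implementation detail). */
static uint16_t quantise_ab(double v, double vmin, double vmax)
{
    double t = (v - vmin) / (vmax - vmin);
    if (t < 0.0) t = 0.0;
    if (t > 1.0) t = 1.0;
    return (uint16_t)(t * 65535.0 + 0.5);
}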

You are of course correct - at the moment the utility that converts pixel data into triangles (mostly) does create 2 solid triangles per pixel. The only exception (at the moment) is if several pixels on the same line are the same colour (in this case the utility will create 2 triangles for the row of pixels).

Now that (I think) I've got the conversion from triangles to pixels correct, my next step is to replace the extremely simple "2 triangles per pixel" code with some advanced voodoo; although, to be honest, there's a pile of ideas floating around in my head about the best way to do this, and none of them seem all that attractive at the moment.

My current thinking is to split the original picture into 2 arbitrary halves (triangles); and then (recursively) for each triangle, use edge detection to find any edges, then split the triangle into 3 (or 2 in very lucky cases) smaller triangles along the detected edge, until all triangles can be described by a solid or shaded triangle (with an acceptable amount of error) and don't need to be split up further.
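In rough C, the control flow would look something like this (the error measurement, edge detection and splitting geometry are stubbed out; only the recursion is shown):

Code:
typedef struct { double x, y; } point_t;
typedef struct { point_t v[3]; } tri_t;

/* Stub: worst per-pixel colour error if this area is described by a single
   solid or shaded triangle. */
static double fit_error(const tri_t *t) { (void)t; return 0.0; }

/* Stub: edge detection inside the triangle; on success p0/p1 are where the
   detected edge crosses the triangle's sides. */
static int find_edge(const tri_t *t, point_t *p0, point_t *p1)
{
    (void)t; (void)p0; (void)p1;
    return 0;
}

/* Stub: split t along the segment p0-p1 into 3 (or 2, in the lucky case
   where the edge passes through a vertex) smaller triangles. */
static int split_along(const tri_t *t, point_t p0, point_t p1, tri_t out[3])
{
    (void)p0; (void)p1;
    out[0] = *t;
    return 1;
}

/* Append a finished solid/shaded triangle to the output list. */
static void emit(const tri_t *t) { (void)t; }

static void refine(const tri_t *t, double max_error, int max_depth)
{
    point_t p0, p1;
    tri_t parts[3];
    int i, n;

    /* Good enough (or out of depth): keep this area as one triangle. */
    if (max_depth == 0 || fit_error(t) <= max_error) {
        emit(t);
        return;
    }

    if (find_edge(t, &p0, &p1))
        n = split_along(t, p0, p1, parts);            /* split along the edge */
    else
        n = split_along(t, t->v[0], t->v[1], parts);  /* fallback split       */

    for (i = 0; i < n; i++)
        refine(&parts[i], max_error, max_depth - 1);
}

The picture itself would start as the 2 arbitrary halves, each handed to refine().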
Firestryke31 wrote:I do a bit of 3D graphics work (not much, but still) and am a bit interested in the performance of this system. It probably can't be used for real time rendering, right?
At the moment performance depends a lot on how many triangles there are and on the size of the image. For the image above (encoded as 12346 triangles) it takes my computer (a 2.4 GHz Intel Core 2 Quad) about 23 seconds to generate an 8000 * 8000 bitmap and about 60 ms to generate an 80 * 80 bitmap.

To convert the original 80 * 80 picture into 12346 triangles it takes about 49 ms; but larger images seem to take exponentially longer (e.g. a 1024 * 768 picture can take 10 minutes).

However, none of the code has been optimized in any way (except for GCC's "-O2"). It doesn't use SSE for anything, it's all single-threaded, it consumes at least 40 bytes per pixel (I'm using doubles for almost everything) and it gives the CPU's caches a good thrashing. Mostly it's just a prototype so I can determine if the approach is viable.

Of course if I can get it to work, then fonts, pictures, textures, mouse pointers, icons, polygons, etc would all be converted to triangles, and the only thing all the video code in my OS will need to worry about is drawing triangles in 3D space.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
User avatar
Firestryke31
Member
Posts: 550
Joined: Sat Nov 29, 2008 1:07 pm
Location: Throw a dart at central Texas
Contact:

Re: RGB to VGA

Post by Firestryke31 »

So you use software rendering? Have you tried it with OpenGL or DirectX? Also, have you tried combining it with the RGB->VGA system? That would be interesting...
Owner of Fawkes Software.
Weird Al wrote: You think your Commodore 64 is really neato,
What kind of chip you got in there, a Dorito?
User avatar
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: RGB to VGA

Post by Brendan »

Hi,
Firestryke31 wrote:So you use software rendering? Have you tried it with OpenGL or DirectX? Also, have you tried combining it with the RGB->VGA system? That would be interesting...
I haven't tried it with OpenGL or DirectX - at the moment it uses software only, and doesn't actually display anything (it converts files from the bitmap file format to my "triangles" file format and back). Of course it's all entirely device independent (resolution independent and colour depth/colour space independent), and in some cases (e.g. animated video, where each frame is only seen for about 16 ms) the quality of the output image can probably be reduced a lot, to improve performance, without any noticeable difference. For example, I could draw the triangles very accurately and convert the resulting data into CMYK for a printer, or I could convert to RGB first and then use a 3D-accelerated video driver to draw a (relatively low quality) version of the same image, or I could take a screen shot (of the triangle data rather than of the pixels generated by the video card) and then use it to create a very large, extremely high quality, poster-sized image.

However, what I'm mostly trying to do is get the conversion from pixel data into triangles working well. At 2 triangles per pixel the overhead (in terms of processing time and disk space/RAM usage) is far too high; and if I can't do better than that then the whole idea goes in the trash.

Alternatively, if I can average around 10 pixels per triangle (across a range of pictures) then there'd be a lot fewer triangles to process, and a lot less overhead; and if I can do that then I'll consider the approach viable (and only then start looking into things like optimizing the performance, or doing it with 3D acceleration, etc).


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
User avatar
Firestryke31
Member
Posts: 550
Joined: Sat Nov 29, 2008 1:07 pm
Location: Throw a dart at central Texas
Contact:

Re: RGB to VGA

Post by Firestryke31 »

Have you considered using one vertex per pixel? DirectX and OpenGL will interpolate the color between the vertexes of the triangle (unless you specifically state otherwise), and it shouldn't be too hard to do in a software renderer. Then it goes from 2 triangles per pixel to 2 triangles per 4 pixels, and you get free bilinear scaling.
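Roughly like this, as a sketch (made-up names; one vertex per pixel centre, with two triangles spanning each quad of four neighbouring pixels and the colours interpolated across them):

Code:
#include <stdlib.h>

typedef struct { float x, y; float r, g, b; } vertex_t;
typedef struct { unsigned i0, i1, i2; } tri_t;

/* One vertex per pixel, two triangles spanning each quad of four
   neighbouring pixel centres. */
static void image_to_grid(const unsigned char *rgb, int w, int h,
                          vertex_t **out_verts, tri_t **out_tris, size_t *out_ntris)
{
    vertex_t *v = malloc((size_t)w * (size_t)h * sizeof *v);
    tri_t *t = malloc((size_t)(w - 1) * (size_t)(h - 1) * 2 * sizeof *t);
    size_t n = 0;
    int x, y;

    for (y = 0; y < h; y++) {
        for (x = 0; x < w; x++) {
            const unsigned char *p = &rgb[3 * (y * w + x)];
            vertex_t *vv = &v[y * w + x];
            vv->x = (float)x;
            vv->y = (float)y;
            vv->r = p[0] / 255.0f;
            vv->g = p[1] / 255.0f;
            vv->b = p[2] / 255.0f;
        }
    }
    for (y = 0; y < h - 1; y++) {
        for (x = 0; x < w - 1; x++) {
            unsigned i = (unsigned)(y * w + x);
            /* upper-left half of the quad */
            t[n].i0 = i; t[n].i1 = i + 1; t[n].i2 = i + (unsigned)w; n++;
            /* lower-right half of the quad */
            t[n].i0 = i + 1; t[n].i1 = i + (unsigned)w + 1; t[n].i2 = i + (unsigned)w; n++;
        }
    }
    *out_verts = v;
    *out_tris = t;
    *out_ntris = n;
}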
Owner of Fawkes Software.
Weird Al wrote: You think your Commodore 64 is really neato,
What kind of chip you got in there, a Dorito?
User avatar
Owen
Member
Posts: 1700
Joined: Fri Jun 13, 2008 3:21 pm
Location: Cambridge, United Kingdom
Contact:

Re: RGB to VGA

Post by Owen »

If you're doing bilinear scaling, why not just toss it at the graphics card as a texture?! Any system which requires rendering thousands of triangles, particularly ones modified regularly, is gonna kill performance. Graphics card triangle limits are going up more slowly than CPU clock speeds!

Seriously, leave the image in a format the graphics card understands: Raw raster textures. In the case of pre-vectorized graphics, you can probably build them into a few vertex buffers and fire them off at the graphics card wrapped in a glPushMatrix/glPopMatrix block.
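Something along these lines for the pre-vectorized case (old fixed-function GL, assuming the VBO entry points are available; extension loading and error handling omitted):

Code:
#include <stddef.h>
#include <GL/gl.h>      /* assumes GL 1.5+ so the buffer object functions exist */

typedef struct { GLfloat x, y; GLubyte r, g, b, a; } vert_t;

static GLuint  vbo;
static GLsizei vert_count;

/* Upload the pre-vectorized triangles to the card once... */
static void upload_triangles(const vert_t *verts, GLsizei count)
{
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, count * sizeof(vert_t), verts, GL_STATIC_DRAW);
    vert_count = count;
}

/* ...then firing them off each frame is cheap. */
static void draw_triangles(GLfloat x, GLfloat y)
{
    glPushMatrix();
    glTranslatef(x, y, 0.0f);

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(2, GL_FLOAT, sizeof(vert_t), (const void *)0);
    glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(vert_t),
                   (const void *)offsetof(vert_t, r));
    glDrawArrays(GL_TRIANGLES, 0, vert_count);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);

    glPopMatrix();
}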

All of this triangle complexity is never gonna be fast, and it's gonna consume all the GPU power that GPU intensive applications could use anyway...
User avatar
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: RGB to VGA

Post by Brendan »

Hi,
Firestryke31 wrote:Have you considered using one vertex per pixel? DirectX and OpenGL will interpolate the color between the vertexes of the triangle (unless you specifically state otherwise), and it shouldn't be too hard to do in a software renderer. Then it goes from 2 triangles per pixel to 2 triangles per 4 pixels, and you get free bilinear scaling.
That would work, but I'm hoping to get better quality results with more complex methods. For example, if there's a 5 * 5 square of black pixels next to a 5 * 5 square of white pixels, then with this approach you may or may not get a strip of grey in the middle (which would be undesirable, especially if the image is scaled up). That's one of the reasons I want to use edge detection as the basis for deciding where triangles should be. For example, in this case I'd want to end up with 2 black triangles and 2 white triangles.

Note: My code to convert triangles back into pixel data already handles "shaded triangles", it's just that the code to convert pixel data into triangles (which is mostly just a temporary hack for testing purposes) doesn't use this feature yet.
Owen wrote:If you're doing bilinear scaling, why not just toss it at the graphics card as a texture?! Any system which requires rendering thousands of triangles, particularly ones modified regularly, is gonna kill performance. Graphics card triangle limits are going up more slowly than CPU clock speeds!

Seriously, leave the image in a format the graphics card understands: Raw raster textures. In the case of pre-vectorized graphics, you can probably build them into a few vertex buffers and fire them off at the graphics card wrapped in a glPushMatrix/glPopMatrix block.

All of this triangle complexity is never gonna be fast, and it's gonna consume all the GPU power that GPU intensive applications could use anyway...
I want resolution independence, which basically means that all graphics data will end up being scaled. You never get good results from scaling pixel data. Video cards use oversampling (a low quality method of anti-aliasing, to avoid jagged images) and techniques based on mipmaps (to avoid Moiré patterns), but it's all just compromise, and only really works because for things like 3D games the eye doesn't have enough time to notice the messed up/low quality parts.

I also want colour space independence. A picture should be able to contain colours (like bright cyan) that can't be represented by RGB, and should also be able to contain colours (like bright blue and bright green) that can't be represented by CMYK; and when the picture is displayed on the screen it should be converted into the best possible RGB colours (e.g. with trashed shades of cyan but almost perfect bright blue and bright green), and when the picture is sent to a printer it should be converted to the best possible CMYK colours (e.g. with trashed shades of bright blue and bright green, but almost perfect bright cyan). This includes printing a screen shot (e.g. where the original triangles that make up the screen are converted to CMYK and sent to the printer, and not the RGB pixel data itself, so that colours don't end up trashed twice).

I can render the triangles in software and get perfect results, or (where time is an issue) I can get the video card to draw the triangles and hope that the eye doesn't have enough time to notice the messed up/low quality parts. I can even use a mixture (e.g. software rendered desktop with a few GPU rendered windows on top).

For performance (for low quality GPU rendering), I don't think it'll make much difference. If a video card can handle an average of 10000 textured triangles per second (including messing about uploading textures and creating mipmaps), then it can probably handle an average of 20000 solid or shaded triangles per second (with no need to upload textures or create mipmaps). Also note that most modern video cards are capable of doing over 20 million triangles per second; which works out to less than 2.36 pixels per triangle at 1024 * 768 @ 60 Hz (although I'd assume this is textured triangles in 3D, rather than solid or shaded triangles in 2D).

For high quality software rendering performance doesn't matter as much (e.g. one page per second is extremely fast for a colour printer); but if my current code (on my computer) can do 205766 triangles per second (12346 triangles in 60 ms), then an optimized (single-threaded) version could probably do 4 times as many (823000 triangles per second). A multi-threaded version would scale very well - for the 4 core CPU I'm using, each core could do 1/4 of the image, and it'd probably do over 3 million triangles per second. For video, this works out to about 16 pixels per triangle for 1024 * 768 @ 60 Hz.
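The threading side of it would be almost trivial, because each horizontal band of the output bitmap is completely independent. Something like this (with the actual rasteriser stubbed out, and assuming plain pthreads):

Code:
#include <pthread.h>

#define NUM_THREADS 4

typedef struct {
    int first_row;              /* first scanline this worker owns        */
    int last_row;               /* one past the last scanline it owns     */
    /* ...plus pointers to the triangle list and the output pixel buffer */
} band_t;

/* Stub: draw every triangle, clipped to rows [first_row, last_row). */
static void *render_band(void *arg)
{
    band_t *band = arg;
    (void)band;
    return NULL;
}

/* Each core renders its own horizontal band; no locking is needed because
   the bands never overlap. */
static void render_all(int height)
{
    pthread_t tid[NUM_THREADS];
    band_t band[NUM_THREADS];
    int i, rows_per_band = (height + NUM_THREADS - 1) / NUM_THREADS;

    for (i = 0; i < NUM_THREADS; i++) {
        band[i].first_row = i * rows_per_band;
        band[i].last_row  = (i + 1) * rows_per_band;
        if (band[i].last_row > height)
            band[i].last_row = height;
        pthread_create(&tid[i], NULL, render_band, &band[i]);
    }
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);
}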

This all comes down to one thing though - reducing the number of triangles. If I can't convert that 80 * 80 "I Has Plant" picture down to 650 or fewer triangles, then...


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
User avatar
Firestryke31
Member
Posts: 550
Joined: Sat Nov 29, 2008 1:07 pm
Location: Throw a dart at central Texas
Contact:

Re: RGB to VGA

Post by Firestryke31 »

If you really want I can post the original image for you to better test with.

Another question: Do you do the color conversion (your method->RGB) every redraw, or only once at the beginning and use cached results?
Owner of Fawkes Software.
Weird Al wrote: You think your Commodore 64 is really neato,
What kind of chip you got in there, a Dorito?
User avatar
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: RGB to VGA

Post by Brendan »

Hi,
Firestryke31 wrote:If you really want I can post the original image for you to better test with.
Thanks, but I should be OK. Currently I've got a collection of around 80 pictures, ranging from small images (mostly 128 * 64 in different BMP formats that I'm using to test my bitmap file format decoder) to huge images. The largest is a 6000 * 6000 TIF of the Orion Nebula, from the Hubble Space Telescope (which I'm hoping to use as a splash screen during boot, if I get everything working, if I can get the triangle count down, and if I get permission from the copyright holder/s).
Firestryke31 wrote:Another question: Do you do the color conversion (your method->RGB) every redraw, or only once at the beginning and use cached results?
Currently the full story goes like this..

When converting from BMP to triangles:
  • For each pixel:
      • Normalize the RGB values so that they range from 0 to 1 (e.g. "R = R/Rmax; G = G/Gmax; B = B/Bmax")
      • Get rid of gamma correction (assuming the original image uses the sRGB gamma ramp)
      • Convert RGB into CIE XYZ
  • Create the list of triangles
  • For each vertex (shaded triangles) or triangle (solid triangles):
      • Convert colours from CIE XYZ into CIE LAB
      • Convert from CIE LAB into my representation of LAB (A and B converted to 16-bit unsigned integers, L converted into a 13-bit significand and 5-bit exponent)
Note: This is one division, three "pow(x, 2.4)", nine multiplications and six additions, for every pixel, just to go from the original RGB to CIE XYZ.
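In C, the per-pixel part of that works out to roughly the following (using the published sRGB constants; my real code differs in the details, and the "my LAB" step isn't shown):

Code:
#include <math.h>

/* Undo the sRGB gamma ramp: encoded [0,1] -> linear [0,1]. */
static double srgb_decode(double c)
{
    return (c <= 0.04045) ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
}

/* Gamma-corrected sRGB -> CIE XYZ (D65 white point). */
static void rgb_to_xyz(double r, double g, double b,
                       double *x, double *y, double *z)
{
    r = srgb_decode(r);             /* get rid of gamma correction */
    g = srgb_decode(g);
    b = srgb_decode(b);

    *x = 0.4124 * r + 0.3576 * g + 0.1805 * b;
    *y = 0.2126 * r + 0.7152 * g + 0.0722 * b;
    *z = 0.0193 * r + 0.1192 * g + 0.9505 * b;
}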

When converting from triangles into BMP:
  • For each vertex (shaded triangles) or triangle (solid triangles):
      • Convert my representation of LAB into CIE LAB
      • Convert CIE LAB into CIE XYZ
  • Draw the triangles
  • For each pixel:
      • Convert CIE XYZ into RGB
      • Apply gamma correction (using the sRGB gamma ramp) and clamp out-of-range values
      • Convert to the correct colour depth (e.g. "R = R*255; G = G*255; B = B*255")
Note: This is ten multiplications, six additions and three table lookups, for every pixel, just to go from CIE XYZ back to RGB.
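And the per-pixel part of the reverse direction (again with the published constants; the three table lookups mentioned above replace the pow() calls in my code):

Code:
#include <math.h>

/* Apply the sRGB gamma ramp: linear [0,1] -> encoded [0,1]. */
static double srgb_encode(double c)
{
    return (c <= 0.0031308) ? 12.92 * c : 1.055 * pow(c, 1.0 / 2.4) - 0.055;
}

static double clamp01(double c)
{
    return c < 0.0 ? 0.0 : (c > 1.0 ? 1.0 : c);
}

/* CIE XYZ (D65) -> 8-bit sRGB, clamping out-of-range values. */
static void xyz_to_rgb8(double x, double y, double z,
                        unsigned char *r, unsigned char *g, unsigned char *b)
{
    double lr =  3.2406 * x - 1.5372 * y - 0.4986 * z;
    double lg = -0.9689 * x + 1.8758 * y + 0.0415 * z;
    double lb =  0.0557 * x - 0.2040 * y + 1.0570 * z;

    *r = (unsigned char)(srgb_encode(clamp01(lr)) * 255.0 + 0.5);
    *g = (unsigned char)(srgb_encode(clamp01(lg)) * 255.0 + 0.5);
    *b = (unsigned char)(srgb_encode(clamp01(lb)) * 255.0 + 0.5);
}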

Did I mention it hasn't been optimized yet? :D


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
User avatar
Owen
Member
Posts: 1700
Joined: Fri Jun 13, 2008 3:21 pm
Location: Cambridge, United Kingdom
Contact:

Re: RGB to VGA

Post by Owen »

Brendan wrote:I want resolution independence, which basically means that all graphics data will end up being scaled. You never get good results from scaling pixel data. Video cards use oversampling (a low quality method of anti-aliasing, to avoid jagged images) and techniques based on mipmaps (to avoid Moiré patterns), but it's all just compromise, and only really works because for things like 3D games the eye doesn't have enough time to notice the messed up/low quality parts.
For resolution independence, you have vector graphics. Why bother converting bitmaps into triangles? It's never gonna scale well either way.

With 3D graphics, the biggest compromise is low polygon count; the other is lack of full screen anti-aliasing as it's expensive. Most games today use at least 1024x1024 textures; modern graphics cards can go up to 4096x4096. Anisotropic filtering provides very good results. You're never gonna get perfect from scaling anything which was stored as a bitmap. If you want good scaling, again, start with a vector format.

Additionally, graphics cards are rarely better at drawing shaded triangles than at drawing textured ones. Drawing shaded triangles doesn't get optimized because it doesn't get used.

For the colour space independence issue? My method is to store colours in whatever format the image was captured in. When drawing an image, the colour space is converted automatically; the graphics server will probably also cache converted images. Screenshots will continue to be bitmaps because that's the only practical method to provide an exact image of what is displayed on the screen. The important point is that printing and drawing on the screen can be done using exactly the same code.
User avatar
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: RGB to VGA

Post by Brendan »

Hi,
Owen wrote:
Brendan wrote:I want resolution independence, which basically means that all graphics data will end up being scaled. You never get good results from scaling pixel data. Video cards use oversampling (a low quality method of anti-aliasing, to avoid jagged images) and techniques based on mipmaps (to avoid Moiré patterns), but it's all just compromise, and only really works because for things like 3D games the eye doesn't have enough time to notice the messed up/low quality parts.
For resolution independence, you have vector graphics. Why bother converting bitmaps into triangles? It's never gonna scale well either way.
Give me 4 weeks to continue my research, and I'll show you small bitmaps that are scaled up to huge sizes that make you drool.

Of course converting pixel data into vector graphics won't be suitable for real time, but that part of it can be done well before the resulting vector graphics are actually used.
Owen wrote:With 3D graphics, the biggest compromise is low polygon count; the other is lack of full screen anti-aliasing as it's expensive. Most games today use at least 1024x1024 textures; modern graphics cards can go up to 4096x4096. Anisotropic filtering provides very good results. You're never gonna get perfect from scaling anything which was stored as a bitmap. If you want good scaling, again, start with a vector format.
In theory I agree, and I do intend to start with a vector graphics format where possible, but I don't think everyone on the internet and all the digital camera manufacturers are willing to shift to triangles just yet; so I still need a way to convert (legacy) pixel data into vector graphics (triangles).
Owen wrote:Additionally, graphics cards are rarely better at drawing shaded triangles than at drawing textured ones. Drawing shaded triangles doesn't get optimized because it doesn't get used.
From a video card's perspective, a shaded triangle would be the same as a textured triangle except that the (u, v) texture co-ordinates are used to determine the colour directly instead of being used to look up a texture - the vertex transformations, rasterization and oversampling are all the same (for these stages, they can't optimize textured triangle drawing without also optimizing shaded triangle drawing). The only major difference is that for the "look up the colour" step there are no cache misses or VRAM bandwidth issues involved.

I wouldn't be so sure that shaded triangles aren't used either - the idea of drawing a nice blue ocean with blue monochromatic textures just seems silly.
Owen wrote:For the colour space independence issue? My method is to store colours in whatever format the image was captured in. When drawing an image, the colour space is converted automatically; the graphics server will probably also cache converted images. Screenshots will continue to be bitmaps because that's the only practical method to provide an exact image of what is displayed on the screen.
That works well, until you think about supporting 30 different graphics file formats (where even something simple like BMP has 6 variations and no sane header field to easily decide which variation was used). With only 6 possible colour spaces that adds up to over a thousand different permutations, all of which would need to be supported by every application that dares to draw a 2D image.

Good luck with that. I'll be having a standard file format that's used for everything, where the VFS converts legacy graphics file formats into my native graphics file format automatically (so that all applications and drivers never need to care about such an ugly mess)....
Owen wrote:The important point is that printing and drawing on the screen can be done using exactly the same code.
Sure, the same blitting routine will work for 24-bpp integer data for the first video card, and for 32-bit per channel floating point data for the second video card, and will even do half-toning on CMYK for the old colour printer, and handle 6 primary colours for the new photo quality printer; and it'll all work flawlessly regardless of how colours are represented in the source data (just with a quick "rep movsd"?).

Good luck with that too. My device drivers will be converting data from one and only one standard format, into whatever "device specific" format/s the device itself wants, and getting "as perfect as possible" results without unnecessarily messed up colours (or anti-aliasing issues, or scaling problems) every time. At least that's the idea...


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
User avatar
bewing
Member
Posts: 1401
Joined: Wed Feb 07, 2007 1:45 pm
Location: Eugene, OR, US

Re: RGB to VGA

Post by bewing »

Just ONE format? I agree with you in theory that it would be preferable, but what about compression and animation? I've always been telling myself that at the kernel level it would be necessary to support 2 at minimum. One for the static full-size bitmaps, and one for everything else. And choosing the format for the second was probably going to be even harder than the technical details of the first.
User avatar
Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance
Contact:

Re: RGB to VGA

Post by Combuster »

Owen wrote:Additionally, graphics cards are rarely better at drawing shaded triangles than at drawing textured ones. Drawing shaded triangles doesn't get optimized because it doesn't get used.
Actually, it *is* used as part of the lighting process, whether done by software calling glColor or by hardware filling out that field for you. Software assigns light values to each corner of a triangle, then renders that to give a nice shaded texture. More advanced software would use a second greyscale texture to model the amount of light actually hitting the surface. If it can hardly be optimised further, that's only because it costs exactly one multiplication per color component per pixel. Texture mapping is far more complex, and thus far more susceptible to people trying clever tricks on it.

That means software can change the vertex colors to change the properties of the light, without having to care about the occlusion.

Even when using pixel shaders, the same class of operations is performed, albeit in a different form. The result is still that a color gets interpolated, then multiplied with the texture, which is exactly what vertex colors did, only with the interpolation being predefined as linear.
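Per pixel it boils down to something like this (sketch only; w0, w1 and w2 are the barycentric weights from the rasteriser and sum to 1):

Code:
typedef struct { float r, g, b; } rgb_t;

/* Interpolate the three vertex colors with the barycentric weights, then
   modulate by the texel. For an untextured shaded triangle the texel is
   simply white, which is the "one multiplication per component" case. */
static rgb_t shade_pixel(rgb_t c0, rgb_t c1, rgb_t c2,
                         float w0, float w1, float w2, rgb_t texel)
{
    rgb_t out;
    out.r = (w0 * c0.r + w1 * c1.r + w2 * c2.r) * texel.r;
    out.g = (w0 * c0.g + w1 * c1.g + w2 * c2.g) * texel.g;
    out.b = (w0 * c0.b + w1 * c1.b + w2 * c2.b) * texel.b;
    return out;
}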
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
User avatar
Owen
Member
Posts: 1700
Joined: Fri Jun 13, 2008 3:21 pm
Location: Cambridge, United Kingdom
Contact:

Re: RGB to VGA

Post by Owen »

Combuster wrote:
Owen wrote:Additionally, graphics cards are rarely better at drawing shaded triangles than at drawing textured ones. Drawing shaded triangles doesn't get optimized because it doesn't get used.
Actually, it *is* used as part of the lighting process, whether done by software calling glColor or by hardware filling out that field for you. Software assigns light values to each corner of a triangle, then renders that to give a nice shaded texture. More advanced software would use a second greyscale texture to model the amount of light actually hitting the surface. If it can hardly be optimised further, that's only because it costs exactly one multiplication per color component per pixel. Texture mapping is far more complex, and thus far more susceptible to people trying clever tricks on it.

That means software can change the vertex colors to change the properties of the light, without having to care about the occlusion.

Even when using pixel shaders, the same class of operations is performed, albeit in a different form. The result is still that a color gets interpolated, then multiplied with the texture, which is exactly what vertex colors did, only with the interpolation being predefined as linear.
What I should have said is that graphics cards are so optimized at drawing from textures that they rarely have texture cache misses [high predictability helps].

They are not optimized at drawing thousands of triangles each with 96 bits of colour data. The vertex RAM path is smaller than the texture RAM path. That colour data is a whopping 24 times the average size for a texel - 4 bits thanks to S3TC.

And if the rendering involves OpenGL immediate mode rendering (glBegin/glVertex/glColor/glTexCoord), you have far bigger problems already in that it's painfully slow and CPU limited. It's not optimized because nothing modern uses it, it's nigh on impossible to optimize in the first place, and it's being dropped from the latest OpenGL standards.