Page 3 of 3

Re: RGB to VGA

Posted: Wed Jun 24, 2009 9:41 pm
by Brendan
Hi,
bewing wrote:Just ONE format? I agree with you in theory that it would be preferable, but what about compression and animation? I've always been telling myself that at the kernel level it would be necessary to support 2 at minimum. One for the static full-size bitmaps, and one for everything else. And choosing the format for the second was probably going to be even harder than the technical details of the first.
One standardized native file format for each different purpose, including one for plain text, one for still 2D images, one for still 3D scenes, one for compressed files, one for sounds, one for databases, etc; where all file formats contain a header that simply and unambiguously identifies the file format, and are entirely open formats (e.g. not restricted by patents, etc); and where all specifications can be downloaded from the same place, by anyone, for any reason.

I haven't thought about 2D animation much yet. My first thought is that it'd be good to use morphing, so that you can generate a virtual frame that's part-way between one frame and the next (e.g. so that a 30 frames per second animation can be displayed flicker-free on a monitor with an 80 Hz refresh rate, or so that you can play the animation in slow motion without it becoming "jerky", or at any speed you like, really). Of course morphing could be done by splitting an image up into triangles, then using a table to describe which triangles in "frame n" become which triangles in "frame n+1".
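The per-triangle blending this implies can be sketched in a few lines of C. This is only an illustration of the idea (the struct layout and function names are my own, not part of any actual format): given one pair of triangles already matched up by the correspondence table, linearly interpolate their vertices.

```c
#include <stdio.h>

/* 2D triangle morphing sketch: linearly interpolate matched triangle
   vertices between "frame n" and "frame n+1". The correspondence table
   described above decides which triangle pairs with which; this just
   blends one already-matched pair. */
typedef struct { float x, y; } Vec2;
typedef struct { Vec2 v[3]; } Tri;

Tri morph(Tri a, Tri b, float t)   /* t in [0,1]: 0 = frame n, 1 = frame n+1 */
{
    Tri out;
    for (int i = 0; i < 3; i++) {
        out.v[i].x = a.v[i].x + (b.v[i].x - a.v[i].x) * t;
        out.v[i].y = a.v[i].y + (b.v[i].y - a.v[i].y) * t;
    }
    return out;
}

/* For a 30 fps animation on an 80 Hz monitor, the k-th refresh falls at
   animation time k * 30.0f / 80.0f frames; the fractional part of that
   is the blend factor t for the surrounding pair of frames. */
```

The same `t` machinery gives slow motion for free: scaling the animation clock just produces different fractional positions between the same frame pairs.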

A "scalable triangle image" file (which hasn't become a native file format yet, but includes the standardized header that all of my native file formats share) that describes 12346 solid triangles costs 298056 bytes. Using my current native file format for compressed files, this compresses down to 131260 bytes. Of course this might be very good or it might be very bad, depending on how much detail those triangles describe.
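Those figures work out to just over 24 bytes per triangle (12346 × 24 = 296304 bytes, leaving 1752 bytes for the header and any other overhead). That would be consistent with a record layout something like the following, though the real format isn't given in this thread, so treat every field here as a guess:

```c
#include <stdint.h>

/* Hypothetical 24-byte solid-triangle record -- NOT the actual file
   format, just one layout that matches the ~24 bytes/triangle
   arithmetic above. */
struct tri_record {
    uint16_t x[3];       /* vertex x coordinates, fixed point:   6 bytes */
    uint16_t y[3];       /* vertex y coordinates, fixed point:   6 bytes */
    uint32_t colour[3];  /* per-vertex colour (96 bits in total): 12 bytes */
};
```

Interestingly, 96 bits of colour per triangle is the same figure Owen quotes later in the thread when discussing vertex bandwidth.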


Cheers,

Brendan

Re: RGB to VGA

Posted: Thu Jun 25, 2009 3:59 am
by Combuster
Owen wrote:They are not optimized at drawing thousands of triangles each with 96 bits of colour data. The vertex RAM path is smaller than the texture RAM path. That colour data is a whopping 24 times the average size for a texel - 4 bits thanks to S3TC.
What does not add up here is that each texture gets a u-v(-w?) coordinate from the same pipeline as the color data. The difference is that, for textures, the GPU must calculate the u,v from the vertices, plus lambda and theta for the mipmapping and anisotropic filters, and then look the whole lot up in texture memory.

I don't think pushing a texture for each color pair each frame and writing out 64 bits of u,v beats pushing 96 bits per vertex for just the color, plus all the other things you then don't have to do. That's apart from the fact that vertex shaders used for lighting generally don't have access to texture writes.

Re: RGB to VGA

Posted: Thu Jun 25, 2009 7:36 am
by Owen
Games don't push the same number of actual triangles. Graphics cards are great at discarding (with software help) the triangles you can't see, in batches.

Re: RGB to VGA

Posted: Thu Jul 09, 2009 10:45 am
by Brendan
Hi,
Brendan wrote:Give me 4 weeks to continue my research, and I'll show you small bitmaps that are scaled up to huge sizes that make you drool.
This is taking a lot longer than I'd originally hoped. My plan is to do edge detection (with sub-pixel accuracy), then use the detected edges to split the image into polygons, then break the polygons up into triangles. For example, if there's a square of black pixels and a square of white pixels, with a grey strip between the squares caused by anti-aliasing, then I want to find one edge that splits the grey strip, so that I can end up with 2 pure black triangles and 2 pure white triangles, with no grey (regardless of the rotation of the black and white squares of pixels). My hope is that the "anti-anti-aliasing" that I'm attempting results in a very sharp image when the final triangles are scaled up to large sizes.
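Brendan doesn't publish his edge-detection method in this thread, but one common way to get sub-pixel accuracy (not necessarily his) is to fit a parabola through three neighbouring gradient magnitudes around a local peak; the parabola's vertex gives the edge position as a fractional offset from the centre sample:

```c
#include <stdio.h>

/* Sub-pixel edge refinement by parabolic interpolation: given the
   gradient magnitude at a local peak and at its two neighbours, return
   the offset of the true peak from the centre sample. This is a generic
   textbook trick, not Brendan's actual method. */
float subpixel_offset(float g_prev, float g_peak, float g_next)
{
    float denom = g_prev - 2.0f * g_peak + g_next;
    if (denom >= 0.0f)
        return 0.0f;                    /* not a strict local maximum */
    return 0.5f * (g_prev - g_next) / denom;   /* offset in (-0.5, +0.5) */
}
```

A symmetric peak (e.g. magnitudes 1, 2, 1) yields an offset of exactly zero; an asymmetric one leans toward the larger neighbour, which is what produces edge positions between pixel centres.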

So far I'm only halfway through edge detection - I tried many techniques, didn't like Sobel or Canny, and mostly made up my own method (with lots of trial and error).

Here's the current results of the edge detection (click it to see the unscaled image, if necessary):
  • edge1.png
This image was scaled by a factor of 8 in both directions (so it's possible to see the sub-pixel precision), then made darker (so you can see the marks from edge detection more easily). The marks from edge detection are small coloured lines, where the centre of the line represents the point where the edge is determined to be, the direction of the line shows the direction of the edge, and the length of the line is meaningless. There's one of these marks for each colour component (e.g. a red line for edges detected in the red colour component of the pixels, a blue line for the blue colour component, etc). The marks are superimposed onto the image; and when the marks for each colour component have the same position and direction they give the appearance of a single white line.

Please note that this is generated from raw edge detection data; and it's still not necessarily perfect. Later steps involve averaging the data from each colour component to get a single point with a single direction, and then playing "join the dots" to convert the data into lines.


Cheers,

Brendan

Re: RGB to VGA

Posted: Fri Jul 17, 2009 3:35 pm
by -m32
I guess I did things the "hard" way when I was looking for a method of converting from 24bit colour to 16 colours.... (didn't google, wanted to figure something out on my own)

I found the approximate 24-bit hex values of each colour in the 4-bit colour palette and then gave each byte (channel) in the 24-bit colour a "score" (for lack of a better term) of either L, D, M, or B (Low, Dim, Medium, and Bright).

So, for example, the colour red is represented as 0xFF0000 in hex, which when scored is BLL; bright yellow would be something like 0xAAAA55 (can't remember the exact value), which would then be MMD. I did that for all 16 colours.

Then, figuring that there are 64 possible combinations that can be created from L, D, M, B (4³ across the three channels), I created a lookup table that maps each possible combination to the closest matching 4-bit VGA "index" (again, for lack of a better term).

With that lookup table, I can "score" any 24-bit RGB value, look up the score, and get the closest matching VGA colour.

Don't know if that makes much sense :) But it works quite well... obviously it's just a direct colour mapping, there's no dithering going on at all... maybe this method has been used before, I dunno, I've still not looked it up :P
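A minimal C sketch of the scheme -m32 describes follows. The palette values, thresholds, and nearest-colour rule are my own assumptions; -m32's hand-built table may map some scores differently (e.g. pure red to bright red rather than plain red):

```c
#include <stdio.h>

/* Sketch of -m32's scoring scheme: classify each 8-bit channel as
   Low/Dim/Medium/Bright, giving 4*4*4 = 64 score combinations, then
   map each combination to the nearest 16-colour VGA palette entry via
   a precomputed lookup table. Palette and thresholds are assumptions. */

static const int level_value[4] = { 0x00, 0x55, 0xAA, 0xFF }; /* L, D, M, B */

static const int vga_palette[16][3] = {
    {0x00,0x00,0x00},{0x00,0x00,0xAA},{0x00,0xAA,0x00},{0x00,0xAA,0xAA},
    {0xAA,0x00,0x00},{0xAA,0x00,0xAA},{0xAA,0x55,0x00},{0xAA,0xAA,0xAA},
    {0x55,0x55,0x55},{0x55,0x55,0xFF},{0x55,0xFF,0x55},{0x55,0xFF,0xFF},
    {0xFF,0x55,0x55},{0xFF,0x55,0xFF},{0xFF,0xFF,0x55},{0xFF,0xFF,0xFF}
};

static unsigned char score_table[64];

/* Classify one channel; boundaries are midpoints between the levels. */
static int score(int c) { return c < 0x2B ? 0 : c < 0x80 ? 1 : c < 0xD5 ? 2 : 3; }

static void build_table(void)
{
    for (int r = 0; r < 4; r++)
        for (int g = 0; g < 4; g++)
            for (int b = 0; b < 4; b++) {
                int best = 0;
                long best_d = -1;
                for (int i = 0; i < 16; i++) {
                    long dr = level_value[r] - vga_palette[i][0];
                    long dg = level_value[g] - vga_palette[i][1];
                    long db = level_value[b] - vga_palette[i][2];
                    long d = dr*dr + dg*dg + db*db;
                    if (best_d < 0 || d < best_d) { best_d = d; best = i; }
                }
                score_table[r*16 + g*4 + b] = (unsigned char)best;
            }
}

/* The fast path: score three channels, one table lookup. */
int rgb24_to_vga(int r, int g, int b)
{
    return score_table[score(r)*16 + score(g)*4 + score(b)];
}
```

The appeal of the approach is that the expensive nearest-colour search happens only 64 times at startup; converting a pixel afterwards is three comparisons per channel and a single table read.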

Re: RGB to VGA

Posted: Sat Aug 08, 2009 8:30 am
by AndrewAPrice
I did some edge detection code in a pixel shader for Media Player Classic (a wonderful free player for Windows that supports Pixel Shader effects). Screenies and code here: http://andrewalexanderprice.com/other/c ... index.html

If you look at the hexadecimal colour values for each VGA colour (http://www.igrin.co.nz/petersim/bcolour0.html) you'll notice that you basically have 3 states for each channel (0, 0.5, 1), just that 0.5 and 1 can't be mixed together.

I made a quick HLSL shader in Media Player Classic to demonstrate.

The first image clamps the frame to VGA colours. That is, if any colour component is above 0.5, each component is rounded to 0 or 1; otherwise each component is rounded to 0 or 0.5.

The VGA colour values are really just a bitmap:
bit 1: blue
bit 2: green
bit 3: red
bit 4: on for bright

Knowing this, the shader converts the RGB colour to the VGA colour index (in the range 0-15). I just divided it by 15 to output it to the screen:
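The original HLSL isn't reproduced in the thread, but the same clamp-and-index logic can be sketched in plain C (the function name and exact thresholds are my reading of the description above, not the shader's actual code):

```c
#include <stdio.h>

/* Convert an RGB colour (components in 0..1) to a 4-bit VGA index using
   the rule described above: if any component exceeds 0.5 the colour is
   "bright" (intensity bit set) and channels round to 0 or 1; otherwise
   channels round to 0 or 0.5. Bit layout: 1=blue, 2=green, 4=red, 8=bright. */
unsigned rgb_to_vga_index(float r, float g, float b)
{
    unsigned index = 0;
    float threshold;
    if (r > 0.5f || g > 0.5f || b > 0.5f) {
        index |= 8;            /* bright: channels are either 0 or 1 */
        threshold = 0.5f;
    } else {
        threshold = 0.25f;     /* dim: channels are either 0 or 0.5 */
    }
    if (b > threshold) index |= 1;
    if (g > threshold) index |= 2;
    if (r > threshold) index |= 4;
    return index;              /* divide by 15.0f to display as a grey level */
}
```

Dividing the result by 15 is what makes the second screenshot readable: index 0 renders as black and index 15 as white, with the other colours spread between.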
http://img20.imageshack.us/i/tovgacolour.jpg/

I've squeezed in my HLSL code at the top of each screenshot for reference.

Re: RGB to VGA

Posted: Sat Aug 08, 2009 8:19 pm
by Brendan
Hi,
MessiahAndrw wrote:I did some edge detection code in a pixel shader for Media Player Classic (a wonderful free player for Windows that supports Pixel Shader effects). Screenies and code here: http://andrewalexanderprice.com/other/c ... index.html
I found that simple edge detection was easy, but extremely good edge detection (with sub-pixel accuracy, and "anti anti aliasing") was much harder. My edge detection was only the first of many steps needed to convert an image into the minimum number of triangles; and after about a month of working on it (and getting nothing done on the OS itself) I decided I was being "overly distracted"... ;)


Cheers,

Brendan

Re: RGB to VGA

Posted: Sat Aug 08, 2009 8:51 pm
by AndrewAPrice
Brendan wrote:Hi,
MessiahAndrw wrote:I did some edge detection code in a pixel shader for Media Player Classic (a wonderful free player for Windows that supports Pixel Shader effects). Screenies and code here: http://andrewalexanderprice.com/other/c ... index.html
I found that simple edge detection was easy, but extremely good edge detection (with sub-pixel accuracy, and "anti anti aliasing") was much harder. My edge detection was only the first of many steps needed to convert an image into the minimum number of triangles; and after about a month of working on it (and getting nothing done on the OS itself) I decided I was being "overly distracted"... ;)


Cheers,

Brendan
For a perfect cartoon/outline result you really need object recognition to split the image up (possibly through a highly trained neural network). Imagine two overlapping objects (e.g. coffee mugs) in the camera view, same colour, same material, similar lighting, but one just slightly behind the other. It is difficult for an algorithm to tell whether they are different objects or the same object unless it has a higher understanding of what it's drawing.

Cartoon-shaded games can cheat because the developers can render out information such as depth (to reconstruct the position) or unique object IDs (to distinguish which pixels belong to which object).

Re: RGB to VGA

Posted: Sun Aug 09, 2009 3:02 pm
by Owen
In fact that's how they tend to do edge inking: they edge detect on either the Z or normal buffer (depending on the kind of results you want). Pretty much the same method is used for pre-rendered CGI, since it works brilliantly.
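A minimal sketch of the Z-buffer variant of that inking pass (the function name and threshold are illustrative choices, not a real engine's API):

```c
#include <math.h>
#include <stdio.h>

/* Z-buffer edge inking sketch: a pixel is inked when its depth differs
   sharply from any 4-neighbour. 'depth' is a w*h row-major buffer; the
   threshold is scene-dependent. A real pass would run this per pixel
   (or use the normal buffer instead, for crease edges). */
int ink_pixel(const float *depth, int w, int h, int x, int y, float threshold)
{
    static const int dx[4] = { -1, 1, 0, 0 };
    static const int dy[4] = { 0, 0, -1, 1 };
    float d = depth[y * w + x];
    for (int i = 0; i < 4; i++) {
        int nx = x + dx[i], ny = y + dy[i];
        if (nx < 0 || ny < 0 || nx >= w || ny >= h)
            continue;                 /* skip neighbours off the buffer */
        if (fabsf(depth[ny * w + nx] - d) > threshold)
            return 1;                 /* depth discontinuity: draw ink */
    }
    return 0;
}
```

This is also why it solves the two-coffee-mugs problem from the previous post: even when the colours match exactly, the depth values on either side of the silhouette differ, so the discontinuity test still fires.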