
Triangles (was Functional Programming)

Posted: Sun Feb 26, 2012 2:28 pm
by childOfTechno
Brendan wrote:Did you hear the one about people in glass houses, Mr Child of Techno? From your attitude alone I can guess that I've been programming longer than you've been breathing.
I doubt it.
Brendan wrote:I'm not sure what you found funny about my image scaling research - pixel perfect scaling and rotation isn't a bad result. It's just unfortunate that I ran out of time/patience attempting to get good compression when converting bitmap data while also improving the original image's quality during the conversion. In hindsight I should've done one or the other rather than trying to do both at the same time.
You're reinventing the square wheel. Anybody with pixel pushing experience knows that efficient image scaling and rotation is a solved problem, do some research.

Welcome to the glass house: you rant about people creating pointless programming languages, yet you waste time developing pointless algorithms. You won't be able to beat the compression offered by a good PNG library, and you won't get better visual results than trilinear filtering a stack of mip-maps.

Re: Functional Programming (was Pitfalls of lacking a MMU)

Posted: Sun Feb 26, 2012 3:09 pm
by Brendan
Hi,
childOfTechno wrote:
Brendan wrote:Did you hear the one about people in glass houses, Mr Child of Techno?
I doubt it.
Brendan wrote:I'm not sure what you found funny about my image scaling research - pixel perfect scaling and rotation isn't a bad result. It's just unfortunate that I ran out of time/patience attempting to get good compression when converting bitmap data while also improving the original image's quality during the conversion. In hindsight I should've done one or the other rather than trying to do both at the same time.
You're reinventing the square wheel. Anybody with pixel pushing experience knows that efficient image scaling and rotation is a solved problem, do some research.

Welcome to the glass house: you rant about people creating pointless programming languages, yet you waste time developing pointless algorithms. You won't be able to beat the compression offered by a good PNG library, and you won't get better visual results than trilinear filtering a stack of mip-maps.
I'm sorry - it seems I've made the mistake of assuming you're capable of rational thought. I apologise for my incorrect assumption.

Triangle data is resolution independent, is faster and easier to scale and rotate, gives better quality results, and fits into a "software rendering" pipeline much easier (no texture data to worry about at all). Unfortunately legacy bitmap data can't be avoided, so before adopting the triangle format I'd need a way to convert legacy bitmap data into the triangle format. This conversion is the problem I was researching - simple solutions are fast but create huge files, and complex solutions are excessively slow. I wasn't trying to rotate bitmap data, scale bitmap data or compress bitmap data (I was mostly trying to simplify a software rendering pipeline by getting rid of all bitmap data completely).

PNG does not compress polygon/triangle data and only compresses bitmap data, which is a completely different thing. Trilinear filtering is not pixel perfect (it's just closer to perfect than bilinear).


Cheers,

Brendan

Re: Triangles (was Functional Programming)

Posted: Sun Feb 26, 2012 3:57 pm
by childOfTechno
Brendan wrote:I'm sorry - it seems I've made the mistake of assuming you're capable of rational thought. I apologise for my incorrect assumption.
I'm quite capable, thank you.
Brendan wrote:Triangle data is resolution independent, is faster and easier to scale and rotate, gives better quality results, and fits into a "software rendering" pipeline much easier (no texture data to worry about at all). Unfortunately legacy bitmap data can't be avoided, so before adopting the triangle format I'd need a way to convert legacy bitmap data into the triangle format. This conversion is the problem I was researching - simple solutions are fast but create huge files, and complex solutions are excessively slow. I wasn't trying to rotate bitmap data, scale bitmap data or compress bitmap data (I was mostly trying to simplify a software rendering pipeline by getting rid of all bitmap data completely).
Hmm, building a binary space partition tree of all the triangles in the image and then walking along a scan line would work, you might even be able to anti-alias the result, but what are you planning to do when downsampling an image (when a pixel becomes larger than a single triangle?) You could oversample the result, but it's going to run slower than a sloth on Valium, and the results are not going to be pixel perfect (unless you take a lot of samples.) At any rate, for any image with detail (a photograph) you're going to need a bucket of triangles, a decidedly not-efficient way to describe an image.
Brendan wrote:Trilinear filtering is not pixel perfect (it's just closer to perfect than bilinear).
Actually, for uniform downsampling trilinear filtering is as close to perfect as it gets. Trilinear filtering over a RIP map will deal nicely with downsampling a non-uniform scale. I just can't figure out what you're trying to achieve with all this.

But anyway, back on topic ...

You're a hypocrite: your argument (that there shouldn't be multiple programming languages) applies equally well to operating systems. You're just making things worse by trying to implement a new one. :)

QED.

Re: Triangles (was Functional Programming)

Posted: Sun Feb 26, 2012 7:39 pm
by Brendan
Hi,

An example would've helped my explanation a lot, but I didn't have time earlier. I have time now, so I've created some images to help describe (one simple case of) the "bitmap to triangle conversion" problem.

Here's the original image, a tiny (4 pixels by 4 pixels) bitmap:
Image

It's hard to see, so I blew it up:
Image
Note: The pixels in the corners are almost fully transparent, while those in the middle are almost fully opaque.

The goal is to find the smallest number of triangles that (when rendered) reproduce the original bitmap without any error. In this case there are 2 solutions, both involving 2 triangles. Here is one solution:

Image

You should be able to guess what the other solution is.

Once you've got the image converted into 2 triangles, you should be able to rotate it 20 degrees (counter-clockwise) and scale it up 100 times larger (to 400 pixels by 400 pixels), and end up with this:

Image
Note: You might notice that the image above is only 300 * 300 pixels. This is because all of the pixels around the edges are 100% transparent, and I cropped them off.

Finally, here's a picture of what happens when you scale the original image in GIMP and then rotate it:

Image
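
For anyone wondering about the "triangles back to pixels" direction: it really is the easy half. Here's a rough sketch (plain C, point sampling only, no anti-aliasing, and the triangle data is made up rather than taken from the 4*4 image above) of rasterising per-vertex-coloured triangles with barycentric interpolation - scaling and rotating is just a matter of transforming the vertex positions before sampling:

Code:

#include <stdio.h>

typedef struct { float x, y, r, g, b, a; } Vertex;

/* Twice the signed area of (a, b, p); the sign tells which side of a->b p is on. */
static float edge(const Vertex *a, const Vertex *b, float px, float py)
{
    return (b->x - a->x) * (py - a->y) - (b->y - a->y) * (px - a->x);
}

/* Point-sample one position against a gradient triangle; writes the
   barycentrically interpolated RGBA and returns 1 if the point is covered. */
static int sample(const Vertex *v0, const Vertex *v1, const Vertex *v2,
                  float px, float py, float out[4])
{
    float area = edge(v0, v1, v2->x, v2->y);
    float w0 = edge(v1, v2, px, py) / area;
    float w1 = edge(v2, v0, px, py) / area;
    float w2 = edge(v0, v1, px, py) / area;
    if (w0 < 0 || w1 < 0 || w2 < 0)
        return 0;                                  /* outside the triangle */
    out[0] = w0 * v0->r + w1 * v1->r + w2 * v2->r;
    out[1] = w0 * v0->g + w1 * v1->g + w2 * v2->g;
    out[2] = w0 * v0->b + w1 * v1->b + w2 * v2->b;
    out[3] = w0 * v0->a + w1 * v1->a + w2 * v2->a;
    return 1;
}

int main(void)
{
    /* Two made-up triangles splitting a unit square along its diagonal,
       opaque white at two corners fading to fully transparent at the other
       two (illustrative only - not the 4x4 test image above). */
    Vertex tri[2][3] = {
        { {0,0, 1,1,1,0}, {1,0, 1,1,1,1}, {1,1, 1,1,1,0} },
        { {0,0, 1,1,1,0}, {1,1, 1,1,1,0}, {0,1, 1,1,1,1} },
    };
    const int W = 16, H = 16;                      /* output resolution */
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            float px = (x + 0.5f) / W, py = (y + 0.5f) / H, c[4];
            int hit = sample(&tri[0][0], &tri[0][1], &tri[0][2], px, py, c) ||
                      sample(&tri[1][0], &tri[1][1], &tri[1][2], px, py, c);
            putchar(!hit ? '.' : (c[3] > 0.5f ? '#' : '+'));
        }
        putchar('\n');
    }
    return 0;
}

A real renderer would add anti-aliasing (coverage from multiple samples per pixel, or analytic edge coverage) and fixed-point edge functions, but the structure doesn't change.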


Cheers,

Brendan

Re: Triangles (was Functional Programming)

Posted: Sun Feb 26, 2012 8:29 pm
by Brendan
Hi,
childOfTechno wrote:Hmm, building a binary space partition tree of all the triangles in the image and then walking along a scan line would work, you might even be able to anti-alias the result, but what are you planning to do when downsampling an image (when a pixel becomes larger than a single triangle?) You could oversample the result, but it's going to run slower than a sloth on Valium, and the results are not going to be pixel perfect (unless you take a lot of samples.)
My original description of the problem wasn't very good. The problem is finding where the edges of the triangles should be, which (for the optimal number of triangles) isn't necessarily where a sub-pixel edge detection algorithm would put them (but sub-pixel edge detection is an obvious place to start). For simple images (e.g. a white square on a black background, with a thin line of grey between the white and black caused by anti-aliasing) sub-pixel edge detection works well, but it falls apart once you start looking at, for example, a photo of waves on an ocean.

Note: Don't worry about converting from triangles back into bitmap data - that's solved (including anti-aliasing triangle edges and drawing triangles that are far smaller than a pixel).
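
To be a little more concrete about the "obvious place to start": sub-pixel edge detection begins with plain gradient estimates, then refines the edge position between pixel centres. Here's a rough sketch of just the gradient-magnitude step (Sobel, greyscale, made-up test image - the sub-pixel refinement would then fit a parabola through neighbouring magnitudes along the gradient direction):

Code:

#include <math.h>
#include <stdio.h>

#define W 8
#define H 8

int main(void)
{
    static const int gx[3][3] = { {-1,0,1}, {-2,0,2}, {-1,0,1} };
    static const int gy[3][3] = { {-1,-2,-1}, {0,0,0}, {1,2,1} };
    double img[H][W], mag[H][W] = {{0}};

    /* Toy source: soft vertical edge, dark on the left, light on the right. */
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            img[y][x] = x < 3 ? 0.0 : (x > 4 ? 1.0 : 0.5);

    /* Sobel gradient magnitude for the interior pixels. */
    for (int y = 1; y < H - 1; y++)
        for (int x = 1; x < W - 1; x++) {
            double sx = 0, sy = 0;
            for (int j = -1; j <= 1; j++)
                for (int i = -1; i <= 1; i++) {
                    sx += gx[j + 1][i + 1] * img[y + j][x + i];
                    sy += gy[j + 1][i + 1] * img[y + j][x + i];
                }
            mag[y][x] = sqrt(sx * sx + sy * sy);
        }

    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++)
            printf("%5.2f", mag[y][x]);
        putchar('\n');
    }
    return 0;
}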
childOfTechno wrote:At any rate, for any image with detail (a photograph) you're going to need a bucket of triangles, a decidedly not-efficient way to describe an image.
That's one of the things I was hoping to find out with my research - to determine the relationship between image quality and file size for detailed images/photos. I suspect that for lossy conversion it wouldn't be too bad (e.g. fewer artefacts than JPEG at similar file sizes), but I wanted to get lossless conversion working right first, as lossy conversion shouldn't be hard to support once you've got lossless conversion working.
childOfTechno wrote:But anyway, back on topic ...

You're a hypocrite: your argument (that there shouldn't be multiple programming languages) applies equally well to operating systems. You're just making things worse by trying to implement a new one. :)
Erm. The topic was changed, so it's off-topic now.

You're right, in that the "ban all new languages" argument (and the corresponding "ban all new OSs" argument) is flawed. Just wait a few minutes while I get in my time machine, and I'll travel back to a time a few hours before you posted the comment above, and create a revised summary that corrects this defect. I'll probably call it something like "Brendan's Law Of Language Adoption".

Ok - I'm back from the past now, and I can confirm that I have called it Brendan's Law Of Language Adoption!


Cheers,

Brendan

Re: Triangles (was Functional Programming)

Posted: Mon Feb 27, 2012 2:16 am
by Owen
childOfTechno wrote:Actually, for uniform downsampling trilinear filtering is as close to perfect as it gets. Trilinear filtering over a RIP map will deal nicely with downsampling a non-uniform scale. I just can't figure out what you're trying to achieve with all this.
Erm, I'm pretty sure that's Lanczos, which is as close to perfect as you can get for some values of perfect (one of the assumptions of Lanczos is that you treat the image as a band-limited version of a continuous function, and don't (ab)use your knowledge of the pixel grid to construct higher-frequency details).
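
For reference, the kernel itself is tiny. A rough sketch of 1D Lanczos-3 resampling (made-up scanline data; 2D scaling just applies it separably, and rotation evaluates a small 2D footprint of the same kernel):

Code:

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define A 3   /* Lanczos-3: kernel support of +/- 3 source samples */

static double sinc(double x)                       /* normalised sinc */
{
    if (x == 0.0) return 1.0;
    x *= M_PI;
    return sin(x) / x;
}

static double lanczos(double x)                    /* sinc(x)*sinc(x/A) for |x| < A */
{
    return fabs(x) < A ? sinc(x) * sinc(x / A) : 0.0;
}

/* Resample a 1D scanline src[0..n-1] at fractional position pos. */
static double resample(const double *src, int n, double pos)
{
    int centre = (int)floor(pos);
    double sum = 0.0, wsum = 0.0;
    for (int i = centre - A + 1; i <= centre + A; i++) {
        if (i < 0 || i >= n) continue;             /* crude edge handling */
        double w = lanczos(pos - i);
        sum  += w * src[i];
        wsum += w;
    }
    return wsum != 0.0 ? sum / wsum : 0.0;         /* renormalise near the edges */
}

int main(void)
{
    /* Made-up 8-sample scanline, upscaled 2x. */
    const double line[8] = { 0, 0, 0.2, 0.9, 1.0, 0.9, 0.2, 0 };
    for (int x = 0; x < 16; x++)
        printf("%5.2f", resample(line, 8, x / 2.0));
    putchar('\n');
    return 0;
}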

Re: Triangles (was Functional Programming)

Posted: Mon Feb 27, 2012 6:46 am
by Love4Boobies
Having some experience on the subject, I thought I'd share my $0.02 and provide some sample images. First of all, linear methods (such as trilinear or Lanczos filtering) cannot eliminate all three types of artifacts simultaneously: ringing, aliasing, and blurring. The best results today are achieved by edge-adaptive methods; perhaps statistical methods as well, but I haven't had time to study any of those---image processing is far from my primary concern.

First, here is my original image:

Image

Next, here it is using Brendan's method*:

Image

Unless Brendan has some ideas on how to improve his implementation, Lanczos* indeed yields better results:

Image

Here it is using the well-known NEDI filter, except I implemented it using an edge-preserving nonlinear iterative image resampling by super resolution method:

Image

Finally, here it is using the FEDI filter (an improvement to MEDI), using a similar implementation to the one above, except it was reconstructed with HRSD and ADWS in FEDI, meaning that I first analyze the samples to pick the best data on which to reconstruct. Depending on the image, I was sometimes able to resample images 5x the size of the original without any visible errors with this method (alas, the process takes a couple of hours for larger images)!

Image

* - Method is suitable for real-time processing. Note that the other methods' running times can be improved for real-time use, but I aimed for maximum quality here.

Re: Triangles (was Functional Programming)

Posted: Mon Feb 27, 2012 7:59 am
by Owen
One thing to note with regard to Lanczos filtering is that its performance is dependent upon the filtering used when producing the source image. In particular, it is dependent upon a band-limited source image (which the source image used here does not appear to be). It has the feature that, for this target domain, it never produces any artifacts; hence, video files are generally passed through a band limiting filter when produced/scaled in order to produce a Lanczos suitable image. On such an image, Lanczos re-sampling always produces a resized image with all the detail present in the original preserved.

Other filters may better "extract" details from the source image, but sometimes those details will be unintended artifacts (e.g. look at the pigeon's eye for the NEDI/FEDI filters).

Note that Lanczos also works very well for rotating images (or doing simultaneous rotation and scaling).

Re: Triangles (was Functional Programming)

Posted: Mon Feb 27, 2012 8:55 am
by childOfTechno
An edge preserving filter looks nicer, but do you want to do this in real time, in software? I suspect the way to go is a plain boring trilinear filter, especially for downsampling an image.

EDIT: actually, as a compression mechanism, a triangle-based approach might work (somewhat like quad-tree encoding.) Just off the top of my head:

You'd need a function that calculates an error sum across a triangle coloured with a gradient (one colour for each vertex) compared to the source image.

Divide the image into two triangles. Set their initial vertex colours from the source image.

Walk down the tree, splitting triangles into two halves at each step. Each step introduces a new vertex and a new vertex colour. If a triangle matches the source image closely (to within some error bound,) stop and mark the node as a leaf. The end result would be a binary space partition tree containing enough information to reconstruct the image.

It'd also be relatively fast: the error function would rarely need to test all the pixels covered by a given triangle, since it could fall through once the error gets too large (which will happen quickly for large triangles covering a lot of detail.)
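
Something like this, very roughly (greyscale only, always splitting the same edge, toy image - just to make the shape of the idea concrete, not a real encoder):

Code:

#include <math.h>
#include <stdio.h>

#define W 16
#define H 16

typedef struct { double x, y, c; } Vert;            /* position + grey value */

static double img[H][W];                             /* source bitmap */

static double edge(Vert a, Vert b, double px, double py)
{
    return (b.x - a.x) * (py - a.y) - (b.y - a.y) * (px - a.x);
}

/* Squared error between the gradient triangle (v0,v1,v2) and the source
   pixels it covers; bails out early once the budget is exceeded. */
static double tri_error(Vert v0, Vert v1, Vert v2, double budget)
{
    double area = edge(v0, v1, v2.x, v2.y), err = 0.0;
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            double px = x + 0.5, py = y + 0.5;
            double w0 = edge(v1, v2, px, py) / area;
            double w1 = edge(v2, v0, px, py) / area;
            double w2 = edge(v0, v1, px, py) / area;
            if (w0 < 0 || w1 < 0 || w2 < 0) continue;         /* not covered */
            double d = w0 * v0.c + w1 * v1.c + w2 * v2.c - img[y][x];
            err += d * d;
            if (err > budget) return err;                     /* early out */
        }
    return err;
}

/* Split the v0-v1 edge at its midpoint and recurse until the gradient fits
   (a smarter version would pick the longest edge instead of always the same one). */
static void encode(Vert v0, Vert v1, Vert v2, double budget, int depth)
{
    if (depth >= 8 || tri_error(v0, v1, v2, budget) <= budget) {
        printf("leaf (%4.1f,%4.1f) (%4.1f,%4.1f) (%4.1f,%4.1f)\n",
               v0.x, v0.y, v1.x, v1.y, v2.x, v2.y);
        return;
    }
    Vert mid = { (v0.x + v1.x) / 2, (v0.y + v1.y) / 2, 0 };
    int mx = (int)mid.x < W ? (int)mid.x : W - 1;
    int my = (int)mid.y < H ? (int)mid.y : H - 1;
    mid.c = img[my][mx];                             /* new vertex colour */
    encode(v0, mid, v2, budget, depth + 1);
    encode(mid, v1, v2, budget, depth + 1);
}

int main(void)
{
    for (int y = 0; y < H; y++)                      /* toy source: soft blob */
        for (int x = 0; x < W; x++)
            img[y][x] = exp(-((x - 8.0) * (x - 8.0) + (y - 8.0) * (y - 8.0)) / 20.0);

    Vert a = {0, 0, img[0][0]},         b = {W, 0, img[0][W - 1]},
         c = {W, H, img[H - 1][W - 1]}, d = {0, H, img[H - 1][0]};
    encode(a, b, c, 0.05, 0);                        /* the image's two */
    encode(a, c, d, 0.05, 0);                        /* root triangles  */
    return 0;
}

Real code would store the tree instead of printing leaves, and pick smarter split lines, but that's the general shape of it.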

I still can't see how any triangle approach is going to work well in any case, simply because pixels are square (or at least rectangular.)

Re: Triangles (was Functional Programming)

Posted: Mon Feb 27, 2012 9:20 am
by Love4Boobies
Owen wrote:One thing to note with regard to Lanczos filtering is that its performance is dependent upon the filtering used when producing the source image. In particular, it is dependent upon a band-limited source image (which the source image used here does not appear to be).
Of course it isn't; bandlimited images are not commonplace. Most interpolations produce optimal results in very specific cases and for very specific benchmarks (whether said performance is measured in terms of RMSE, PSNR, lack of artifacts, or simply overall visual quality). If Lanczos produces lower RMSE than, say, bicubic filtering but still a greater degree of ringing, I cannot be happy with the results.
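
For anyone who hasn't met those metrics, RMSE and PSNR boil down to a few lines; a rough sketch with made-up 8-bit greyscale data:

Code:

#include <math.h>
#include <stdio.h>

#define N 16   /* number of pixels (flattened), 8-bit greyscale */

/* RMSE and PSNR between a reference image and a resampled one.
   PSNR = 20 * log10(MAX / RMSE), with MAX = 255 for 8-bit data. */
int main(void)
{
    /* Made-up data, just to show the arithmetic. */
    unsigned char ref[N]  = { 10, 20, 30, 40, 50, 60, 70, 80,
                              90,100,110,120,130,140,150,160 };
    unsigned char test[N] = { 12, 19, 33, 40, 48, 61, 70, 83,
                              90, 98,113,120,129,142,150,158 };
    double sse = 0.0;
    for (int i = 0; i < N; i++) {
        double d = (double)ref[i] - test[i];
        sse += d * d;
    }
    double rmse = sqrt(sse / N);
    double psnr = 20.0 * log10(255.0 / rmse);
    printf("RMSE = %.3f, PSNR = %.2f dB\n", rmse, psnr);
    return 0;
}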
Owen wrote:It has the feature that, for this target domain, it never produces any artifacts; hence, video files are generally passed through a band limiting filter when produced/scaled in order to produce a Lanczos suitable image. On such an image, Lanczos re-sampling always produces a resized image with all the detail present in the original preserved.
Okay, but what you're really resampling is an image distorted by a bandlimiting function. Except for some rare cases, such as an image that has been previously downsampled in a crude manner, that's not really desired. Moreover, a better term for such a process may perhaps be image editing. As for videos, there are alternatives, including Fast EDI (which requires half the computation power of NEDI and still yields better results---not as good as FEDI's, though).
Owen wrote:Other filters may better "extract" details from the source image, but sometimes those details will be unintended artifacts (e.g. look at the pigeon's eye for the NEDI/FEDI filters).
While I'm not sure that I see any artifacts in the pigeon's eyes for those two particular images, I obviously agree that artifacts are a reality of these algorithms. However, look at the ringing around the pigeon's beak when Lanczos interpolation is used; artifacts are a reality of the trade.
childOfTechno wrote:An edge preserving filter looks nicer, but do you want to do this in real time, in software? I suspect the way to go is a plain boring trilinear filter, especially for downsampling an image.
Perhaps one day. But Brendan made a good point: If you want perfect results and still good performance, you may want to look into vectorization. Alas, I'm not aware of any research on vectorizers that are as good as our best resamplers.

Re: Triangles (was Functional Programming)

Posted: Mon Feb 27, 2012 2:06 pm
by Brendan
Hi,
childOfTechno wrote:EDIT: actually, as a compression mechanism, a triangle based approach might work (somewhat like quad-tree encoding.) Just off the top of my head:

You'd need a function that calculates an error sum across a triangle coloured with a gradient (one colour for each vertex) compared to the source image.

Divide the image in to two triangles. Set their initial vertex colours from the source image.
Arbitrary splitting isn't going to lead to the fewest triangles.

One idea I had was to have an "assumed colour" for each vertex in a polygon (where the assumed colour is the original bitmap's colour at that vertex's position); and use these assumed colours to generate test gradients. Then find the "line of highest error" (highest difference between the original colour data and the test gradients), and split the polygon along this line. If there is no line of highest error, then find the vertex that when sliced off of the polygon (via a line between its neighbouring vertices) would create the largest triangle.

Each time you split a polygon you create 2 polygons (which are guaranteed to have 3 or 4 sides/vertices, and guaranteed to be convex). If any of these new polygons has only 3 sides you try to determine if there are colours you could assign to each vertex such that the highest error is either zero (for lossless conversion) or below some error factor (for lossy conversion); if so, you've found a triangle for the final solution.

Otherwise, (for any polygon that wasn't found to be an acceptable triangle) you go back to the first step and try to split it again.

Note: For 3-sided polygons, the results from the "see if it's acceptable for the final solution" step could be used to determine the "line of highest error".

I don't think this would be fast. Each time you split a polygon you'd need to split the bitmap data along the same line, such that (for example) a grey pixel that is on the line may become a white pixel in one new polygon and a black pixel in the other new polygon. Because you'd want to keep the bitmap data for each polygon as a regular "(width, height)" rectangle, for the pathological worst case (an image consisting of many parallel diagonal lines) you can double the RAM used for bitmap data each time you split. To minimise RAM usage you'd probably want to split polygons in order from "most likely to be converted to final triangles" to "least likely", in the hope of being able to discard/free bitmap data sooner.

The other thing I should point out is that I do want to use some form of edge detection to split polygons; so that when the final triangles are scaled up the edges remain sharp and you don't get images that look blurry (like you do for bilinear scaling for example). The "line of highest error" idea should achieve this while also helping to minimise the number of triangles.
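
To make the "line of highest error" idea a little more concrete, here's a rough sketch for the simplest case (a rectangular polygon, greyscale, a bilinear test gradient from the corner colours, and only left-edge-to-right-edge candidate lines - a real implementation would consider arbitrary boundary-to-boundary lines):

Code:

#include <math.h>
#include <stdio.h>

#define W 8
#define H 8

static double img[H][W];      /* source bitmap */
static double errmap[H][W];   /* |test gradient - source| per pixel */

/* Bilinear test gradient from the four corner ("assumed") colours. */
static double gradient(double c00, double c10, double c01, double c11,
                       double u, double v)
{
    return c00 * (1 - u) * (1 - v) + c10 * u * (1 - v)
         + c01 * (1 - u) * v       + c11 * u * v;
}

/* Sum errmap[] along the segment (x0,y0)-(x1,y1), counting pixels whose
   centre lies within half a pixel of the line. */
static double line_error(double x0, double y0, double x1, double y1)
{
    double dx = x1 - x0, dy = y1 - y0, len = sqrt(dx * dx + dy * dy);
    double sum = 0.0;
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            double px = x + 0.5, py = y + 0.5;
            double t = ((px - x0) * dx + (py - y0) * dy) / (len * len);
            if (t < 0 || t > 1) continue;                 /* off the segment */
            double d = fabs((px - x0) * dy - (py - y0) * dx) / len;
            if (d <= 0.5) sum += errmap[y][x];
        }
    return sum;
}

int main(void)
{
    /* Toy source: a hard diagonal edge, dark on one side, light on the other. */
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            img[y][x] = (x + y > W) ? 1.0 : 0.1;

    double c00 = img[0][0], c10 = img[0][W - 1],
           c01 = img[H - 1][0], c11 = img[H - 1][W - 1];
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            errmap[y][x] = fabs(img[y][x] -
                gradient(c00, c10, c01, c11, (x + 0.5) / W, (y + 0.5) / H));

    /* Candidate lines: one point on the left edge to one point on the right edge. */
    double best = -1.0; int by0 = 0, by1 = 0;
    for (int y0 = 0; y0 <= H; y0++)
        for (int y1 = 0; y1 <= H; y1++) {
            double e = line_error(0, y0, W, y1);
            if (e > best) { best = e; by0 = y0; by1 = y1; }
        }
    printf("split from (0,%d) to (%d,%d), error %.3f\n", by0, W, by1, best);
    return 0;
}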
childOfTechno wrote:I still can't see how any triangle approach is going to work well in any case, simply because pixels are square (or at least rectangular.)
Pixels in bitmap data are only square for the simple cases. Rotate a bitmap 45 degrees and you get diamond-shaped pixels. Do some perspective projection and you might end up with diamond-shaped pixels at different sizes (smaller diamonds in the distance). I'm not interested in the simple cases ("2D graphics" is just 3D graphics where textures happen to be parallel to the viewport by accident, not by design).


Cheers,

Brendan

Re: Triangles (was Functional Programming)

Posted: Mon Feb 27, 2012 2:44 pm
by Owen
Brendan wrote:
childOfTechno wrote:I still can't see how any triangle approach is going to work well in any case, simply because pixels are square (or at least rectangular.)
Pixels in bitmap data are only square for the simple cases. Rotate a bitmap 45 degrees and you get diamond-shaped pixels. Do some perspective projection and you might end up with diamond-shaped pixels at different sizes (smaller diamonds in the distance). I'm not interested in the simple cases ("2D graphics" is just 3D graphics where textures happen to be parallel to the viewport by accident, not by design).
This kind of assumption is where the TV/Movie and PC worlds collide, and why video is band limited: The TV and video people take pixels to be `samples`, rather than "squares of colour". This band limiting is what makes Lanczos re-sampling ideal for such media.

I don't get what benefit merging vector and raster graphics into one representation gives you considering there are very good resampling algorithms for raster graphics anyway.

Re: Triangles (was Functional Programming)

Posted: Mon Feb 27, 2012 7:36 pm
by Brendan
Hi,
Owen wrote:I don't get what benefit merging vector and raster graphics into one representation gives you considering there are very good resampling algorithms for raster graphics anyway.
I don't see where you get "merging" from.

What I think I want is a system where normal applications never see or use any bitmap data for any reason; where triangles are the lowest level graphics primitive (not pixels or texels). Unfortunately, I couldn't expect the entire Internet to shift to triangles and also couldn't expect hardware devices (scanners, digital cameras, etc) to adopt triangles either. Therefore I have to be able to convert legacy bitmap data into triangles (for compatibility purposes). I also need to be able to render triangles (or, convert triangles back into bitmap data) within things like video drivers and printer drivers.

Of course "what I think I want" implies that I don't actually know if I want this or not. I need to do research to determine if this approach is practical or not (how bad the problems are and how good the advantages are).

I'm not worried about the rendering - it's a solved problem. The conversion from bitmap data into triangle data isn't so simple though, and that's the part that I need before I can determine if I actually want what I think I want. ;)


Cheers,

Brendan