From: Don on 4 Oct 2005 11:29

On Mon, 03 Oct 2005 15:49:15 +0200, "Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote:

>But my point is that sharpening algorithms should not necessarily
>produce haloes. I don't have proof -- actually, proof is what I'm hoping
>to obtain if I can make my program work! --, but just note that my
>hypothesis is just that: halos need not necessarily occur.

"Sharpening" by increasing contrast of transition areas always produces halos. It's the basis of the algorithm. You may not perceive them, but they're there.

The optical illusion which makes you perceive this contrast in transition areas as sharpness has a threshold. Step over it and halos become perceptible. Stay below it and they don't.

>By the way - not that it's particularly important, but I don't think the
>"sharpest case possible" is a clean break between black and white, as at
>least *one* gray pixel will be unavoidable, unless you manage to place
>all of your "borders" *exactly* at the point of transition between two
>pixels.

It's theoretically the sharpest case which, as you indicate, can also be achieved in practice sometimes by lining up the image and the sensors.

Anyway, have to keep it short because I'm real busy today. With the new drive I decided to re-organize everything and that takes a lot of time. Haven't scanned anything in 3 days and I'm falling behind my original schedule... Why are days always too short? ;o)

Don.
From: Bart van der Wolf on 5 Oct 2005 08:38

"Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message news:zm0%e.4303$133.1958(a)tornado.fastwebnet.it...
SNIP
> Try with this command line:
>
> slantededge.exe --verbose --csv-esf esf.txt --csv-lsf
> lsf.txt --csv-mtf mtf.txt testedge.ppm

Yep, got it running.

Bart
From: Bart van der Wolf on 6 Oct 2005 08:43

"Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message news:CGa0f.9459$133.3815(a)tornado.fastwebnet.it...
SNIP
> But my point is that sharpening algorithms should not necessarily
> produce haloes. I don't have proof -- actually, proof is what I'm
> hoping to obtain if I can make my program work! --, but just note
> that my hypothesis is just that: halos need not necessarily occur.

That's correct; halo can be avoided while still boosting the MTF at the high spatial frequencies. The boost, however, may not be too spectacular: it is just restoring some of the capture process losses. Some losses are inherent to the sampling process (e.g. area sampling versus point sampling will produce different MTFs from the same subject); maybe it is more accurate to classify those as system characteristics rather than losses.

Sharpening, on the other hand, with further anticipated losses in mind (e.g. printing), should introduce small halos in order to trick human vision, but the halo should not be visible (IOW smaller than the visual resolution). What *is* visible is the contrast boost (without halo) of the spatial frequencies we *can* resolve.

Capture loss restoration can be tackled by using a high-pass filter. Convolving the image with a smallish HP-filter kernel is rather fast and simple to implement. The best result can be achieved if the HP filter is modeled after the PSF.

This is what it can look like on your (odd looking) "testedge.tif":
<http://www.xs4all.nl/~bvdwolf/main/foto/Imatest/testedge.zip>
The Luminance channel was HP-filtered in "Image Analyzer" with a "user defined filter" of 7x7 support. The mean S/N ratio has decreased from 241.9:1 to 136.3:1, while the 10-90% edge rise went from 4.11 to 2.64 pixels. Unfortunately the scan suffers from some CA-like aberration (especially the Red channel is of lower resolution), which may become more visible as well.

Bart
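As a rough illustration of the convolution step described above, a minimal C sketch might look like the following. It assumes an 8-bit, single-channel luminance buffer in row-major order; the 3x3 kernel values are only illustrative and are not the 7x7 PSF-derived filter used in "Image Analyzer".

/* Minimal sketch of high-pass sharpening by convolution.  Border
 * pixels are simply copied unchanged to keep the example short. */

static unsigned char clamp255(double v)
{
    if (v < 0.0)   return 0;
    if (v > 255.0) return 255;
    return (unsigned char)(v + 0.5);
}

/* Convolve src (w x h) with a (2r+1)x(2r+1) kernel into dst. */
void convolve(const unsigned char *src, unsigned char *dst,
              int w, int h, const double *kernel, int r)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            if (x < r || y < r || x >= w - r || y >= h - r) {
                dst[y * w + x] = src[y * w + x];
                continue;
            }
            double sum = 0.0;
            for (int ky = -r; ky <= r; ky++)
                for (int kx = -r; kx <= r; kx++)
                    sum += kernel[(ky + r) * (2 * r + 1) + (kx + r)] *
                           src[(y + ky) * w + (x + kx)];
            dst[y * w + x] = clamp255(sum);
        }
    }
}

/* Example: a simple 3x3 high-pass kernel; the weights sum to 1 so
 * flat areas are left untouched while edges are boosted. */
static const double hp3x3[9] = {
    -0.1, -0.1, -0.1,
    -0.1,  1.8, -0.1,
    -0.1, -0.1, -0.1
};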
From: Lorenzo J. Lucchini on 6 Oct 2005 09:23

Bart van der Wolf wrote:
>
> "Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message
> news:CGa0f.9459$133.3815(a)tornado.fastwebnet.it...
> SNIP
>
> [snip]
>
> This is what it can look like on your (odd looking) "testedge.tif":
> <http://www.xs4all.nl/~bvdwolf/main/foto/Imatest/testedge.zip>
> The Luminance channel was HP-filtered in "Image Analyzer" with a "user
> defined filter" of 7x7 support.

Why is my edge odd-looking? Does it still look like it's not linear gamma? The behavior of my driver is puzzling me more and more, especially with respect to the differences between 8- and 16-bit scans.

> The mean S/N ratio has decreased from 241.9:1 to 136.3:1, while the
> 10-90% edge rise went from 4.11 to 2.64 pixels. Unfortunately the scan
> suffers from some CA-like aberration (especially the Red channel is of
> lower resolution), which may become more visible as well.

Note that I had disabled all color correction; the three channels look much more consistent when my driver's standard color correction coefficients are used.

By the way, I'm experimenting a bit with the Fourier method for reconstructing the PSF that's explained in the book chapter you pointed me to (I mean the part about tomography). I don't think I have much hope with that, though, as interpolation is needed, and interpolation in the frequency domain appears to be a tough thing.

OTOH, I think I've understood the way you're trying to reconstruct the PSF. I'm not sure I like it, since as far as I can understand you're basically assuming the PSF *will be* Gaussian and thus trying to fit a ("3D") Gaussian to it. Now, perhaps the inexactness due to assuming a Gaussian isn't really important (at least with the scanners we're using), but it still worries me a little.

Also, the book says that Gaussian PSFs have Gaussian LSFs with the same parameters -- i.e. that a completely symmetrical Gaussian PSF is the same as any corresponding LSF. Our PSFs are generally not symmetrical, but they *are* near-Gaussian, so what would you think about just considering the two LSFs we have as sections of the PSF?

I think in the end I'll also try implementing your method, even though there is no "automatic solver" in C, so it'll be a little tougher. But perhaps some numerical library can be of help.

by LjL
ljlbox(a)tiscali.it
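One simple way to read "the two LSFs as sections of the PSF" is a separable approximation, where the 2D kernel is built as the normalized outer product of the horizontal and vertical LSFs. The following C sketch only illustrates that idea and is not necessarily what either poster intends; lsf_h, lsf_v and their length n are hypothetical inputs.

/* Sketch of a separable PSF approximation built from the two measured
 * LSFs (horizontal and vertical).  Fills psf (n x n, row-major) with the
 * outer product of the 1D line spread functions, normalized to sum to 1. */
void build_separable_psf(const double *lsf_h, const double *lsf_v,
                         int n, double *psf)
{
    double total = 0.0;
    for (int y = 0; y < n; y++)
        for (int x = 0; x < n; x++) {
            psf[y * n + x] = lsf_v[y] * lsf_h[x];
            total += psf[y * n + x];
        }
    if (total > 0.0)
        for (int i = 0; i < n * n; i++)
            psf[i] /= total;
}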
From: Bart van der Wolf on 6 Oct 2005 13:37
"Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message news:7A91f.15406$133.4116(a)tornado.fastwebnet.it... SNIP > Why is my edge odd-looking? The pixels "seem" to be sampled with different resolution horizontally/vertically. As you said earlier, you were experimenting with oversampling and downsizing, so that may be the reason. BTW, I am using the version that came with the compiled alpha 3. SNIP > OTOH, I think I've understood the way you're trying to reconstruct > the PSF; I'm not sure I like it, since as far as I can understand > you're basically assuming the PSF *will be gaussian* and thus try to > fit a ("3D") gaussian on it. In fact I fit the weighted average of multiple (3) Gaussians. That allows to get a pretty close fit to the shape of the PSF. Although the PSFs of lens, film, and scanner optics+sensor all have different shapes, the combination usually resembles a Gaussian just like in many natural processes. Only defocus produces a distinctively different shape, something that could be added for even better PSF approximation in my model. This is how the ESF of your "testedge" compares to the ESF of my model: http://www.xs4all.nl/~bvdwolf/main/foto/Imatest/ESF.png > Now, perhaps the inexactness due to assuming a gaussian isn't really > important (at least with the scanners we're using), but it still > worries me a little. That's good ;-) I also don't take things for granted without some soul searching (and web research). > Also, the book says that gaussian PSFs have gaussian LSFs with the > same parameters -- i.e. that a completely simmetrical gaussian PSF > is the same as any corresponding LSF. The LSF is the one dimensional integral of the PSF. If the PSF is a Gaussian, then the LSF is also a Gaussian, but with a different shape than a simple cross-section through the maximum of the PSF. The added benefit of a Gaussian is that it produces a separable function, the X and Y dimension can be processed separately after one an other with a smaller (1D) kernel. That will save a lot of processing time with the only drawback being a very slightly less accurate result (due to compounding rounding errors, so negligible if acurate math is used). > Our PSFs are generally not symmetrical, but they *are* > near-gaussian, so what would you think about just considering the > two LSFs we have as sections of the PSF? Yes, we can approximate the true PSF shape by using the information from two orthogonal resolution measurements (with more variability in the "slow scan" direction). I'm going to enhance my (Excel) model, which now works fine for symmetrical PSFs based on a single ESF input (which may in the end turn out to be good enough given the variability in the slow scan dimension). I'll remove some redundant functions (used for double checking), and then try to circumvent some of Excel's shortcomings. > I think in the end I'll also try implementing your method, even > though there is no "automatic solver" in C, so it'll be a little > tougher. But perhaps some numerical library can be of help. My current (second) version (Excel spreadsheet) method tries to find the right weighted mix of 3 ERF() functions (http://www.library.cornell.edu/nr/bookcpdf/c6-2.pdf page 220, if not present in a function library) in order to minimize the sum of squared errors with the sub-sampled ESF. The task is to minimize that error by changing three standard deviation values. 
I haven't analyzed what type of error function this gives, but I guess (I can be wrong in my assumption) that even an iterative approach (although not very efficient) is effective enough, because the calculations are rather simple and should execute quite fast. One could consider (parts of) one of the methods from chapter 10 of the above-mentioned Numerical Recipes book to find the minimum error for one Standard Deviation at a time, then loop through them again for a number of iterations until a certain convergence criterion is met.

This is basically the model which is compared at the original ESF sub-sample coordinates:

ESFmodel =
  ( W1*(1+IF(X<0;-Erf(-X/(SD1*Sqrt(2)));Erf(X/(SD1*Sqrt(2)))))/2
  + W2*(1+IF(X<0;-Erf(-X/(SD2*Sqrt(2)));Erf(X/(SD2*Sqrt(2)))))/2
  + W3*(1+IF(X<0;-Erf(-X/(SD3*Sqrt(2)));Erf(X/(SD3*Sqrt(2)))))/2
  ) / (W1+W2+W3)

That will ultimately give, after optimization (minimizing the error between samples and model), three Standard Deviations (SD1, SD2, SD3) which I then use to populate a two-dimensional kernel with the (W1, W2, W3) weighted average of three symmetrical Gaussians. The population is done with symbolically pre-calculated (with Mathematica) functions that equal the 2D pixel integrals of one quadrant of the kernel. The kernel, being symmetrical, is then completed by copying/mirroring the results to the other quadrants.

That kernel population part is currently too inflexible for my taste, but I needed it to avoid some of the Excel errors with the ERF function. It should be possible to make a more flexible kernel if the ERF function is better implemented.

Bart
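A simplified C sketch of the kernel population step follows. Unlike the approach described above, it merely samples the weighted mix of three symmetrical Gaussians at pixel centers instead of using symbolically pre-calculated per-pixel integrals, so it is only an approximation; all names are illustrative.

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Fill a (2r+1)x(2r+1) kernel (row-major) with the w-weighted average
 * of three circular Gaussians of standard deviations sd[0..2], sampled
 * at pixel centers, then normalize so the kernel sums to 1. */
void build_gaussian_mix_kernel(double *kernel, int r,
                               const double w[3], const double sd[3])
{
    int size = 2 * r + 1;
    double total = 0.0;
    for (int y = -r; y <= r; y++)
        for (int x = -r; x <= r; x++) {
            double v = 0.0;
            for (int i = 0; i < 3; i++) {
                double s2 = 2.0 * sd[i] * sd[i];
                v += w[i] * exp(-(x * x + y * y) / s2) / (M_PI * s2);
            }
            kernel[(y + r) * size + (x + r)] = v;
            total += v;
        }
    for (int i = 0; i < size * size; i++)
        kernel[i] /= total;
}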