From: Lorenzo J. Lucchini on 28 Sep 2005 20:44

Bart van der Wolf wrote:
> "Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message
> news:1aB_e.2000$133.1794(a)tornado.fastwebnet.it...
> SNIP
>> Yeah, it's the Unix emulation layer that Cygwin compiled programs
>> apparently need.
>> I've uploaded it at
>> http://ljl.741.com/cygwin1.dll.gz
>> http://ljl.150m.com/cygwin1.dll.gz
>
> Getting closer, although now it complains about not being able to
> locate cygfftw3-3.dll (presumably the FFT routine library).

Argh! I hoped that at least FFTW was linked statically. Well,
http://ljl.741.com/cygfftw3-3.dll.gz
as well as, just in case,
http://ljl.741.com/cygfftw3_threads-3.dll.gz

>> [snip]
>
> Yes, that's what I was thinking of, interpolation in one direction to
> compensate for the aspect ratio.
>
> SNIP
>>> The noise in the image is single pixel noise (and has a bit of
>>> hardware calibration striping).
>>
>> What is single pixel noise -- or, is it "single pixel" as opposed to
>> what?
>
> No, the edge seems to be NN interpolated, but the noise is not (there
> are single (as opposed to 1x2 pixel) variations).

Oh, but I hadn't interpolated *up*, I had resized *down*! I've resized
the 4800dpi direction to 2400dpi. Anyway, this doesn't matter.

> I'll wait for the next tarball, before making further benchmark tests.

It's there. As you might suspect, at
http://ljl.741.com/slantededge-alpha3.tar.gz

I haven't included the libraries, as they're a bit big and my server
doesn't let me upload more than 1 meg or so.

This time I'm positive there is no gamma and no color correction. I've
also used a brand new edge, which if I'm not mistaken is really a razor
blade this time! (No, I don't really know what a razor blade looks
like. So sue me :-) Anyway, it seems quite sharp, probably sharper than
the cutter I was using before.
There is still something I don't quite understand: the program now
reports that the 10%-90% rise is about 6 pixels, while it was about 4-5
before (and Imatest agreed). I don't think this is because of the edge,
but rather because I'm no longer normalizing the image -- so the 10%
and 90% levels "take longer" to be reached. Should these 10% and 90%
positions be fixed as if the image were normalized?

by LjL
ljlbox(a)tiscali.it
From: stevenj on 28 Sep 2005 23:42

Lorenzo J. Lucchini wrote:
>> Depending on the library implementation, for complex numbers, Abs[z]
>> gives the modulus |z|.
>
> No, there isn't such a function in FFTW.

FFTW is not a complex-number library. You can easily take the absolute
value of its complex numbers yourself by sqrt(re*re + im*im), or you
can use the standard C complex math library (or the C++ library via a
typecast).

> However, there are functions to
> directly obtain a real-to-real FFT; I probably should look at them,
> although I'm not sure if the real data they output are the moduli or
> simply the real parts of the transform's output.

Neither. The real-to-real interface is primarily for transforms of
real-even or real-odd data (i.e. DCTs and DSTs), which can be expressed
via purely real outputs. They are also for transforms of real data
where the outputs are packed into a real array of the same length, but
the outputs in this case are still complex numbers (just stored in a
different format).

Cordially,
Steven G. Johnson
From: Ole-Hjalmar Kristensen on 29 Sep 2005 04:47

Here is an implementation of Glassman's method of FFT, which will work
for any N, not just powers of two. If N is not a power of two, it
degenerates to a DFT. The code is lifted from PAL (Public Ada Library),
if I remember correctly, and I do not think there are any restrictions
on it. You will have to convert it to C of course, but the code is
pretty obvious even if you don't know Ada. Just beware that unlike C,
Ada allows nested subroutines, and that arrays do not necessarily start
with index 0...

with Ada.Numerics.Short_Complex_Types;
use Ada.Numerics.Short_Complex_Types;

package FFT_Pack is
   type Complex_Vector is array (Integer range <>) of Complex;

   procedure Debug (X : Complex_Vector);
   procedure FFT (FFT_Data          : in out Complex_Vector;
                  Inverse_Transform : in boolean);
end FFT_Pack;

with Ada.Numerics.Short_Elementary_Functions;
use Ada.Numerics.Short_Elementary_Functions;
with Ada.Text_Io;
use Ada.Text_Io;

package body FFT_Pack is

   procedure Debug (X : Complex_Vector) is
   begin
      for I in X'Range loop
         Put_Line (Integer'Image (I) & " : "
                   & Short_Float'Image (X (I).Re) & " "
                   & Short_Float'Image (X (I).Im));
      end loop;
   end Debug;

   procedure Glassman (A, B, C           : in integer;
                       Data_Vector       : in out Complex_Vector;
                       Inverse_Transform : in boolean) is
      Temp     : Complex_Vector (1 .. Data_Vector'length);
      Counter  : integer := Data_Vector'first;
      JC       : integer := 0;
      Two_Pi   : constant short_float := 6.28318530717958;
      Del, Omega, Sum : Complex;
      Angle    : short_float;
      C_Plus_1 : integer := C + 1;
   begin
      Temp (Temp'Range) := Data_Vector;
      Angle := Two_Pi / (short_float (A * C));
      Del   := (Cos (Angle), (-(Sin (Angle))));
      if (Inverse_Transform) then
         Del := Conjugate (Del);
      end if;
      Omega := (1.0, 0.0);

      for IC in 1 .. C loop
         for IA in 1 .. A loop
            for IB in 1 .. B loop
               Sum := Temp ((((IA - 1) * C + (C - 1)) * B) + IB);
               for JCR in 2 .. C loop
                  JC := C_Plus_1 - JCR; -- no need to add C + 1 each
                                        -- time through loop
                  Sum := Temp ((((IA - 1) * C + (JC - 1)) * B) + IB)
                         + (Omega * Sum);
               end loop; -- JCR
               Data_Vector (Counter) := Sum;
               Counter := Counter + 1;
            end loop; -- IB
            Omega := Del * Omega;
         end loop; -- IA
      end loop; -- IC
   end Glassman;

   procedure FFT (FFT_Data          : in out Complex_Vector;
                  Inverse_Transform : in boolean) is
      A : integer := 1;
      B : integer := FFT_Data'length;
      C : integer := 1;
   begin -- FFT
      while (B > 1) loop   -- define the integers A, B, and C
         A := C * A;       -- such that A * B * C = FFT_Data'length
         C := 2;
         while (B mod C) /= 0 loop
            C := C + 1;
         end loop;
         B := B / C;       -- B = 1 causes exit from while loop
         Glassman (A, B, C, FFT_Data, Inverse_Transform);
      end loop;

      if Inverse_Transform then -- optional 1/N scaling for inverse
                                -- transform only
         for I in FFT_Data'range loop
            FFT_Data (I) := FFT_Data (I) / short_float (FFT_Data'length);
         end loop;
      end if;
   end FFT;

end FFT_Pack;

-- C++: The power, elegance and simplicity of a hand grenade.
From: Don on 29 Sep 2005 11:53

On Wed, 28 Sep 2005 18:29:13 +0200, "Lorenzo J. Lucchini"
<ljlbox(a)tiscali.it> wrote:

>But all this made me wonder about something else: would it make any
>sense to compare the edge *position* of each (red, green and blue)
>channel with the edge position in the luminance channel?

That would be a good test. My point is that there's misalignment
between individual channels, so combining them (using *any* method)
reduces accuracy because the result is a (fuzzy) mix of all of them. By
doing channels individually you should get more accurate measurements.

In other words (and all other things being equal) I would expect
individual channels to differ much less in relation to each other than
any one of them will differ in relation to the combined luminance or
average values.

>I mean, SFRWin gives "red", "blue" and "green" color offsets (for
>measuring "color fringing"), but the "green" offset is always zero, as
>the other two channels are compared to green.

I would do a full permutation and compare all to each other.

Anyway, have fun! :o) And let us know how it turns out!

Don.
From: Don on 29 Sep 2005 11:53
On Wed, 28 Sep 2005 15:28:36 +0200, "Lorenzo J. Lucchini"
<ljlbox(a)tiscali.it> wrote:

>> I've only been skimming this thread and not paying too much attention
>> because the MTF of my scanner is what it is and there's nothing I can
>> do about it... So, with that in mind...
>
>Well, but I don't plan to stop here. What I'd mostly like to obtain
>from all this is a way to "sharpen" my scans using a function that is
>tailored to my scanner's specifics (rather than just "unsharp mask as
>I see fit").
>
>So, you see, there is a practical application of the measurements I
>can obtain; it's not just about knowing how poor the scanner's
>resolution is.

I agree with that in principle. However, in practical terms I think
that starting with an 8-bit image there's only so much accuracy you can
achieve. I strongly suspect (but don't know for a fact) that you will
not be able to show a demonstrable difference between any custom
sharpening and just applying unsharp mask at 8-bit depth.

I think you can improve the sharpness considerably more (even at 8-bit
depth) by simply aligning individual channels to each other.

>And why do you say I'm measuring the "objective values" of the pixels
>instead of their "perceptual values"? I'm mostly trying to measure
>resolution, in the form of the MTF.

Because it's all based on those gray pixels which are created because
the scanner can't resolve that border area. So it's much better to read
the actual values of those gray pixels rather than take either an
average or luminance value.

If the three RGB channels are not perfectly aligned (and they never
are!) then combining them in any way will introduce a level of
inaccuracy (fuzziness). In the case of luminance that inaccuracy will
also have a green bias, while the average will be more even - which is
why I said that your original idea to use the average seems like the
"lesser evil" compared to the skewed and green-biased luminance values.
>So you see that I'm *already* doing measurements that are inherently
>"perceptual". So why not be coherent and keep this in mind throughout
>the process?

Because perception is subjective. When there is no other way, then yes,
use perception. But since you already have the values of those gray
pixels, it just seems much more accurate to use those values.

>> Actually, what I would do is measure each channel *separately*!
>
>... I'm doing this already.
>The "gray" channel is measured *in addition* to the other three
>channels, and is merely a convenience.

That's good. So displaying individual results should be easy.

Don.