From: David J Taylor on 28 Apr 2010 10:24

> no. it might be slightly better than just letting the printer handle
> it, but do you really think there will be more detail in the upsized
> one?

There is more apparent detail, but whether it is accurate or not is
another matter. In satellite images we do find that providing a 2:1
interpolation (e.g. from 3MP to 12MP) does make fine detail more easily
discerned (than simple pixel replication).

Cheers,
David
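A minimal sketch of the comparison David describes, assuming Pillow is
available: the same frame is upsized 2:1 per axis once by pixel
replication (nearest neighbour) and once by interpolation (bicubic here).
The filenames and the choice of bicubic resampling are illustrative, not
anything specified in the thread.

from PIL import Image

src = Image.open("msg_channel.png")          # e.g. a 3 MP source frame
w, h = src.size

# Pixel replication: each source pixel becomes a 2x2 block of identical values.
replicated = src.resize((w * 2, h * 2), Image.NEAREST)

# Bicubic interpolation: new pixels are weighted blends of their neighbours.
interpolated = src.resize((w * 2, h * 2), Image.BICUBIC)

replicated.save("upsized_replicated.png")
interpolated.save("upsized_bicubic.png")

Viewed side by side, the interpolated output looks more detailed, but as
the post above notes, that extra detail is inferred from neighbouring
samples rather than measured.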
From: David J Taylor on 28 Apr 2010 10:30

"nospam" <nospam(a)nospam.invalid> wrote in message
news:280420100709042417%nospam(a)nospam.invalid...
[]
> it may be a very accurate pixel, but it's still a *single* pixel and it
> differs from foveon because each measurement is independent.
>
> with foveon, the layers are tightly intertwined. unlike the pretty (and
> misleading) pictures in their ads, the layers do *not* measure red,
> green and blue, that's only a result of (here it comes), interpolation.
> in fact, there is more interpolation with foveon than there is with
> bayer which is comical, actually.

Agreed.

[Although I would prefer not to use the term "interpolation" to describe
the 3 x 3 matrix processing that converts the three Foveon sensed values
into three RGB values. Something like "colour correction", perhaps?]

David
From: nospam on 28 Apr 2010 10:36

In article <hr9gpr$p44$1(a)news.eternal-september.org>, David J Taylor
<david-taylor(a)blueyonder.co.uk.invalid> wrote:

> > it may be a very accurate pixel, but it's still a *single* pixel and it
> > differs from foveon because each measurement is independent.
> >
> > with foveon, the layers are tightly intertwined. unlike the pretty (and
> > misleading) pictures in their ads, the layers do *not* measure red,
> > green and blue, that's only a result of (here it comes), interpolation.
> > in fact, there is more interpolation with foveon than there is with
> > bayer which is comical, actually.
>
> Agreed.
>
> [Although I would prefer not to use the term "interpolation" to describe
> the 3 x 3 matrix processing that converts the three Foveon sensed values
> into three RGB values. Something like "colour correction", perhaps?]

well, it *is* interpolating the overlapping spectra to figure out what
the incident colour is (and not all that accurately either).
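A rough sketch of the step David calls "colour correction" and nospam
calls interpolation: the three sensed layer values, whose spectral
responses overlap heavily, are mapped to RGB by a 3 x 3 matrix. The
matrix and the layer readings below are invented purely for illustration;
the actual Foveon coefficients are not given in the thread.

import numpy as np

# Rows give output R, G, B; columns take the sensed top, middle and
# bottom layer values.  The off-diagonal terms are large because the
# layer spectral responses overlap.
M = np.array([
    [ 1.8, -0.9,  0.1],
    [-0.4,  1.6, -0.2],
    [ 0.1, -0.7,  1.6],
])

layers = np.array([0.42, 0.55, 0.30])    # hypothetical layer readings
rgb = M @ layers                         # one matrix multiply per pixel
print(rgb)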
From: Alfred Molon on 28 Apr 2010 11:49

In article <280420100736582849%nospam(a)nospam.invalid>, nospam says...

> In article <hr9gpr$p44$1(a)news.eternal-september.org>, David J Taylor
> <david-taylor(a)blueyonder.co.uk.invalid> wrote:
>
> > > it may be a very accurate pixel, but it's still a *single* pixel and it
> > > differs from foveon because each measurement is independent.
> > >
> > > with foveon, the layers are tightly intertwined. unlike the pretty (and
> > > misleading) pictures in their ads, the layers do *not* measure red,
> > > green and blue, that's only a result of (here it comes), interpolation.
> > > in fact, there is more interpolation with foveon than there is with
> > > bayer which is comical, actually.
> >
> > Agreed.
> >
> > [Although I would prefer not to use the term "interpolation" to describe
> > the 3 x 3 matrix processing that converts the three Foveon sensed values
> > into three RGB values. Something like "colour correction", perhaps?]
>
> well, it *is* interpolating the overlapping spectra to figure out what
> the incident colour is (and not all that accurately either).

It's three separate spectral measurements per pixel, while a Bayer sensor
has only one. Also, Bayer does not measure luminance at the pixel level,
while a full-colour sensor does. In any case we were talking about
*spatial* interpolation, which a Bayer sensor requires to generate the
final image.

-- 
Alfred Molon
------------------------------
Olympus E-series DSLRs and micro 4/3 forum at
http://tech.groups.yahoo.com/group/MyOlympus/
http://myolympus.org/ photo sharing site
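The *spatial* interpolation Alfred refers to fills in values a Bayer
sensor never measured at a given site. A toy sketch, assuming an RGGB
layout and ignoring the image borders for brevity: missing green values
at red and blue sites are estimated by averaging the four neighbouring
green sites. Real demosaicing algorithms are considerably more elaborate.

import numpy as np

def interpolate_green(raw):
    """raw: 2-D array of sensor values in an RGGB Bayer layout."""
    h, w = raw.shape
    green = raw.astype(float)            # copy; green sites keep measured values
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # In RGGB, green sites are where (y + x) is odd.
            if (y + x) % 2 == 0:         # red or blue site: no green measured here
                green[y, x] = (raw[y - 1, x] + raw[y + 1, x] +
                               raw[y, x - 1] + raw[y, x + 1]) / 4.0
    return green

bayer = np.random.randint(0, 4096, size=(8, 8))   # toy 12-bit raw frame
print(interpolate_green(bayer))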
From: Ray Fischer on 28 Apr 2010 13:14
David J Taylor <david-taylor(a)blueyonder.co.uk> wrote:
> "nospam" <nospam(a)nospam.invalid> wrote in message
>> In article <hr8n74$uku$1(a)news.eternal-september.org>, David J Taylor
>> <david-taylor(a)blueyonder.co.uk.invalid> wrote:
>>
>>> One imager I work with has 11 pixels at each spatial coordinate:
>>>
>>> http://www.esa.int/esapub/bulletin/bullet111/chapter4_bul111.pdf
>>>
>>> and there are 3712 x 3712 coordinates in an image.
>>
>> the word pixel appears only once in the entire document and it says
>> 'image data' for 4 visible/near-infrared and 8 infrared channels, not
>> pixels.
>>
>> it's a 13.7 megapixel sensor (3712 x 3712), with each pixel having 12
>> components. it is *not* 165 megapixels.
>
> If you look at the physical sensor, there are (IIRC) three detectors for
> each of 11 channels, so does that make it a 33-pixel sensor?

Once again you confuse "pixel" with the data used to represent the pixel.
Is a single RGB pixel really 24 pixels because it uses 24 bits to
represent the color? That seems to be your argument.

> Where it differs from the Bayer sensor is that at each spatial location,
> 11 different spectral bands are sensed, so that at each location there
> are 11 independent measurements.

Now explain why that is of any relevance at all.

-- 
Ray Fischer
rfischer(a)sonic.net
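The pixel-count question being argued can be put in terms of array
dimensions: an imager with 11 spectral bands at each of 3712 x 3712
locations is still a 3712 x 3712 image; the bands are components of each
pixel, not additional pixels. The numbers below follow the posts above;
everything else is only a placeholder for illustration.

# 3712 x 3712 spatial locations, 11 spectral bands per location
# (figures taken from the thread; no actual image data is needed).
bands = 11
rows = cols = 3712

pixels = rows * cols                 # what a "megapixel" count refers to
measurements = pixels * bands        # individual spectral readings
print(f"pixels: {pixels:,}")         # 13,778,944 -- the ~13.7 MP figure above
print(f"measurements: {measurements:,}")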