From: Kenneth Galea on
"Kenneth Galea" <k.galea(a)hotmail.com> wrote in message <hki1bk$ee5$1(a)fred.mathworks.com>...
> ImageAnalyst <imageanalyst(a)mailinator.com> wrote in message <43b5892c-04fa-4b61-b1cc-328deeb03719(a)t1g2000vbq.googlegroups.com>...
> > Because I wanted to select objects that had high red values but low
> > green and low blue values. Objects that have high red and low green
> > and blue signal will appear as red in the image. Make sense?
> >
> > As a counter example if I wanted greenish things, I'd take reds in the
> > range 0-redThresh, greens in range greenThresh-255, and blues in the
> > range 0-blueThresh.
> >
> > If I wanted blue things I'd take pixels having higher blue values and
> > lower red and green values.
> > Does that make more sense now?
>
> Yes thanks a lot. Now my problem is that when I subtract a white image from another white image, some "salt and pepper" noise is seen. I used medfilt2 but it's still not enough. Then I used bwareaopen, which didn't make any difference even when I used an area of 50 pixels... I don't know why it's not working!! Is there another method for this noise removal??
>
> Thanks
> Kenneth

Hi, sorry to bother you again.
I'm having a bit of the same problem with subtraction. I tried background division as an alternative method, but when I divided the test image by the background the result wasn't even recognisable, so I decided to stick with the subtraction method. I'm ensuring that the illumination is always the same and comes from the same position (I'm not moving the camera), and when I compare a test image (containing an object) with the background image it's perfect. However, when I compare a white image with another white image taken from exactly the same place with the same amount of light, just a few seconds later, there is a noticeable difference: the result is quite a noisy image that sometimes can't be cleaned up by filtering. Why am I still getting this difference? It's really frustrating!! Would buying a camera with a built-in flash help? I'm really clueless at this stage :(

Thanks
Kenneth
From: ImageAnalyst on
Kenneth    
You need to remove noise from your reference (background) image.
Otherwise the noise essentially doubles up because you have noise from
both images and they'll add in quadrature more or less. Imagine that
you had an exposure that was hours long - you wouldn't have any video
noise would you? This is what you want to achieve because this would
represent the "true" illumination pattern. So how can you get that?
Well you could average together a bunch of images to beat down the
noise. Or you could try to run some noise reduction filters on the
image: simple averaging or median filters, more sophisticated methods
like bilateral filters or sigma filters, or even more sophisticated
methods like "non-local means," BM3D (http://www.cs.tut.fi/~foi/GCF-BM3D/),
UINTA (http://www.cs.utah.edu/~suyash/pubs/uinta/),
K-LLD (http://users.soe.ucsc.edu/~milanfar/talks/), or many others.
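
For the frame-averaging approach, here's a minimal MATLAB sketch. The
filenames and the frame count are made up (assumptions), but the idea
is just that averaging N frames knocks the random noise down by
roughly sqrt(N):

numFrames = 16; % more frames = less noise, roughly a sqrt(N) improvement
accumulator = 0;
for k = 1 : numFrames
  % Hypothetical filenames -- substitute however you grab your frames.
  thisFrame = double(imread(sprintf('background%d.jpg', k)));
  accumulator = accumulator + thisFrame;
end
cleanBackground = uint8(accumulator / numFrames);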

Of course more light is better, but you have to have controlled light,
which light from a flash IS NOT. If you do that (heck, even if you
don't), you might want to put a standard in your image, such as a gray
scale step wedge running along one edge. Then you can measure the
wedge and apply the appropriate intensity correction to your test
image to bring it back to the standard.
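
As a sketch of that correction (the wedge location and its reference
gray level here are made-up numbers; use wherever your wedge patch
actually sits and whatever it should read):

% Hypothetical: the wedge patch sits at rows 1-20, cols 1-20 and should
% read gray level 128 under the standard illumination.
wedgePatch = double(testImage(1:20, 1:20, :));
gain = 128 / mean(wedgePatch(:));  % scale factor back to the standard
correctedImage = uint8(double(testImage) * gain);

With a full step wedge you'd fit a curve through all the steps instead
of computing a single gain, but the idea is the same.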

Once you've removed the noise from your background, you'll have only
the noise in your test image to worry about, and you can use the same
kinds of techniques there. How much effort you need to put into noise
reduction really depends on what analysis you want to do. If the
simplest method works for you, then great; otherwise try better ones.
Why don't you post your "good, improved" intensity-corrected images
and I'll see what I can do with them this weekend?
From: ImageAnalyst on
Here's a big list of noise reduction methods, including some with
MATLAB implementations:

http://www.stanford.edu/~slansel/tutorial/software.htm
From: Kenneth Galea on
Hi
I wanted to post my final code first so that you can easily understand what I did:

% Read in the background (reference) image.
background = imread('C:\Users\Kenneth\Desktop\background.jpg');
subplot(3,3, 1);
imshow(background);
set(gcf, 'Position', get(0, 'ScreenSize')); % Maximize figure.
title('Background Image');

testImage = imread('C:\Users\Kenneth\Desktop\test3.jpg');
subplot(3,3, 2);
imshow(testImage);
title('Test Image');

% Extract the color planes as doubles so the differences can go negative.
redPlane_test = double(testImage(:, :, 1));
greenPlane_test = double(testImage(:, :, 2));
bluePlane_test = double(testImage(:, :, 3));

redPlane_background = double(background(:, :, 1));
greenPlane_background = double(background(:, :, 2));
bluePlane_background = double(background(:, :, 3));


% Signed differences: positive where the test image is brighter than
% the background, negative where it is darker.
subtractedImageRed = redPlane_test - redPlane_background;
subtractedImageGreen = greenPlane_test - greenPlane_background;
subtractedImageBlue = bluePlane_test - bluePlane_background;
subplot(3,3, 3);
imshow(subtractedImageRed, [])
title('Subtracted Red');


% Otsu thresholds on the magnitude of the differences. graythresh()
% expects a [0,1] double or an integer image, so convert first; calling
% it on a double with values outside [0,1] just clips them.
redThreshold = graythresh(uint8(abs(subtractedImageRed)))
greenThreshold = graythresh(uint8(abs(subtractedImageGreen)))
blueThreshold = graythresh(uint8(abs(subtractedImageBlue)))

% Scale the normalized levels back to gray levels. Keep them as doubles:
% casting to uint8 would make -redThreshold below saturate to zero.
redThreshold = redThreshold * 255;
greenThreshold = greenThreshold * 255;
blueThreshold = blueThreshold * 255;

% Pixels noticeably darker than the background.
binaryImageRed1 = (subtractedImageRed < -redThreshold);
subplot(3,3, 4);
imshow(binaryImageRed1, []) ;
title('Dark things');
% Pixels noticeably brighter than the background.
binaryImageRed2 = (subtractedImageRed > redThreshold);
subplot(3,3, 5);
imshow(binaryImageRed2, []);
title('Bright things');

% Flag pixels that changed substantially in either direction, per plane.
binaryImageRed = (subtractedImageRed < -redThreshold) | (subtractedImageRed > redThreshold);
binaryImageGreen = (subtractedImageGreen < -greenThreshold) | (subtractedImageGreen > greenThreshold);
binaryImageBlue = (subtractedImageBlue < -blueThreshold) | (subtractedImageBlue > blueThreshold);


% Keep only pixels flagged in all three planes, then invert the mask.
binaryImage = binaryImageRed & binaryImageGreen & binaryImageBlue;
binaryImage = ~binaryImage;
subplot(3,3, 6);
imshow(binaryImage)
title('Both Dark and Bright things');

figure
imshow(binaryImage)
% binaryImage is already logical, so no im2bw() conversion is needed.

% Median filter to knock down the remaining salt-and-pepper noise.
filtered_binaryImage = medfilt2(binaryImage);
figure
imshow(filtered_binaryImage)

Link: http://drop.io/thresholdbackground
The link shows what I was trying to work on. (N.B. the two images are two different background images taken exactly one after the other, with everything else the same; as I said, noise shows up when subtracting, as you can see in the third image, "result".) In my opinion, when objects are present, thresholding worked better when graythresh was used on a gray image instead of on the red, green and blue planes separately. However, when there are no objects, the latter method gave better results, which still need to be improved, as you can see in the link.

Regards
Kenneth
From: ImageAnalyst on
Kenneth
I'm not exactly sure how to respond. With objects in the image and a
blank background you can use a threshold on monochrome images, or use
color classification on color images. With color images, simple color
classification can sometimes be done by thresholding the three color
bands, which essentially selects a rectangular chunk of the 3D RGB
color space. You pick thresholds such that they select the objects
and not the background. If you can do this with an auto threshold,
such as the Otsu method used by graythresh, then fine. However, a
simple auto method (like graythresh) will force a
threshold to be picked that divides the image into foreground and
background. If you're looking at a picture that has no foreground
because it's all background then it will basically split the
background into two classes (light background and dark background),
which is not what you want. That's why I said to use a fixed
threshold. It will always pick out the proper things regardless of
how large or small the foreground objects are. With an autothreshold
method, the threshold that is picked depends on the area fraction of
the objects in the foreground. If they are of substantial size with
respect to the background it may pick a good threshold value but if
the foreground is too small or nonexistent then it will try to pick a
threshold based on just the background which will of course be
unacceptable.
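
For instance, the "rectangular chunk" of RGB space mentioned above,
for reddish objects, could be picked out like this (the bounds are
only illustrative, not tuned to any real image):

% Fixed-band color classification: keep pixels inside a box in RGB space.
% These bounds are just an illustration -- pick them from your own images.
R = rgbImage(:, :, 1);
G = rgbImage(:, :, 2);
B = rgbImage(:, :, 3);
redMask = (R > 150) & (G < 100) & (B < 100);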

In your latest case you had no foreground and so the subtraction is
essentially all just noise. Then the autothreshold picked some gray
level right in the midst of that noise and so it picked out the
strongest noise as foreground objects. Obviously you don't want
that. If you're going to have images that range anywhere from big
objects to small objects to no objects, then you're going to have to
have a threshold selection method that will give you the correct
threshold. I've had cases like this and I can tell you that I use a
fixed threshold. You pick a threshold, and if there's something above
it, it's an object; if there's nothing above it, there are no objects.
It works for any and all object sizes, from an object that is 100% of
the pixels all the way down to 0% of the pixels. It ALWAYS works,
unlike an autothreshold method which only works well if the foreground
and background can be split into two histogram humps (unless of course
the auto method is smart enough to revert to a fixed threshold when it
recognizes that the test image histogram is essentially no different
from the background histogram, but then we're back to the fixed
threshold again.)
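
A fixed-threshold version of the subtraction step could be as simple
as this (the value 30 is an assumption; measure the noise floor of
your blank-minus-blank difference images once and set the threshold
above it):

% Fixed threshold on the magnitude of the difference image.
fixedThreshold = 30;  % placeholder: set above your measured noise floor
differenceImage = double(testImage(:, :, 1)) - double(background(:, :, 1));
objectMask = abs(differenceImage) > fixedThreshold;

It behaves the same whether the scene has zero objects or is covered
by them, which is the whole point.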

So if you want to be completely general I'd stick with a fixed
threshold. But if you're always going to have objects that take up a
substantial portion of the image (like your scissors and pen image)
AND you're going to correct for shading in the background
illumination, then an auto threshold can work. But like I said, it
won't work for the "no objects" case.