From: Listed Unlisted on 5 Apr 2010 15:53

I am designing a system in which a robot picks up a tool once the camera sends a picture of that tool to its memory. For this I need to do pattern recognition in MATLAB, and I have two tools to recognize.

My first question: what kind of webcam should I use? Should I use the built-in 1.3 MP webcam, or a different one that I can connect to the computer?

I know I need to convert my images to grayscale. Here's the code I have so far:

vid = videoinput('winvideo');
set(vid, 'ReturnedColorspace', 'rgb');
g = getsnapshot(vid);
figure(1);
%subplot(2,1,1);
imshow(g);
gGray = rgb2gray(g);

% Template of the tool to find (make it 2-D so normxcorr2 will accept it)
I2 = imread('pattern.jpg');
if size(I2, 3) == 3
    I2 = rgb2gray(I2);
end
%subplot(2,1,2);
imshow(I2);

for i = 1:5
    g = getsnapshot(vid);
    gGray = rgb2gray(g);
    i1 = I2;        % template
    i2 = gGray;     % current frame
    s = size(i1);
    c = normxcorr2(i1, i2);                       % normalized cross-correlation
    [max_c, imax] = max(abs(c(:)));
    [ypeak, xpeak] = ind2sub(size(c), imax(1));
    corr_offset = [(xpeak - s(2)) (ypeak - s(1))] + 1;   % top-left corner of the match
    figure(2); imshow(i2); hold on;
    rectangle('Position', [corr_offset(1) corr_offset(2) s(2) s(1)]);
    gg = [corr_offset(1) corr_offset(2) s(2) s(1)];
end
delete(vid);   % release the camera

It's not recognizing the correct image. Any suggestions? Can anyone help me with this?
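Since I have two tools, I think what I eventually need is something along these lines: one template per tool, and whichever gives the stronger correlation peak wins. This is not tested; the file names 'tool1.jpg'/'tool2.jpg' and the 0.6 threshold are just placeholders:

% Templates for the two tools, cropped smaller than the camera frame
% ('tool1.jpg' / 'tool2.jpg' are placeholder file names)
t1 = imread('tool1.jpg'); if size(t1, 3) == 3, t1 = rgb2gray(t1); end
t2 = imread('tool2.jpg'); if size(t2, 3) == 3, t2 = rgb2gray(t2); end
templates = {t1, t2};

vid   = videoinput('winvideo');
scene = rgb2gray(getsnapshot(vid));

bestPeak = 0; bestTool = 0;
for k = 1:numel(templates)
    c    = normxcorr2(templates{k}, scene);   % normalized cross-correlation
    peak = max(c(:));                         % strongest response for this template
    if peak > bestPeak
        bestPeak = peak;
        bestTool = k;
    end
end

if bestPeak > 0.6   % placeholder acceptance threshold -- needs tuning
    fprintf('Detected tool %d (peak correlation %.2f)\n', bestTool, bestPeak);
else
    fprintf('No confident match (best peak %.2f)\n', bestPeak);
end
delete(vid);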
From: Fabian Siddiqi on 5 Apr 2010 16:32

Hey, I would recommend using SIFT descriptors for the recognition. There is an open-source implementation (www.VLFEAT.org) with a MATLAB interface, so it's really simple to use.

Read up on SIFT (Scale-Invariant Feature Transform). When you apply SIFT to an image, it extracts "interesting" features and describes each one with a 128-dimensional vector. So what you can do is apply the transform to images of the objects you want to recognize and save those descriptors as a database. Then, when your robot is doing its thing and collecting pictures, apply the transform to each picture: if the descriptors from the image you just took match the descriptors from the "training images", then you have a match and you're set: you know that the object is in front of the robot!

What's good about SIFT is that, as its name indicates, the generated descriptors are invariant to scale (and to rotation as well), although now that I think about it, that might be a problem for you...

If this looks like something that might be useful, I can post a more detailed description of how you'd go about doing this. You can read more about SIFT here:
http://www.cs.ubc.ca/~lowe/papers/ijcv04.pdf
http://en.wikipedia.org/wiki/Scale-invariant_feature_transform
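Off the top of my head, the matching step with VLFeat would look roughly like this (untested; the vl_setup path, the training image file name, and the match-count threshold of 10 are placeholders I made up, not anything dictated by VLFeat):

run('vlfeat/toolbox/vl_setup');   % path to your VLFeat install (placeholder)

% Descriptors of a training image of the tool ('tool1.jpg' is a placeholder name)
train = imread('tool1.jpg');
if size(train, 3) == 3, train = rgb2gray(train); end
[fTrain, dTrain] = vl_sift(single(train));   % dTrain: 128-by-N descriptor matrix

% Descriptors of the current camera frame
vid   = videoinput('winvideo');
scene = rgb2gray(getsnapshot(vid));
[fScene, dScene] = vl_sift(single(scene));

% vl_ubcmatch pairs up descriptors between the two sets using Lowe's ratio test
[matches, scores] = vl_ubcmatch(dTrain, dScene);

if size(matches, 2) > 10   % "enough" matched features -- tune this for your setup
    disp('Tool is in front of the robot');
else
    disp('Tool not found in this frame');
end
delete(vid);

The nice part is that you only need a handful of training images per tool; the descriptors do the generalizing for you.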