From: Gus Lott on
So I've got some DCAM cameras streaming video into Matlab via the IMAQ toolbox. I've got an object in the field of view, and I'll attach a number of retroreflective motion-capture dots to it in a rigid, known geometry.

What I'd like to do is calculate the orientation angles (azimuth and elevation, i.e. pan/tilt) and position of the known constellation of points in my video.

So I know that if you start with the points of the geometry at zero rotation and some arbitrary origin, the current pose of the object is defined by a 3D rotation and translation. The transformed points are then projected onto the camera image, reducing the dimensionality to 2.
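A minimal sketch of that forward model (a pinhole projection with an assumed focal length `f`; numpy here, but these few lines translate directly to Matlab):

```python
import numpy as np

def project(points, R, t, f):
    """Rigid transform followed by pinhole projection.
    points: Nx3 model points, R: 3x3 rotation, t: 3-vector, f: focal length."""
    cam = points @ R.T + t               # rotate/translate into camera coordinates
    return f * cam[:, :2] / cam[:, 2:3]  # perspective divide -> Nx2 image points
```

The pose-estimation problem is the inverse of this: given the Nx2 output and the Nx3 model points, recover R and t.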

Assuming that I am sampling above the Nyquist rate for my bright spots, I can do sub-pixel interpolation to get a nice estimate of the absolute position of my points in the frame.
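One common sub-pixel estimator for bright blobs is the intensity-weighted centroid (a sketch, assuming you've already thresholded the frame into a per-spot mask; not tied to any particular toolbox):

```python
import numpy as np

def subpixel_centroid(img, mask):
    """Intensity-weighted centroid of the pixels selected by mask.
    Returns (row, col) with sub-pixel precision."""
    rows, cols = np.nonzero(mask)
    w = img[rows, cols].astype(float)    # pixel intensities as weights
    return (rows @ w) / w.sum(), (cols @ w) / w.sum()
```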

Can anyone point me toward a method for finding the x/y/z translation and rotation of the rigid frame given the projected points from a single camera (not stereo vision)?

I figure you can set up equations relating the projected u/v points on the image to x/y/z points in the world coordinate system, and you also know the rigid-body equations linking the points in x/y/z. There must be a requirement on the number of dots, and maybe some non-symmetric geometry constraints, to unambiguously decode the pose.
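For what it's worth, this is known in the computer vision literature as the Perspective-n-Point (PnP) problem: three points give up to four pose solutions, four or more non-collinear points generally disambiguate, and an asymmetric dot layout avoids mirror/rotation ambiguities. One classic closed-form route with six or more non-coplanar points is the Direct Linear Transform (DLT), which recovers the full 3x4 projection matrix as the null vector of a linear system. A sketch (numpy again; with known intrinsics you'd then factor the result into K[R|t]):

```python
import numpy as np

def dlt_pose(world, image):
    """Direct Linear Transform: recover the 3x4 projection matrix P
    from >= 6 world<->image correspondences (world Nx3, image Nx2).
    Each correspondence contributes two linear equations in P's 12 entries."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world, image):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    return Vt[-1].reshape(3, 4)   # right singular vector of the smallest singular value
```

In practice you'd refine the DLT estimate with nonlinear least squares, and libraries such as OpenCV ship ready-made PnP solvers (e.g. `solvePnP`), so you likely don't have to code this yourself.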

I wanted to avoid reinventing the wheel if the solution already exists out there in the world. I figure this is frequently done for augmented-reality applications and wanted to leverage other work where possible.

Thanks for any help!