I've been working on georectifying, geolocating, and aligning a pile of aerial images taken from a UAV last week. The results are far from perfect, but some of the edges are actually starting to line up.
One interesting thing to notice … the edges of the image quads are curved because of the camera projection, but the edges inside the image come out straight.
My process is to do an initial projection of the images onto a plane using the flight data (camera location and attitude), then find keypoints in the images, then project the keypoint coordinates from image space into “map” space, then find the matching keypoints between pairs of images. Finally, I have built my own incremental affine-transform solver that attempts to align the keypoints (in map-coordinate space, not pixel-coordinate space).
The result is hopefully a ‘consensus’ placement/alignment of images such that each image is more accurately placed in the end than at the start, and there is some advantage gained from averaging out GPS and attitude-determination errors.
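The final fitting step above can be sketched as a least-squares affine fit over matched keypoint pairs. This is my own minimal one-shot illustration, not the actual incremental solver described in the post; all names and the example data are hypothetical:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src -> dst.

    src, dst: (N, 2) arrays of matched keypoint coordinates in map space.
    Returns a 2x3 matrix A such that dst ~= A @ [x, y, 1].
    """
    n = src.shape[0]
    # Build the design matrix with a homogeneous column of ones.
    X = np.hstack([src, np.ones((n, 1))])
    # Solve X @ A.T = dst in the least-squares sense.
    A_T, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A_T.T

# Example: matched points related by a pure translation of (5, -2),
# so the recovered affine should be identity rotation plus that shift.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src + np.array([5.0, -2.0])
A = fit_affine(src, dst)
```

An incremental solver would apply many small corrections of this form per image while holding its neighbors fixed, rather than solving everything at once.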
I can also use this process to estimate camera shutter latency (0.65 seconds) as well as camera roll, pitch, and yaw alignment errors (-0.40, -1.60, and -3.55 degrees respectively).
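One simple way to estimate a parameter like shutter latency is a 1-D search: shift the image timestamps along the flight track and keep the offset that minimizes the alignment residual. The sketch below is my own hypothetical illustration with synthetic data (the 0.65 s value is just seeded to match the number in the post), not the author's actual estimator:

```python
import numpy as np

TRUE_LATENCY = 0.65  # seconds (illustrative, matching the value in the post)

# Synthetic straight-line flight track sampled at 10 Hz: time, x, y.
t_track = np.arange(0.0, 30.0, 0.1)
track = np.column_stack([t_track * 20.0, t_track * 1.5])  # meters

# Image trigger timestamps; the true exposure lags each trigger by the latency.
t_img = np.arange(2.0, 28.0, 2.0)
true_pos = np.column_stack([(t_img + TRUE_LATENCY) * 20.0,
                            (t_img + TRUE_LATENCY) * 1.5])

def residual(latency):
    """RMS distance between the latency-shifted track and the image positions."""
    x = np.interp(t_img + latency, t_track, track[:, 0])
    y = np.interp(t_img + latency, t_track, track[:, 1])
    return np.sqrt(np.mean((x - true_pos[:, 0])**2 + (y - true_pos[:, 1])**2))

# Grid-search candidate latencies and keep the best one.
candidates = np.arange(0.0, 1.5, 0.01)
best = candidates[np.argmin([residual(c) for c in candidates])]
```

In practice the residual would come from the keypoint misalignment itself, and the mount roll/pitch/yaw offsets could be estimated the same way with a search over three angles instead of one time offset.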
You can see the process is working to some degree. I think where it currently falls down the most is in keypoint generation and matching. This really would work best if I could get a good distribution of keypoints across each image (which I can do), but then my keypoint matcher seems to reject most of the less dominant keypoints out away from where the keypoints would naturally cluster.
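A common trick for getting an even spatial spread of keypoints is to partition the image into a grid and keep only the strongest few detections per cell. This is a generic sketch of that idea, not the matcher from the post; the function name and parameters are my own:

```python
import numpy as np

def spread_keypoints(pts, responses, img_w, img_h, grid=4, per_cell=2):
    """Keep the strongest `per_cell` keypoints in each cell of a grid x grid
    partition of the image, so one cluttered region can't dominate."""
    keep = []
    # Map each keypoint to its grid cell.
    cx = np.clip((pts[:, 0] * grid / img_w).astype(int), 0, grid - 1)
    cy = np.clip((pts[:, 1] * grid / img_h).astype(int), 0, grid - 1)
    for i in range(grid):
        for j in range(grid):
            idx = np.where((cx == i) & (cy == j))[0]
            # Strongest responses first within this cell, then truncate.
            idx = idx[np.argsort(-responses[idx])][:per_cell]
            keep.extend(idx.tolist())
    return sorted(keep)

# Example: four keypoints clustered in one corner plus one elsewhere.
pts = np.array([[10.0, 10.0], [12.0, 11.0], [15.0, 14.0],
                [11.0, 13.0], [80.0, 80.0]])
responses = np.array([5.0, 4.0, 3.0, 2.0, 1.0])
kept = spread_keypoints(pts, responses, 100, 100, grid=2, per_cell=2)
```

The cluster in the corner is capped at two survivors, while the lone point in the far cell is kept even though its response is weakest. The matching stage then has to be tolerant enough not to throw those weaker, well-spread points back out.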
For those who are curious: as a quick way to visualize the image group, I have created a 3d model that I can look at quickly using osgviewer.
Andrey, I did experiment with Hugin quite a bit, but I couldn’t find a way to make it work. Do you have a workflow description you could share with me, or would you be willing to guide me through the process once? I tried to follow their mosaic tutorial, but either I’m missing something or I’m just not smart enough to figure it out.
Hi Curtis, I apologize if I misled you. The only thing I have done with Hugin is stitch simple panoramas. But as far as I can see, it’s modular and capable software. The stitcher can be used as a stand-alone utility, and the community forums may help. On the other hand, the task you are trying to solve may already have a solution. Here is an example: http://diydrones.com/profiles/blogs/aerial-mapping-stitch-by-ms-ice
I assume you did your research and have seen pages like that. I’m thinking a multirotor may be better than a plane for this, because you can create a mission with coordinates, altitude, and heading predefined and just take the picture, with no need to find all the rotation angles afterwards. I’m not an expert, just connecting bits and pieces of scattered information together.
Hi Andrey, Microsoft ICE looks interesting. I haven’t tried it yet. I’ll probably push on my own approach a bit more before I completely give up; there is still another big optimization step left for me to do that hopefully will clear up many of the obvious errors. Perhaps one advantage my approach will have over ICE is that I make use of my knowledge of camera positions to project into map space … and then I do all my fitting in map space, not image space. So I’ll live with some image discontinuities if it preserves more accurate geospatial relationships, versus stretching and fitting the images any way possible to get the best visual fit. On the other hand, if everything were perfect, both ICE and I would achieve a perfect visual fit and perfect geospatial relationships. It’s an interesting challenge!