So it’s been quite a while since I last posted an update about my OSS project.
Since my last update I have been doing a lot of reading and a small amount of programming. I am now focused specifically on video mosaics. However, the phrase “video mosaic” can mean a variety of things to a variety of people, so to eliminate confusion: the goal of the project is to create a two-dimensional picture that represents a scene by stitching sequential frames of video footage together.
My first attempt at the problem used optical flow to determine how far features had moved between frames, on average. It turns out that in practice this method is not very accurate, because it is hard to eliminate rotation from video and the algorithm can only represent translations. A more sophisticated approach was therefore required.
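To make the limitation concrete, here is a minimal NumPy sketch (my own toy example, not the project code) of the averaged-flow idea. Averaging the displacement vectors of tracked points recovers a pure translation nicely, but for a rotation about the image centre the displacements cancel out, so the model sees almost no motion at all:

```python
import numpy as np

def average_flow_translation(pts_prev, pts_next):
    """Estimate inter-frame motion as the mean displacement of
    tracked feature points -- a pure-translation motion model."""
    return (pts_next - pts_prev).mean(axis=0)

# Hypothetical features: points on a ring around the image centre.
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
pts = np.stack([100.0 * np.cos(angles), 100.0 * np.sin(angles)], axis=1)

# Case 1: the camera translates by (5, 2) pixels.
shift = average_flow_translation(pts, pts + np.array([5.0, 2.0]))
# The mean displacement recovers the motion: ~[5, 2].

# Case 2: the camera rotates 10 degrees about the centre.
theta = np.deg2rad(10.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
shift_rot = average_flow_translation(pts, pts @ rot.T)
# The individual displacements are large, but they point in opposite
# directions on opposite sides of the centre and cancel when averaged,
# so the estimated translation is ~zero: the rotation is invisible.
```

In a real pipeline the point pairs would come from a tracker such as OpenCV’s pyramidal Lucas–Kanade, but the failure mode is the same: any motion model that reports a single average displacement cannot represent rotation.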
I have moved on to a featureless approach, which aims to calculate the motion between frames by minimizing the difference in pixel intensities. While I believe this approach is superior, it draws on quite a bit of math, which makes it difficult to understand and therefore to implement.
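The core idea can be shown with a deliberately simplified sketch of my own (the real featureless methods use gradient descent over a parametric warp rather than exhaustive search): treat alignment as finding the shift that minimizes the sum of squared intensity differences between two frames.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared intensity differences between two images."""
    d = a.astype(np.float64) - b.astype(np.float64)
    return float((d * d).sum())

def register_translation(ref, moving, max_shift=5):
    """Brute-force search for the integer (dy, dx) shift of `moving`
    that minimises the SSD against `ref`. A toy stand-in for the
    gradient-based direct (featureless) methods."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            err = ssd(ref, shifted)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
# Simulate the next frame: the scene moved down 3 and right 2 pixels.
next_frame = np.roll(np.roll(frame, -3, axis=0), -2, axis=1)
print(register_translation(frame, next_frame))  # (3, 2)
```

The appeal of the intensity-based formulation is that the same cost function generalizes from a two-parameter translation to full affine or projective warps; the math gets harder precisely because the search then runs over many more parameters and needs image gradients instead of brute force.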
I have also realized that OpenCV no longer offers many benefits for this problem, as it doesn’t implement the necessary functions. So I intend to use the NASA Vision Workbench instead, as it implements many more of the required functions.