Last week an interesting item appeared on Hack-A-Day: someone had built their own “CT scanner”, and functionally it is one. It takes x-ray images from all angles around an object; the plates are then scanned into a computer, which turns the images into a 3D model. I believe that last step was done using OpenCV scripts.
The recent and ongoing Occupy Wall Street protests in New York City have generated a large amount of citizen-media footage (albeit often released without sound, to avoid falling foul of outdated wiretapping laws).
If key events observed from different positions could be used to align the timelines of the footage, it seems plausible that the same sort of scripts used in the CT scanner could assemble a crude animated 3D model of the events as they unfold.
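The timeline-alignment step could, in principle, be done automatically: a shared visual event (a flash, a sudden crowd movement) shows up as a correlated spike in the per-frame brightness of every clip that captured it. The sketch below is purely illustrative and not from the original post; it uses synthetic brightness traces and NumPy's cross-correlation to estimate the frame offset between two hypothetical cameras.

```python
import numpy as np

def align_offset(sig_a, sig_b):
    """Estimate where sig_b's timeline sits relative to sig_a's by
    cross-correlating the two mean-subtracted brightness traces."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    corr = np.correlate(a, b, mode="full")
    # The index of the correlation peak, minus (len(b) - 1), gives the lag.
    return int(np.argmax(corr)) - (len(b) - 1)

# Synthetic demo: both cameras see the same brief "flash" event,
# but camera B started recording 30 frames later than camera A,
# so the event appears 30 frames earlier in B's own frame count.
rng = np.random.default_rng(0)
event = np.zeros(300)
event[120:125] = 1.0  # the shared key event
cam_a = event + 0.05 * rng.standard_normal(300)
cam_b = np.roll(event, -30) + 0.05 * rng.standard_normal(300)

offset = align_offset(cam_a, cam_b)
print(offset)  # 30: shift camera B's clip forward 30 frames to match A
```

Real footage would of course need more robust features than raw brightness, but the principle — matching a shared event across clips to recover relative timing — is the same.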
For evidence gathering and for contextualisation of news stories, this could be a very useful tool, rendering isolated and often-repeated clips into a single, multifaceted log of events from every recorded angle.
I doubt the result would be of high quality, given all the variables and the low quality of the footage, but it could perhaps knit the events together with those that preceded them, and at least help give a more rounded view of what happened.