News By Ryan Whitwam Jul. 6, 2014 10:00 am
A pair of computer scientists at Carnegie Mellon University has created a video algorithm that knows your time is valuable. The system, developed by Eric Xing and Bin Zhao, can take a piece of video and automatically trim out all the boring parts, leaving you with only the most important sections. It’s like CliffsNotes for video.
The system is called LiveLight, and it works by creating a “learned dictionary” of a video by watching for changes. It identifies patterns and uses them to determine when something unusual or notable happens. In the video demo used at a recent computer vision conference, a toddler fiddles with an iPad for a solid two minutes. To the parent, every single second is pure gold, but not so much for someone else. Rather than showing disinterested friends the whole two-minute clip, LiveLight can cut it down to a few seconds of just the highlights.
Measuring the “regularity” of a video isn’t just useful for making videos of your kids more tolerable; it could also assist police investigating a crime. If a surveillance camera is thought to have captured evidence of a crime, someone has to watch the footage. Rather than making a person scrub through it to find the relevant portion, LiveLight can pull out only the parts where something potentially important is happening: for example, when someone walks through the frame or movement in a crowd picks up.
The researchers call the system quasi-real time because the algorithm takes a few hours to run on a conventional computer. With a more powerful system (or even a supercomputer), LiveLight can finish in a few minutes. Once the system spits out its edit, a human operator can even review the dictionary material to add or remove clips for a more accurate final cut.
Xing and Zhao have formed a company called PanOptus to market LiveLight. The computational requirements make it less than ideal for consumers right now, but imagine if it could be slimmed down to run on a smartphone. People would buy that.