It remains the dream of military imagery analysts who stare at surveillance footage all day: sensors and cameras that alert their human masters to looming threats. The Navy's next research program wants to make it an overdue reality.
It actually wants to do much more than that, according to a Monday research announcement. But at a minimum, the Navy's mad scientists want you to help them write stronger, more robust algorithms that can fold different data sets from different sensor systems into a single, unified picture that gives sailors a deeper understanding of the dangers they face.
Or, as the Navy puts it, better algorithms for developing 'key technologies that will enable rapid, accurate decision making by autonomous processes in complex, time varying highly dynamic environments that are probed with heterogeneous sensors and supported by open source data,' according to a new call for papers from the Office of Naval Research.
This is something of a white whale for the military. In 2011, the blue-sky researchers at Darpa began exploring ways to automatically pre-select camera imagery for viewing, so analysts wouldn't drown in a tsunami of data from ever-more-powerful surveillance tools. 'We're collecting data at rates well above what we had in the past,' Air Force Secretary Michael Donley lamented last year, warning that it will take 'years' for human eyes to catch up to all the services' robotic ones.
Enter the Office of Naval Research. One of its new special program announcements for 2013 identifies software algorithms as a major point of concern: It wants more robust logic tools that play nicely across hardware and software platforms, pre-assembling a mosaic of threats. Don't bother writing them better search tools for sifting through their data archives: The Navy expressly rules that out. It wants the imaging equivalent of pre-cut vegetables in a salad bag.
One subset of that research is called Sensor Management and Allocation. Its goal: to 'optimally task and re-task large sensors networks [sic] based on current picture and sensor availability to understand the battle space and maintain dynamic persistent surveillance.' A related effort, called Automated Image Understanding, gets more explicit. It's about 'detection and tracking of objects on water or in urban areas and inferring the threat level they may pose,' sharply enough that the algorithm should be able to pick out 'partially occluded objects in urban clutter.' All this has to happen in real time.
Notice that the Navy isn't talking about developing new hardware that can automatically spot the dangerous, partially concealed things in water or in urban areas. It's got that stuff already, and on deck, particularly when it comes to spotting what lurks underwater. The new algorithms are about making all of that gear much, much smarter and more deeply integrated, or at least they would be, if defense hardware manufacturers' software weren't proprietary.
Lurking behind all this is a wicked problem: figuring out how to represent distant objects caught within a field of vision as threatening; calculating the degree of threat; and weighting those threats when integrating them with either different images or images of the same field at an earlier time. Narrow your field too finely and you'll miss threats; widen it too much and you'll be awash in information.
The Navy's advice is to embrace uncertainty. 'If the process is to be automated and timely relative to a mission,' the Office of Naval Research states, 'then algorithms must be implemented that can sense, interpret, reason and successfully act in an open world with uncertain, incomplete, imprecise, and contradictory data.' That's something human analysts know very well, and for which they're always trying to compensate.
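The call for papers stops there; it doesn't prescribe any particular math for weighing conflicting sensor reports against one another. Purely as an illustration of the kind of reasoning ONR is asking for, here is a minimal Python sketch that fuses hypothetical threat estimates from a few sensors using simple independent log-odds evidence combination, discounting each report by an assumed reliability score. The function, the reliability weights, and the sensor labels are all invented for the example, not drawn from the announcement.

    import math

    def fuse_threat_estimates(reports, prior=0.05):
        """Fuse independent sensor reports about one object into a threat probability.

        Each report is a (threat_probability, sensor_reliability) pair, both in [0, 1].
        Evidence is combined in log-odds space; a low-reliability report is first
        pulled toward the prior so a flaky sensor can't dominate the fused picture.
        """
        def logit(p):
            p = min(max(p, 1e-6), 1 - 1e-6)  # clamp to avoid infinite log-odds
            return math.log(p / (1 - p))

        fused = logit(prior)
        for p, reliability in reports:
            discounted = reliability * p + (1 - reliability) * prior
            fused += logit(discounted) - logit(prior)
        return 1 / (1 + math.exp(-fused))

    # Hypothetical reports: radar is confident it sees a threat, the camera sees
    # almost nothing, sonar is unsure. Reliabilities are made-up weights.
    reports = [(0.90, 0.8), (0.02, 0.6), (0.55, 0.4)]
    print(f"fused threat probability: {fuse_threat_estimates(reports):.2f}")

Even a toy like this surfaces the tension the Navy describes: give the unreliable sensors too much weight and contradictions swamp the estimate; discount them too aggressively and the fused picture misses what only they can see.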