News & Events
Adaptive FPGA-based video processing platform developed for DSTL
12th April 2017
The UK MOD’s Defence Science and Technology Laboratory (DSTL) has contracted a team led by Plextek Services Limited, and including RFEL Limited and 4Sight Imaging Limited, to tackle the problem of rapidly evaluating real-time image processing functions, and to simultaneously demonstrate the latest adaptive capabilities that modern FPGA-based system-on-chip architectures can deliver to defence and security surveillance applications, all within a minimised size, weight and power footprint.
DSTL can use this platform to solve complex defence vision and surveillance problems, facilitating the rapid incorporation of best-in-class video processing algorithms while simultaneously bridging the gap between research prototypes and deployable equipment.
Peter Doig, Plextek’s Defence Business Manager, said, “RFEL and Plextek have brought together their expertise in sensor exploitation and real-time embedded video processing, along with DSTL and 4Sight’s adaptive algorithms, to create a single environment that delivers a robust proving tool. This allows a diverse range of enhancements to be evaluated experimentally, using an optimised yet intelligent and flexible architecture that can be re-used in deployable field equipment.”
As video processing solutions become increasingly complex and sophisticated, a new problem has emerged: the need to optimally configure FPGA-based component functions and algorithms in real time, under rapidly varying conditions. To solve it, the team will deliver a system that incorporates a software processing layer, previously developed for DSTL by 4Sight Imaging, which performs the adaptation of the control variables to optimise the real-time video enhancement, removing the need for a man-in-the-loop.
Using video metrics benchmarked against extensive human trials, the CPU-based configuration management layer can out-perform a human operator. Furthermore, all of the processing is performed at source, in real time, reducing off-board bandwidth and potentially alleviating downstream processing requirements. The whole system is implemented on a low size, weight and power (SWaP) video enhancement platform and delivers a capability that never tires or misses the action, irrespective of the time of day or the prevailing weather.
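The closed-loop idea behind that configuration layer can be illustrated with a minimal sketch. The contrast metric, the single gain control and the step size below are illustrative assumptions, not the actual DSTL/4Sight algorithms: the controller scores each frame with a quality metric and hill-climbs the control variable toward a better score, with no operator input.

```python
# Sketch of a metric-driven auto-tuning loop (hypothetical metric and
# parameter; not the programme's proprietary algorithms).

def contrast_metric(frame):
    """Score a frame: here, simply the variance of pixel intensities."""
    n = len(frame)
    mean = sum(frame) / n
    return sum((p - mean) ** 2 for p in frame) / n

def apply_gain(frame, gain):
    """Stand-in for the FPGA enhancement stage: scale and clip to 8 bits."""
    return [min(255, max(0, int(p * gain))) for p in frame]

def autotune(frame, gain, step=0.1):
    """Hill-climb one control variable toward a higher metric score."""
    best_gain = gain
    best_score = contrast_metric(apply_gain(frame, gain))
    for candidate in (gain - step, gain + step):
        score = contrast_metric(apply_gain(frame, candidate))
        if score > best_score:
            best_gain, best_score = candidate, score
    return best_gain

frame = [20, 40, 60, 80, 100, 120]   # toy "image" of pixel intensities
gain = 1.0
for _ in range(10):                   # one adjustment per video frame
    gain = autotune(frame, gain)
```

In the deployed system this role falls to the CPU-side management layer, which writes updated control variables into the FPGA enhancement functions frame by frame.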
This innovative work draws together the best aspects of two approaches to video processing: high-performance, bespoke FPGA processing for the computationally intensive tasks, and the flexibility (but lower performance) of CPU-based processing. This heterogeneous, hybrid approach is made possible by contemporary system-on-chip (SoC) devices, such as Xilinx’s Zynq family, which provide embedded ARM CPUs with closely coupled FPGA fabric. The use of a modular FPGA design, with generic interfaces for each module, enables FPGA functions, which are traditionally inflexible, to be dynamically re-configured under software control. Critically, to support remotely deployable, real-world applications, the system will also manage its own power budget, adapting the processing solution to maximise the time spent delivering operational benefit.
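A toy model may help picture the modular, software-controlled arrangement. The module names, power figures and greedy budgeting policy below are illustrative assumptions, not the actual design: each processing function exposes the same generic interface, and a CPU-side manager enables modules in priority order while staying inside a power budget.

```python
# Sketch of software-controlled reconfiguration of modular FPGA functions
# (hypothetical modules, power figures and policy; not the real design).

class Module:
    """Generic interface that every processing module presents to the CPU."""
    def __init__(self, name, power_mw):
        self.name = name
        self.power_mw = power_mw
        self.enabled = False

class Pipeline:
    """CPU-side manager: enables modules in priority order within a budget."""
    def __init__(self, budget_mw):
        self.budget_mw = budget_mw
        self.modules = []

    def add(self, module):
        self.modules.append(module)   # list order encodes priority

    def reconfigure(self):
        """Greedily enable the highest-priority modules that fit the budget."""
        used = 0
        for m in self.modules:
            m.enabled = used + m.power_mw <= self.budget_mw
            if m.enabled:
                used += m.power_mw
        return used

pipe = Pipeline(budget_mw=900)
pipe.add(Module("stabilisation", 400))
pipe.add(Module("distortion_correction", 350))
pipe.add(Module("contrast_enhancement", 300))
used = pipe.reconfigure()
# stabilisation and distortion correction fit (750 mW); contrast enhancement
# (300 mW) would exceed the 900 mW budget and stays disabled
```

Because every module presents the same generic interface, the manager can enable, disable or re-prioritise functions at run time without re-synthesising the FPGA design, which is the flexibility the modular approach is intended to deliver.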
Dr Alex Kuhrt, RFEL’s CEO, said, “Image processing techniques can have a huge number of degrees of freedom, with a wide range of possible parameter settings. Usually, this calls for a high level of user expertise in order to set up a vision system. For security or defence applications, where conditions can change rapidly, a system may also require frequent tuning of those video controls to re-optimise the image. Commercial camera systems already recognise this and deliver a point-and-shoot experience, but military users have more specialised needs for their imagery, needs that are not well served by commercial or broadcast video enhancement products. Those users are typically operating under high pressure: they need a system that performs the right optimisation, automatically, to deliver the right mission capability, and crucially, never gets tired of doing so.”
Kuhrt adds, “The aim of providing fully automated, self-optimising, real-time video processing has just moved one step closer. This approach can continuously adjust to the scene dynamics far more accurately and rapidly than a human could ever achieve. Add to this the ability to balance optimal image quality against the available bandwidth and power, and this technology can be exploited to provide a truly game-changing approach to high-performance video situational awareness.”
RFEL later plan to introduce this technology as an enhancement to their powerful HALO video platform. Coupled with their tried-and-tested video processing algorithms, such as Stabilisation, Distortion Correction and Non-Linear Contrast Enhancement, this will provide a rapid product development route for commercial designers addressing the most demanding vision applications in the security, military and aerospace domains.