Reverse-engineering insect brains to make robots


UK start-up Opteran, a University of Sheffield spin-out, has a completely different take on neuromorphic engineering compared to most of the industry. The company has reverse-engineered insect brains to derive new collision avoidance and navigation algorithms that can be used in robotics.

Opteran calls its new approach to AI “natural intelligence”, taking direct inspiration from biology for the algorithm part of the system. This approach is distinct from existing computer vision approaches, which primarily use AI/deep learning or photogrammetry, a technique that uses 2D photographs to infer information about 3D objects, such as dimensions.

Opteran’s natural intelligence requires no training data and no training, working more the way a biological brain does. Deep learning today is capable only of narrow AI: it can perform carefully defined tasks in a limited environment, such as a computer game, but it requires huge amounts of training data, computation and energy. Opteran wants to circumvent these limitations by closely mimicking what brains actually do, in order to build autonomous robots that can interact with the real world on a tight computational and energy budget.

“Our goal is to reverse engineer nature’s algorithms to create a software brain that allows machines to perceive, behave and adapt more like natural creatures,” said Professor James Marshall, chief scientific officer at Opteran, in a recent presentation at the Embedded Vision Summit.

“Mimicking the brain to develop AI is an old idea, dating back to Alan Turing,” he said. “Deep learning, on the other hand, is based on a caricature of a tiny part of the visual cortex of the primate brain, one that ignores the great complexity of a real brain… Modern neuroscience techniques are increasingly being applied to give us the information we need to faithfully reverse engineer how real brains solve the problem of autonomy.”

Reverse engineering brains requires studying animal behavior, neuroscience and anatomy together. Opteran worked with bee brains because they are simple enough to study yet capable of orchestrating complex behavior. Bees can navigate distances of up to 7 miles and accurately communicate their mental maps to other bees. They do all this with fewer than a million neurons, in an energy-efficient brain the size of a pinhead.

Opteran successfully reverse-engineered the algorithm used by bees for estimating optical flow (the apparent motion of objects in a scene caused by the relative motion of the observer). This algorithm can perform optical flow processing at 10 kHz for less than one watt, running on a small FPGA.
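
As a point of reference for what optical flow processing involves, the sketch below estimates a dense flow field with OpenCV’s standard Farneback method on a pair of synthetic frames. This is a generic, off-the-shelf baseline shown for illustration only, not Opteran’s bee-derived algorithm; the frame contents and parameters are arbitrary.

```python
import cv2
import numpy as np

# Two consecutive grayscale frames: a smoothly textured image shifted
# 2 pixels to the right between frames (a synthetic stand-in for the
# apparent motion seen by a moving camera).
rng = np.random.default_rng(0)
base = cv2.GaussianBlur(
    (rng.random((128, 132)) * 255).astype(np.uint8), (9, 9), 3)
prev_frame = base[:, 2:130]
curr_frame = base[:, 0:128]

# Dense optical flow with OpenCV's Farneback method (pyr_scale, levels,
# winsize, iterations, poly_n, poly_sigma, flags passed positionally).
flow = cv2.calcOpticalFlowFarneback(
    prev_frame, curr_frame, None, 0.5, 3, 15, 3, 5, 1.2, 0)

# flow[y, x] holds the apparent (dx, dy) motion at each pixel;
# the average here should come out at roughly (+2, 0) pixels per frame.
print(flow.reshape(-1, 2).mean(axis=0))
```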

“This performance exceeds state-of-the-art deep learning by orders of magnitude in all dimensions, including robustness, power, and speed,” Marshall said.


Biological algorithms

Biological motion sensing was mathematically modeled in the 1960s based on experiments with insect brains. The model, called the Hassenstein-Reichardt detector, has been verified many times using different experimental methods. In this model, the brain receives signals from two neighboring receptors in the eye, and the input from one receptor is delayed. If the delayed and undelayed signals arrive at a downstream neuron at the same time, the neuron fires, because that coincidence means the object being viewed is moving across the two receptors. Mirroring the arrangement, with the delay applied to the other receptor, makes the detector respond to motion in either direction (hence the symmetry in the model).

(Left) The Hassenstein-Reichardt detector, a model of motion detection in biological brains. (Right) Opteran’s proprietary algorithm derived from bee brains. (Source: Opteran)
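
To make the correlation scheme concrete, here is a minimal numerical sketch of an elementary motion detector in the Hassenstein-Reichardt style. The signal values, the two-sample delay and the function name are invented for the example; this is not Opteran’s implementation or the original model’s published parameters.

```python
import numpy as np

def hassenstein_reichardt(left, right, delay=2):
    """Elementary motion detector on two 1-D photoreceptor signals.

    Each receptor's signal is delayed and correlated (multiplied) with its
    neighbour's undelayed signal; subtracting the two mirror-image
    correlations gives a direction-selective response."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    left_delayed = np.concatenate([np.zeros(delay), left[:-delay]])
    right_delayed = np.concatenate([np.zeros(delay), right[:-delay]])
    rightward = left_delayed * right   # coincidence for left-to-right motion
    leftward = right_delayed * left    # coincidence for right-to-left motion
    return rightward - leftward        # sign encodes the direction of motion

# A bright edge that reaches the left receptor first and the right receptor
# two time steps later produces a positive (rightward) response.
left_signal = [0, 1, 1, 0, 0, 0, 0, 0]
right_signal = [0, 0, 0, 1, 1, 0, 0, 0]
print(hassenstein_reichardt(left_signal, right_signal, delay=2))
```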

Marshall explained in his presentation that the Hassenstein-Reichardt detector, while sufficient to model motion detection in fruit flies, is very sensitive to spatial frequency (the distribution pattern of dark and light in an image) and contrast, and therefore not very suitable for generalized visual navigation.

“Bees are doing something smarter, which is a new arrangement of these elemental units,” Marshall said. “Bee flight behavior shows great robustness to spatial frequency and contrast, so there must be something else going on.”

Opteran used behavioral and neuroscientific data from bees to develop its own visual inertial odometry estimator and collision avoidance algorithm (shown on the right in the diagram above). The algorithm was benchmarked against FlowNet2s (a state-of-the-art deep learning algorithm at the time) and found to be superior in theoretical accuracy and robustness to noise. Marshall pointed out that implementing deep learning would also require GPU acceleration, with the associated power penalty.

Real world robotics

It’s a nice theory, but does it work in the real world? Opteran has indeed applied its algorithms to real-world robotics. The company has developed a robot dog demo, Hopper, in a form factor similar to Boston Dynamics’ Spot. Hopper uses an edge-only vision solution based on Opteran’s collision prediction and avoidance algorithm; when a potential collision is identified, a simple controller makes it turn away.
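
Opteran has not published details of Hopper’s controller, but the behavior described here (spot a looming obstacle in the flow field, then steer away) can be sketched generically with a time-to-contact estimate computed from optical flow. The function names, the image-centre approximation of the focus of expansion, and the 30-frame threshold below are illustrative assumptions, not Opteran’s code.

```python
import numpy as np

def time_to_contact(flow, cx=None, cy=None):
    """Rough time-to-contact (in frames) from a dense optical-flow field,
    assuming forward motion with the focus of expansion near the image
    centre. `flow` is an H x W x 2 array of per-pixel (dx, dy) vectors."""
    h, w, _ = flow.shape
    cx = (w - 1) / 2.0 if cx is None else cx
    cy = (h - 1) / 2.0 if cy is None else cy
    ys, xs = np.mgrid[0:h, 0:w]
    rx, ry = xs - cx, ys - cy
    r = np.sqrt(rx * rx + ry * ry) + 1e-6
    # For pure forward motion, radial flow magnitude grows as r / TTC,
    # so (radial flow / r) estimates 1 / TTC at every pixel.
    radial = (flow[..., 0] * rx + flow[..., 1] * ry) / r
    inv_ttc = np.median(radial / r)
    return np.inf if inv_ttc <= 0 else 1.0 / inv_ttc

def avoidance_command(flow, ttc_threshold=30.0):
    """Return a steering command: turn away when a collision looks imminent."""
    if time_to_contact(flow) < ttc_threshold:
        # Turn toward the image half with less apparent motion (more open space).
        half = flow.shape[1] // 2
        left = np.abs(flow[:, :half]).mean()
        right = np.abs(flow[:, half:]).mean()
        return "turn_left" if left < right else "turn_right"
    return "go_straight"

# Synthetic expanding flow field (approaching a wall head-on): flow points
# outward from the centre and grows with radius, giving a finite TTC.
h, w = 120, 160
ys, xs = np.mgrid[0:h, 0:w]
flow = np.dstack([(xs - w / 2) / 25.0, (ys - h / 2) / 25.0])
print(time_to_contact(flow))      # roughly 25 frames
print(avoidance_command(flow))    # "turn_left" or "turn_right"
```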

Opteran is also working on a 3D navigation algorithm, again based on bees. This solution will be equivalent to today’s SLAM (simultaneous localization and mapping) algorithms, but it will also handle path planning, routing and semantics. Marshall said it would consume only a fraction of a watt running on the same hardware.

“Another significant saving is in the size of the map generated by this approach,” he said. “While classic photogrammetry-based SLAM generates map sizes on the order of hundreds of megabytes to gigabytes per square meter, which poses significant challenges for mapping large areas, we have maps that consume only kilobytes of memory.”

A demonstration of this algorithm powering a small drone in flight uses a single low-resolution camera (fewer than 10,000 pixels) to perform autonomous vision-based navigation.

Hardware and software

Opteran’s development kit uses a small Xilinx Zynqberry FPGA module that weighs less than 30g and consumes less than 3W. It requires two cameras. The development kit uses inexpensive ($20) Raspberry Pi cameras, but Opteran will work with OEMs to calibrate algorithms for other camera types during product development.

The current FPGA can simultaneously run Opteran’s omnidirectional optical flow processing and collision prediction algorithms. Future hardware could migrate to larger FPGAs or GPUs as needed, Marshall said.

The company is building a software stack for robotic applications. In addition to an electronically stabilized surround-view system, there is collision avoidance and then navigation. Work is underway on a decision engine that will let a robot decide where it should go and under what circumstances (due in 2023). Future elements include social, causal, and abstract engines, which will allow robots to interact with each other, infer causal structures in real-world environments, and abstract general principles from their experience. All of these engines will be based on biological systems, with no deep learning or rule-based systems.

Opteran completed a $12 million funding round last month, which will fund the commercialization of its natural intelligence approach and the development of the remaining algorithms in its stack. So far, customer pilots have used stabilized vision, collision avoidance, and navigation capabilities in cobot arms, drones, and mining robots.

Future research directions could also include studying other animals with more complex brains, Marshall said.

“We started with insects, but the approach is changing,” he said. “We will look at vertebrates in due course; it is absolutely on our roadmap.”