Loughborough University Develops Energy-Efficient AI Chip Using Material-Based Computing for Advanced Data Processing


(IN BRIEF) Researchers at Loughborough University have developed a brain-inspired computer chip that could significantly reduce the energy consumption of certain AI tasks, potentially by up to 2,000 times. The device uses nanoporous oxide memristors to process time-dependent data directly in hardware, eliminating the need for energy-intensive software-based computation. Tested on tasks such as predicting chaotic systems, recognizing images, and performing logical operations, the chip demonstrates versatility and efficiency. The innovation addresses growing concerns about the energy demands of AI systems and offers a potential pathway toward more sustainable and scalable computing solutions. While still in early stages, further research aims to expand its capabilities and bring it closer to practical applications.

(PRESS RELEASE) LOUGHBOROUGH, 2-Apr-2026 — /EuropaWire/ — Loughborough University researchers have developed a novel computer chip that leverages the physical properties of materials to process information, offering the potential to dramatically improve the energy efficiency of certain artificial intelligence applications.

The device, created by a team led by Dr Pavel Borisov, is designed to handle time-dependent data directly within hardware, rather than relying on traditional software-based computation. This approach allows AI systems to perform specific tasks using significantly less energy, with findings indicating efficiency improvements of up to 2,000 times in certain scenarios.

The Loughborough University chip during testing.

The research, supported by the Engineering and Physical Sciences Research Council, demonstrates a shift in how AI systems can be constructed by embedding computational processes within the physical structure of materials themselves. According to the team, this method reduces the need for energy-intensive processing typically associated with conventional computing systems.

At the core of the innovation is a memristor-based device made from nanoporous oxide. These materials contain randomly distributed nanopores that create multiple electrical pathways, effectively mimicking the complex connectivity found in neural networks. This structure enables the device to act as a form of reservoir computing system, where incoming data is transformed into patterns that can be analyzed and predicted more efficiently.
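The reservoir computing principle described above can be illustrated in software. The sketch below is a minimal echo state network in Python, not the team's memristor hardware: a fixed, randomly connected "reservoir" transforms an input signal into a rich state pattern, and only a simple linear readout is trained. All sizes and parameters here are illustrative assumptions.

```python
import numpy as np

# Minimal echo-state-network sketch of reservoir computing (a software
# analogue of a hardware reservoir; sizes and scaling are illustrative).
rng = np.random.default_rng(0)
N = 100                                    # reservoir size
W_in = rng.uniform(-0.5, 0.5, N)           # fixed random input weights
W = rng.uniform(-1.0, 1.0, (N, N))         # fixed random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep dynamics stable

def run_reservoir(u):
    """Drive the fixed reservoir with input sequence u; collect its states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)    # nonlinear state update
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave one step ahead.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)
X = run_reservoir(u[:-1])                  # reservoir states
y = u[1:]                                  # one-step-ahead targets
# Only the linear readout is trained (ridge regression).
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
pred = X @ W_out
err = float(np.sqrt(np.mean((pred[200:] - y[200:]) ** 2)))
print("RMS prediction error:", err)
```

The key property, shared with the memristor device, is that the heavy nonlinear transformation happens in a fixed system that is never trained; learning is reduced to fitting one cheap linear layer.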

The researchers tested the system across several tasks involving time-varying data. Using the Lorenz system, a well-known model associated with chaotic behavior and the so-called butterfly effect, the chip demonstrated the ability to predict short-term system dynamics and reconstruct missing information. Additional tests showed that the device could recognize simple pixel-based numerical images and perform basic logical operations, highlighting its versatility across different computational tasks.
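The Lorenz system mentioned above is a standard chaos benchmark; a short simulation shows why it is a demanding test. The following Python sketch (my own illustration, using simple Euler integration with the conventional parameters, not the study's code) demonstrates the butterfly effect: two trajectories starting a hundred-millionth apart diverge completely.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz equations (conventional parameters)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return np.array([x + dt * dx, y + dt * dy, z + dt * dz])

def trajectory(start, n_steps):
    """Integrate the Lorenz system from `start` for n_steps Euler steps."""
    s = np.array(start, dtype=float)
    out = [s]
    for _ in range(n_steps):
        s = lorenz_step(s)
        out.append(s)
    return np.array(out)

# Two trajectories differing by only 1e-8 in x diverge to completely
# different states: the butterfly effect that makes long-term forecasting
# impossible and short-term prediction a meaningful benchmark.
a = trajectory([1.0, 1.0, 1.0], 3000)
b = trajectory([1.0 + 1e-8, 1.0, 1.0], 3000)
print("initial gap in x:", abs(a[0, 0] - b[0, 0]))
print("largest gap in x over final steps:",
      np.max(np.abs(a[-500:, 0] - b[-500:, 0])))
```

Predicting even a short stretch of such a trajectory, or reconstructing a missing coordinate from the others, is therefore a credible test of a reservoir's computational power.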

The findings suggest that integrating computation directly into hardware could help address the growing energy demands of modern AI systems. As machine learning models become more complex, their power consumption continues to rise, creating challenges for sustainability and scalability. By shifting part of the computational workload into material-based processes, AI systems may achieve similar outcomes with substantially lower energy requirements.

Dr Borisov explained that the work was inspired by the human brain’s highly interconnected and seemingly random neural structures. By replicating this complexity through nanoscale engineering of niobium oxide films, the team was able to create a system capable of processing data efficiently while maintaining predictive capabilities.

Further development is needed to advance the technology beyond its current experimental stage. Future work will focus on scaling the system, increasing the complexity of the networks, and testing performance using more realistic and noisy datasets. The researchers believe this approach could lead to compact, industry-ready AI devices with improved energy efficiency and the ability to operate independently of large-scale computing infrastructure.

Professor Sergey Saveliev, a co-author of the study, noted that the research highlights how fundamental physics can be applied to modern computational challenges, using the inherent complexity of physical systems to reduce computational overhead.

The study, titled "Scalable Platform Enabling Reservoir Computing With Nanoporous Oxide Memristors for Image Recognition and Time Series Prediction", has been published in Advanced Intelligent Systems.

Media Contact:

Tel: +44 (0)1509 222224
email: publicrelations@lboro.ac.uk

SOURCE: Loughborough University
