While the tech world remains obsessed with the latest Nvidia GPU-powered large language models (LLMs), a quieter revolution is brewing in AI hardware. As the limitations and power requirements of traditional deep learning architectures become increasingly apparent, a new paradigm called neuromorphic computing is emerging that promises to dramatically reduce the compute and power requirements of AI.
Mimicking Nature’s Masterpiece: How Neuromorphic Chips Work
But what exactly are neuromorphic systems? To find out, VentureBeat spoke with Sumeet Kumar, CEO and founder of Innatera, a leading neuromorphic chip startup.
“Neuromorphic processors are designed to mimic the way the biological brain processes information,” Kumar explains. “Rather than performing sequential operations on data stored in memory, neuromorphic chips use artificial neural networks that communicate through spikes, much like real neurons.”
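To make the spiking idea concrete, here is a toy leaky integrate-and-fire (LIF) neuron in plain Python. This is only an illustrative sketch of spike-based computation in general, not Innatera's hardware design: the neuron integrates input into a membrane potential that leaks over time, and emits a discrete spike only when the potential crosses a threshold.

```python
# Toy leaky integrate-and-fire (LIF) neuron: illustrative only,
# not a model of any specific neuromorphic chip.
def simulate_lif(input_current, threshold=1.0, leak=0.9, v_reset=0.0):
    """Return the time steps at which the neuron spikes."""
    v = 0.0
    spikes = []
    for t, i in enumerate(input_current):
        v = leak * v + i      # membrane potential leaks, then integrates input
        if v >= threshold:    # crossing the threshold emits a spike
            spikes.append(t)
            v = v_reset       # potential resets after firing
    return spikes

spikes = simulate_lif([0.3] * 20)
print(spikes)  # → [3, 7, 11, 15, 19]
```

The key property this illustrates is sparsity: between spikes the neuron does essentially nothing, which is why spike-based hardware can sit at near-zero power until an input event actually matters.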
This brain-inspired architecture gives neuromorphic systems distinct advantages, particularly for edge computing applications in consumer devices and industrial IoT. Kumar highlighted several compelling use cases, including always-on audio processing for voice activation, real-time sensor fusion for robotics and autonomous systems, and ultra-low-power computer vision.
“The bottom line is that neuromorphic processors can perform complex AI tasks using a fraction of the power of traditional solutions,” Kumar said. “This enables features like continuous environmental monitoring to be implemented in battery-powered devices, which simply wasn’t possible before.”
From the doorbell to the data center: concrete applications emerge
Innatera’s flagship product, the Spiking Neural Processor T1, unveiled in January 2024, exemplifies these benefits. The T1 combines an event-driven compute engine with a conventional CNN accelerator and a RISC-V processor, creating a complete platform for ultra-low-power AI in battery-powered devices.
“Our neuromorphic solutions can perform computations with 500 times less energy than conventional approaches,” Kumar said. “And we see pattern recognition speeds that are about 100 times faster than our competitors.”
Kumar illustrated this point with a compelling real-world application. Innatera partnered with Socionext, a Japanese sensor provider, to develop an innovative human presence detection solution. The technology, which Kumar demonstrated at CES in January, combines a radar sensor with Innatera’s neuromorphic chip to create highly efficient, privacy-preserving devices.
“Take video doorbells for example,” Kumar says. “Traditional doorbells use power-hungry image sensors that need to be recharged frequently. Our solution uses a radar sensor, which is much more energy-efficient.” The system can detect human presence even when a person is still, as long as they have a heartbeat. Since it doesn’t use imaging, it maintains privacy until a camera needs to be activated.
The technology has wide-ranging applications beyond doorbells, including home automation, building security, and even occupancy detection in vehicles. “This is a perfect example of how neuromorphic computing can transform everyday devices,” Kumar said. “We’re bringing AI capabilities to the edge while reducing power consumption and improving privacy.”
Doing more with less in AI computing
These dramatic improvements in energy efficiency and speed are generating significant interest from the industry. Kumar revealed that Innatera has many customer engagements, and that neuromorphic technologies are gaining traction. The company is targeting the market for edge sensor applications, with an ambitious goal of bringing intelligence to a billion devices by 2030.
To meet this growing demand, Innatera is ramping up production. The Spiking Neural Processor T1 is expected to enter production later in 2024, with volume shipments starting in the second quarter of 2025. This timeline reflects the rapid progress the company has made since it was spun off from Delft University of Technology in 2018. In just six years, Innatera has grown to approximately 75 employees and recently appointed former Apple vice president Duco Pasmooij to its advisory board.
The company recently closed an oversubscribed $21 million Series A funding round to accelerate the development of its spiking neural processors, with backing from Innavest, InvestNL, the EIC Fund, and MIG Capital. This strong investor support underscores the growing excitement around neuromorphic computing.
Kumar envisions a future in which neuromorphic chips increasingly handle AI workloads at the edge, while larger foundational models remain in the cloud. “There’s a natural complementarity,” he said. “Neuromorphic chips excel at processing real-world sensor data quickly and efficiently, while large language models are better suited for reasoning and knowledge-intensive tasks.”
“It’s not just about raw computing power,” Kumar observed. “The brain is accomplishing remarkable intellectual feats with a fraction of the energy our current AI systems require. That’s the promise of neuromorphic computing – AI that’s not only more capable but also dramatically more efficient.”
Seamless integration with existing tools
Kumar pointed to a key factor that could accelerate adoption of their neuromorphic technology: developer-friendly tools. “We’ve created a very comprehensive software development kit that makes it easy for application developers to target our silicon,” Kumar explained.
Innatera’s SDK uses PyTorch as its interface. “You basically develop your neural networks entirely in a standard PyTorch environment,” Kumar noted. “So if you know how to build neural networks in PyTorch, you can already use the SDK to target our chips.”
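Innatera has not published the details of its SDK, so the snippet below is only ordinary PyTorch: a hypothetical always-on audio classifier of the kind Kumar describes, built entirely with standard `torch.nn` modules. The point is that a developer defines the network in a vanilla PyTorch workflow, and a vendor toolchain would then map it onto the chip.

```python
import torch
import torch.nn as nn

# Hypothetical always-on keyword/wake-word classifier, defined with
# nothing but standard PyTorch building blocks. The model names and
# sizes here are illustrative assumptions, not Innatera specifics.
class WakeWordNet(nn.Module):
    def __init__(self, n_features=40, n_hidden=32, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, n_hidden),  # e.g. 40 mel-filterbank features in
            nn.ReLU(),
            nn.Linear(n_hidden, n_classes),   # "wake word" vs "background" logits
        )

    def forward(self, x):
        return self.net(x)

model = WakeWordNet()
logits = model(torch.randn(1, 40))  # one frame of audio features
print(logits.shape)  # → torch.Size([1, 2])
```

Because the model is plain PyTorch, existing training loops, datasets, and debugging tools carry over unchanged; only the final deployment step targets the neuromorphic silicon.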
This approach significantly lowers the barrier to entry for developers already familiar with popular machine learning frameworks, allowing them to leverage their existing skills and workflows while harnessing the power and efficiency of neuromorphic computing.
“This is a simple, turnkey, standard and very fast way to build and deploy applications on our chips,” Kumar added, highlighting the potential for rapid adoption and integration of Innatera’s technology into a wide range of AI applications.
The Silicon Valley Stealth Game
As LLMs grab headlines, industry leaders are quietly acknowledging the need for radically new chip architectures. OpenAI CEO Sam Altman, who has been vocal about the imminent arrival of artificial general intelligence (AGI) and the need for massive investment in chip manufacturing, has personally invested in Rain, another neuromorphic chip startup.
The move is telling. Despite Altman’s public statements about advancing current AI technologies, his investment suggests a recognition that the path to more advanced AI may require a fundamental shift in computing architecture. Neuromorphic computing could be one of the keys to closing the efficiency gap facing current architectures.
Bridging the gap between artificial intelligence and biological intelligence
As artificial intelligence continues to permeate every aspect of our lives, the need for more efficient hardware solutions will only grow. Neuromorphic computing represents one of the most exciting frontiers in chip design, with the potential to enable a new generation of smart devices that are both more capable and far more energy-efficient.
While large language models grab the headlines, the real future of AI may lie in chips that think more like our own brains. As Kumar put it: “We’re just scratching the surface of what’s possible with neuromorphic systems. The next few years are going to be very exciting.”
As these brain-inspired chips make their way into consumer devices and industrial systems, we may be on the cusp of a new era of artificial intelligence—one that is faster, more efficient, and more closely tied to the remarkable capabilities of the biological brain.