After years of dominance by the form of AI known as the Transformer, the search for new architectures has begun.
Transformers are the foundation of OpenAI’s video generation model Sora, and are at the core of text generation models such as Anthropic’s Claude, Google’s Gemini, and GPT-4o. But Transformers are starting to run into technical hurdles, particularly those related to computation.
Transformers, at least when running on commodity hardware, are not particularly efficient at processing and analyzing vast amounts of data. That is why, as companies build and expand infrastructure to meet Transformers’ requirements, electricity demand is growing exponentially, perhaps unsustainably.
One promising architecture proposed this month is Test-Time Training (TTT), which was developed over the course of a year and a half by researchers at Stanford University, UC San Diego, UC Berkeley, and Meta. The team argues that not only can the TTT model process much more data than the Transformer, it can do so without consuming as much computing power.
The Transformer’s hidden state
The basic component of a Transformer is the “hidden state”, which is essentially a long list of data. When a Transformer processes something, it “remembers” what it processed by adding an entry to the hidden state. For example, if the model is processing a book, the entries in the hidden state might be representations of words (or parts of words).
“If you think of the Transformer as an intelligent entity, the lookup table, that hidden state, is the Transformer’s brain,” Yu Sun, a postdoctoral researcher at Stanford University and collaborator on the TTT study, told TechCrunch. “This specialized brain is what enables the Transformer’s well-known features, such as in-context learning.”
The hidden state is part of what makes Transformers so powerful, but it also holds them back: for a Transformer to “say” even one word about a book it has just read, the model must scan its entire lookup table, a task as computationally demanding as rereading the whole book.
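To make that concrete, here is a rough sketch in Python (not the researchers’ code; the dimensions and the single attention step are assumptions for illustration). It shows a hidden state that grows with every token read, and a full scan of that state for each word produced:

```python
# Minimal illustration of a Transformer-style "hidden state":
# a cache that grows with every token, scanned in full for each output.
import numpy as np

d = 64        # size of each token representation (assumed)
cache = []    # the "lookup table": one entry per token seen so far

def read_token(token_vec):
    """Remember a token by appending its representation to the hidden state."""
    cache.append(token_vec)

def next_word(query_vec):
    """Producing one word means attending over every entry in the cache."""
    keys = np.stack(cache)                  # shape: (tokens_seen, d)
    scores = keys @ query_vec / np.sqrt(d)  # one score per remembered token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ keys                   # weighted summary of the whole "book"

# Reading a 100,000-token book leaves a 100,000-entry cache,
# and every generated word scans all 100,000 entries again.
for _ in range(100_000):
    read_token(np.random.randn(d))
context_vector = next_word(np.random.randn(d))
```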
So Sun and his team came up with the idea of replacing the hidden state with a machine learning model—an AI nesting doll, if you like, a model within a model.
This gets a bit technical, but the gist is that the TTT model’s internal machine learning model, unlike a Transformer’s lookup table, does not grow as it processes additional data. Instead, it encodes the data it processes into representative variables called weights. That is what makes TTT models so performant: no matter how much data a TTT model processes, the size of its internal model stays the same.
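In very rough terms, the idea looks something like the sketch below (a simplified illustration, not the paper’s actual method; the reconstruction objective and learning rate are assumptions). Each new token triggers a small training step on a fixed-size set of inner weights, rather than adding an entry to a growing cache:

```python
# Simplified illustration of a "model within a model":
# each token updates fixed-size inner weights via a small training step.
import numpy as np

d = 64
W = np.zeros((d, d))   # inner model's weights: the fixed-size "memory"
lr = 0.01              # learning rate for the test-time updates (assumed)

def process_token(x):
    """Compress the token into the weights with one self-supervised update."""
    global W
    pred = W @ x                    # inner model tries to reconstruct the token
    error = pred - x
    W -= lr * np.outer(error, x)    # gradient step on a reconstruction loss

def query(x):
    """Answering reads the fixed-size weights, not a per-token cache."""
    return W @ x

# Whether the model reads 1,000 tokens or 1,000,000, W stays 64 x 64.
for _ in range(100_000):
    process_token(np.random.randn(d))
answer = query(np.random.randn(d))
```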
Sun believes future TTT models will be able to efficiently process billions of pieces of data, from words to images, voice recordings and videos, far beyond the capabilities of current models.
“Our system can say X words about a book without the computational complexity of re-reading the book X times,” Sun said. “Large-scale video models based on Transformers such as Sora can only process 10-second videos because they only have a lookup table ‘brain’. Our ultimate goal is to develop a system that can process longer videos that resemble the visual experience in human life.”
Skepticism about the TTT model
So, will the TTT models eventually replace Transformers? Possibly, but it’s too early to say for sure.
TTT models aren’t a drop-in replacement for Transformers, though, and because the researchers developed only two small models for their study, it is difficult at this point to compare TTT with some of the larger Transformer implementations on the market.
“I think it’s a really interesting innovation, and if the data backs up the claim that it leads to efficiency gains, then that’s great news, but I couldn’t tell you whether it’s better than existing architectures,” says Mike Cook, a senior lecturer in the Department of Informatics at King’s College London, who was not involved in the TTT research. “An old professor of mine used to make this joke when I was an undergraduate: ‘How do you solve a computer science problem? You add another layer of abstraction.’ Adding a neural network within a neural network definitely reminds me of that.”
Either way, the accelerating research into Transformer alternatives points to a growing recognition that a breakthrough is needed.
This week, AI startup Mistral released Codestral Mamba, a model based on another alternative to Transformers called the state-space model (SSM). Like TTT models, SSMs are more computationally efficient than Transformers and can scale to larger amounts of data.
AI21 Labs is also exploring SSMs, as is Cartesia, which pioneered some of the first SSMs as well as Codestral Mamba’s namesakes, Mamba and Mamba-2.
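To give a sense of why SSMs scale, here is a bare-bones sketch (illustrative only; real SSMs such as Mamba use learned, input-dependent parameters and heavily optimized kernels). The state has a fixed size and is updated by a linear recurrence, so each new token costs a constant amount of work:

```python
# Bare-bones state-space model recurrence: a fixed-size state updated per token.
# Matrix values here are placeholders; real SSMs learn them from data.
import numpy as np

d_state, d_in = 16, 64
A = np.eye(d_state) * 0.95                 # state transition (assumed values)
B = np.random.randn(d_state, d_in) * 0.01  # input projection
C = np.random.randn(d_in, d_state) * 0.01  # output projection

h = np.zeros(d_state)   # fixed-size state, regardless of sequence length

def step(x):
    """Each token updates the state and emits an output in constant time."""
    global h
    h = A @ h + B @ x
    return C @ h

outputs = [step(np.random.randn(d_in)) for _ in range(10_000)]
```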
If these efforts are successful, for better or worse, generative AI could become even more accessible and pervasive than it is today.