
Rat Brain Cells Learn to Compute Like AI in Groundbreaking Study

Biological Meets Artificial: Rat Neurons Master AI Tasks

In what sounds like science fiction becoming reality, scientists from Japan's Tohoku University and Future University have successfully trained living rat brain cells to perform artificial intelligence computations. The neurons, taken from rat cortices, learned to generate complex temporal signals using a real-time machine learning framework - blurring the lines between biological and artificial intelligence.


How It Works: A Living Computer

The research team built something extraordinary: a "closed-loop reservoir computing" system that combines living neurons with high-density microelectrode arrays and microfluidic devices. In reservoir computing, the network itself stays fixed and only a simple readout is trained - and in the closed-loop version, the system's own output is fed back as its next input, so the biological network learns to generate periodic and chaotic waveforms entirely on its own.
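To make the closed-loop idea concrete, here is a minimal software sketch using a classical echo state network in place of the living culture. Everything in it - the reservoir size, weight scalings, and the sine-wave target - is an illustrative assumption, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical software analogue: a classical echo state network stands
# in for the living neuronal culture; sizes and scalings are illustrative.
N = 200                                     # reservoir units
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # keep spectral radius < 1
W_in = rng.uniform(-0.5, 0.5, N)            # input/feedback weights

def step(x, drive):
    return np.tanh(W @ x + W_in * drive)

# Target waveform: a slow sine, loosely mirroring the periodic tasks.
t = np.arange(1000)
target = np.sin(2 * np.pi * t / 40)

# Training ("teacher forcing"): drive the reservoir with the target and
# record its states; only the linear readout w_out is fitted, by ridge
# regression -- the reservoir's internal weights are never touched.
x, states = np.zeros(N), []
for u in target:
    x = step(x, u)
    states.append(x)
S = np.array(states)
w_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ target)

# Closed loop: the readout's output becomes the next input, so the
# system keeps generating the waveform with no external signal at all.
y = S[-1] @ w_out
generated = []
for _ in range(200):
    x = step(x, y)
    y = x @ w_out
    generated.append(y)
```

The design point the sketch captures is the one the article describes: training touches only a lightweight readout, and after training the loop runs autonomously on its own feedback.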

"What's fascinating," explains Professor Hidemasa Yamamoto from Tohoku University, "is that these living neuronal networks aren't just biologically significant - they're proving to be viable computational resources."

The secret lies in the team's innovative use of polydimethylsiloxane (PDMS) microfluidic films. Left unconstrained, cultured neurons tend to form overly synchronized networks that are poorly suited to learning. The researchers solved this by confining the neuronal cell bodies within 128 microscopic pores connected by tiny channels, creating two distinct network structures: grid and hierarchical patterns.

Putting Neurons to the Test

The results were impressive. During testing, the grid network configuration proved particularly adept at generating various waveforms:

  • Precise sine waves with periods ranging from 4 to 30 seconds
  • Clean triangle and square wave patterns
  • Even approximations of the Lorenz attractor, a complex three-dimensional chaotic trajectory

During learning phases, the system's predicted signals matched target signals with over 80% accuracy - demonstrating genuine learning capability from biological components.
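The article doesn't spell out how that accuracy figure is defined, but a common way to score how closely a predicted signal matches its target is one minus the root-mean-square error normalized by the target's range. A minimal sketch, with an assumed 100 Hz sampling rate and a made-up imperfect prediction:

```python
import numpy as np

def match_accuracy(predicted, target):
    # 1 - RMSE normalized by the target's range; 1.0 is a perfect match.
    rmse = np.sqrt(np.mean((predicted - target) ** 2))
    return 1.0 - rmse / (target.max() - target.min())

t = np.linspace(0.0, 30.0, 3000)              # 30 s at ~100 Hz (assumed)
target = np.sin(2 * np.pi * t / 4.0)          # 4 s period, as in the study
predicted = target + 0.1 * np.sin(2 * np.pi * t)  # hypothetical prediction

print(round(match_accuracy(predicted, target), 3))
```

Under this metric, a prediction that tracks the target's shape with modest added distortion scores well above the 80% threshold the researchers report.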

Challenges Ahead

While groundbreaking, the technology isn't without its limitations. Errors accumulate once the system runs autonomously after training stops, and a 330-millisecond delay in the feedback loop currently limits how quickly the system can track rapidly changing waveforms.
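To see why a 330 ms loop delay matters, note that a fixed delay becomes a larger slice of each cycle as the target waveform speeds up. The arithmetic below is illustrative, not taken from the study:

```python
# Phase lag introduced by a fixed 330 ms feedback delay, for targets of
# different periods (4-30 s matches the sine waves actually tested).
DELAY_S = 0.33

for period_s in (30.0, 4.0, 1.0):
    lag_deg = 360.0 * DELAY_S / period_s
    print(f"period {period_s:4.1f} s -> phase lag {lag_deg:6.1f} deg")
```

At a 30-second period the lag is a few degrees and barely noticeable, but for a hypothetical 1-second waveform it exceeds a quarter cycle - which is why the team wants specialized hardware to shrink the delay.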

The team is already looking ahead to developing specialized hardware that could reduce these delays. Such improvements could unlock exciting applications in:

  • Advanced brain-computer interfaces
  • Next-generation neural prosthetic devices
  • Novel computing architectures blending biology and technology

Key Points:

  • Living AI: Rat cortical neurons successfully trained to perform real-time computations
  • Self-learning System: Microfluidic "closed-loop reservoir computing" requires no external input
  • Technical Hurdles: Feedback delays and autonomous operation errors need addressing
  • Future Potential: Could revolutionize neural prosthetics and brain-machine interfaces
