
IBM Builds First Chips For Human-Inspired Robo-Brain, Does Not Appear To Be Terrified

Back in 2008, IBM announced its plan to create a computer that more accurately imitates the computing process of the human brain. Now, with a little help from four research universities and DARPA, it has produced prototypes of the first few chips and is on the way to building its first full-fledged robo-brain. If you're going to run for the hills, now would be a good time to start.


The project's goal is to drastically alter the way in which computers compute by changing their structure to mimic the human brain instead of functioning as calculators. You see, most computers are von Neumann machines, meaning that the memory and the processor are separated. The practical result is that once you hit the limit of how fast information can move between memory and processor over the bus, you hit the limit of computing power. In the human brain, by contrast, memory and processing are spread across a huge number of interconnected neurons that communicate more slowly, but simultaneously and in parallel. IBM's prototypes are chips that, in quantity, can emulate those nodes.
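To make the architectural contrast concrete, here is a minimal sketch (purely illustrative, not IBM's design): the "von Neumann" version shuttles every value to a single processor one at a time, while the "brain-like" version lets every node update from its local inputs at once. Both compute the same answer; only the structure differs.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256                                   # number of toy "nodes"
state = rng.random(N)                     # each node's local memory
weights = rng.random((N, N)) * 0.01       # connections between nodes

def von_neumann_step(state, weights):
    """One central processor pulls each value over the 'bus' in turn."""
    out = np.empty_like(state)
    for i in range(N):                    # serialized: one node at a time
        out[i] = np.tanh(weights[i] @ state)
    return out

def parallel_step(state, weights):
    """Every node updates simultaneously from its local inputs."""
    return np.tanh(weights @ state)       # all nodes 'fire' together

# Same result either way -- the bottleneck is structural, not mathematical.
assert np.allclose(von_neumann_step(state, weights),
                   parallel_step(state, weights))
```

The point of the sketch is that the serialized loop is rate-limited by moving data to one processor, whereas the parallel form has no such choke point.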

The chips aren't made out of brain matter or anything quite that mad-sciencey, but have been produced using fairly standard (if high-tech) computer chip production techniques. The real innovation here is intended to be the change in architecture. These first prototype chips are not the equivalent of single neurons, but rather contain 256 robo-neurons each. This is important considering the human brain has somewhere around 100 billion neurons, so that's still a lot of chips. The hope is that a whole slew of these together will make for a robo-brain that can start to emulate complex human thought processes.
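For a rough feel of what 256 robo-neurons on a chip might do, here is a toy spiking-neuron sketch, assuming a generic leaky integrate-and-fire model (a standard textbook model, not IBM's actual chip design). Each neuron keeps its own membrane potential (its memory) and decides locally whether to fire (its processing), which is the "memory and processor in the same node" idea in miniature.

```python
import numpy as np

N = 256                                    # robo-neurons per prototype chip
rng = np.random.default_rng(42)
# Sparse random connectivity: ~5% of neuron pairs are wired together.
weights = np.where(rng.random((N, N)) < 0.05, 0.5, 0.0)
potential = np.zeros(N)                    # each neuron's local state
THRESHOLD, LEAK = 1.0, 0.9

def step(potential, spikes_in):
    """One tick: leak a little, integrate incoming spikes, fire, reset."""
    potential = potential * LEAK + weights @ spikes_in.astype(float)
    fired = potential >= THRESHOLD         # each neuron decides locally
    potential[fired] = 0.0                 # neurons that fired reset
    return potential, fired

spikes = rng.random(N) < 0.1               # random initial activity
for _ in range(10):
    potential, spikes = step(potential, spikes)
print(int(spikes.sum()), "of", N, "neurons fired on the last tick")
```

There is no central processor in the loop body: every neuron's update depends only on its own stored potential and the spikes arriving on its wires, which is exactly the property that makes this style of computing scale by adding more chips.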

While things look promising (depending on how you feel about robo-brains), we're still a ways off from a high-functioning cognitive unit. Still, the potential for these robo-brains is intense. The ultimate hope is that these machines will not require programming but will instead learn from experience, develop something similar to what we experience as "memory" instead of just RAM, and eventually learn to make and test hypotheses and find correlations. Basically, they'll be able to learn, in a very human sense of the word. All in all, it seems very interesting, and I'm sure that the perfection of these kinds of computers could do wonders for scientific research, human-computer interfacing, and a bunch of other things no one would even be able to predict. Still, I'm starting to feel like I'm living in the prequel to The Matrix.

(via VentureBeat)
