By Rob Mitchum | Photo by Mark Lopez/Argonne National Laboratory
If you want to know how a machine works, it helps to look inside. Crack open the case and see how it’s wired together; you might need an engineering degree, a microscope and a lot of time, but eventually you can puzzle out what makes any given device tick.
But can that same approach work for the most amazing machine we know—one capable of making complex calculations in a fraction of a second, while using less energy than a common light bulb?
Reverse engineering the human brain is one of the great scientific challenges of our time, and scientists at the University of Chicago and Argonne National Laboratory are combining new techniques in microscopy, neurobiology and computing to reveal the brain’s inner mechanisms in unprecedented detail.
Treating the brain as a machine is not a far-fetched metaphor. In the abstract, the brain is an electrochemical computer, operating on electrical impulses and chemical signals sent between cells. Though the individual pieces may be small, on the scale of mere nanometers, drawing the wiring diagram for this machinery is theoretically possible, and has been done for very simple organisms such as the roundworm C. elegans.
But the size and complexity of the human brain create far bigger challenges. Scientists estimate that the brain contains nearly 100 billion neurons, the basic type of brain cell. Each of those neurons makes tens of thousands of contacts with other cells, bringing the number of connections into the quadrillions (a quadrillion is a million billion).
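The scale is easier to grasp as back-of-envelope arithmetic. The sketch below simply multiplies the figures quoted above, taking “tens of thousands” as a round 10,000 per neuron:

```python
# Rough arithmetic for the figures above: ~100 billion neurons,
# each making tens of thousands of contacts (taken here as 10,000).
neurons = 100 * 10**9
contacts_per_neuron = 10_000
connections = neurons * contacts_per_neuron
print(f"{connections:,}")  # 1,000,000,000,000,000, i.e. a quadrillion (a million billion)
```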
A complete map of these connections—sometimes called the connectome—would be nothing less than the largest dataset ever created. But within that massive inventory could lie answers to some of the most elusive scientific questions: the fundamental rules of cognition, explanations for many mental illnesses, even the biological factors that separate humans from other animals.
“It’s a huge theory of neuroscience that all of our behaviors, all of our pathologies, all of our illnesses, all of the learning that we do, is all due to changes in the connections between brain cells,” said Narayanan “Bobby” Kasthuri, assistant professor of neurobiology at the University and neuroscience researcher at Argonne. “It’s probably the equivalent of the standard model in physics, but in neuroscience.”
‘Soft, squishy things’
Since the time of Hippocrates and Herophilus, scientists have placed the location of the mind, emotions and intelligence in the brain. For centuries, this theory was explored through anatomical dissection, as the early neuroscientists named and proposed functions for the various sections of this unusual organ. It wasn’t until the late 19th century that Camillo Golgi and Santiago Ramón y Cajal developed the methods to look deeper into the brain, using a silver stain to detect the long, stringy cells now known as neurons and their connections, called synapses.
Today, neuroanatomy involves the most powerful microscopes and computers on the planet. Viewing synapses, which are only nanometers in length, requires an electron microscope imaging a slice of brain thousands of times thinner than a sheet of paper. To map an entire human brain would require 300,000 of these images, and even reconstructing a small three-dimensional brain region from these snapshots requires roughly the same supercomputing power it takes to run an astronomy simulation of the universe.
Fortunately, both of these resources exist at Argonne, where, in 2015, Kasthuri was the first neuroscientist ever hired by the U.S. Department of Energy laboratory. Peter Littlewood, the former director of Argonne who brought him in, recognized that connectome research was going to be one of the great big data challenges of the coming decades, one that UChicago and Argonne were perfectly poised to tackle.
“All real advances in science are advances in technology,” said Littlewood, professor of physics at the University. “What we were doing at Argonne with X-rays and electron microscopy was going to produce a straightforward change in the way we could process data in high resolution. We just needed somebody crazy enough to imagine this was a real possibility and who also owned the technology and understanding to do it.”
Kasthuri brought with him automated methods he developed for efficiently mapping the brain. A diamond knife with an edge only five atoms thick cuts 50-nanometer-thin slices of human, mouse or even octopus brain, which float away on water to a conveyor belt that carries them sequentially beneath the gaze of an electron microscope.
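The geometry here checks out with quick arithmetic. Assuming a typical sheet of paper is about 0.1 millimeters thick (a standard figure, not one from the article), a 50-nanometer section is indeed thousands of times thinner than paper:

```python
paper_thickness_m = 0.1e-3   # assumed typical paper thickness: ~0.1 mm
slice_thickness_m = 50e-9    # 50-nanometer sections, as described above
ratio = paper_thickness_m / slice_thickness_m
print(ratio)  # ~2000: thousands of times thinner than paper
```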
“You look at Bobby’s setup, it’s like somebody slicing cheese at the deli,” said Michael Papka, SM’02, PhD’09, director of the Argonne Leadership Computing Facility and professor of computer science at Northern Illinois University. “It’s not the world that computer scientists normally work with, it’s soft squishy things. I find it a fascinating pipeline.”
That’s the easy part. The Theta supercomputer at Argonne clocks in at 11.69 petaflops—between 11,000 and 12,000 million million operations per second. It’s typically used to process particle physics data from the Large Hadron Collider at CERN or to run models of the universe’s expansion that aid the search for dark matter. Kasthuri’s data, Papka said, is beyond the capabilities of even this world-class machine; it has to be simplified, or downscaled, before it can be analyzed.
“Bobby talks about the number of neurons and number of galaxies, how complexity-wise they’re roughly the same,” Papka said. “Actually, the brain’s probably even more complex.”
But the close relationship between the University and Argonne provides a unique location to untangle this knot.
“At most other universities, I’d just have to give up this idea,” Kasthuri said. “Even a small part of a brain I could never map, because even 1 percent of a mouse brain is something like 1,000 terabytes of data. No other university in the world, I think, could conceivably handle that.”
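The quoted figure implies staggering totals. A minimal extrapolation from the 1 percent ≈ 1,000 terabytes number above (nothing beyond that figure is assumed):

```python
# Extrapolating only from the figure above: 1% of a mouse brain ~ 1,000 TB.
one_percent_tb = 1_000
full_mouse_tb = one_percent_tb * 100    # 100,000 TB for a whole mouse brain
full_mouse_pb = full_mouse_tb / 1_000   # converted to petabytes
print(full_mouse_pb)  # 100.0
```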
To reduce this mind-boggling complexity into more practical science, Kasthuri is starting (relatively) small. Where other high-profile connectome projects have focused on building a complete map of the human brain, Kasthuri is focusing first on comparisons: between young brains and old, between animal brains and human, between “normal” brains and the brains of people with mental disorders.
“I think the only way we’re going to understand the brain is by comparing it to other things,” he said. “As far as we know, a neuron in the mouse looks like a neuron in the human. The ion channels in a mouse neuron are the same; the genes are the same. We’re left with this idea that the difference between a human brain and a mouse brain is in the pattern of connections, the number of neurons and, therefore, the number of connections in those two brains.”
One approach is to compare a common segment of brain from two very different organisms, such as the mouse and the octopus. The largest invertebrate brain belongs to the octopus, and cephalopod species have been well studied with neuropsychological and genetic methods by scientists such as Clifton Ragsdale, professor of neurobiology at the University.
In a proposed project with Ragsdale, Kasthuri will map and compare the visual brain areas of the mouse and octopus—the latter an extremely visual species, yet one that views the world differently from most mammals, focusing on form instead of movement.
“If you’re going to apply connectomic techniques to a particular system in octopus, then you should pick something like vision,” Ragsdale said. “What’s striking about it is the eye looks vertebrate-like, but as soon as you hit the photoreceptors and the optic lobe, those are invertebrate structures. So we might get insight at a circuit level into how cephalopod mollusks carry out their visual processing, and knowing the key elements of the circuitry is essential to begin to have any chance of understanding how neural circuits in invertebrates and vertebrates underlie behavior.”
Another comparison, this one within species, could offer answers to a classic dilemma: Can you teach an old dog, or human, new tricks? Kasthuri, fascinated by the ability of children to easily acquire new skills or assimilate culturally compared to adults’ difficulty with the same tasks, wants to compare young and old brains.
Beyond the age curve of learning, such a study could also address fundamental questions about how the adult brain is built—assembled one connection at a time like a mosaic, or pruned from a surplus of neurons and connections like a topiary.
“There’s some deep trade-off in our brains between having a young brain capable of learning anything but not really being good at any of it, and having an adult brain being good at a few things, but having no capacity to learn,” Kasthuri said. “There has to be a physical basis to this phenomenon at some level, and I want to know what that is.”
Unlocking the possibilities
Extending the applications of connectome research to medicine may be a longer road. Though scientists have found evidence of neuroanatomical differences in people suffering from schizophrenia and behavioral disorders, the link remains controversial.
Instead, the near-term benefits of brain mapping will be to equip scientists studying more elemental links between brain and behavior with a deeper understanding of the organ’s complex mechanical structure.
“You won’t understand the brain with just the wiring diagram, but you also probably won’t understand the brain without that wiring diagram,” said John Maunsell, the Albert D. Lasker Professor of neurobiology and director of the Grossman Institute for Neuroscience, Quantitative Biology and Human Behavior. He is co-chair of a working group, assembled by the NIH Director’s Advisory Committee, to review progress on the U.S. BRAIN Initiative and advise on future directions.
“On the basic science side, there’s lots of different questions that haven’t been approachable, things I think are just sitting there waiting to be done as soon as we can get some really solid data on what’s changing synaptically,” Maunsell said.
Another potential application spills outside of neurobiology itself into the world of computer engineering. Computer scientists already take inspiration from the brain in designing both hardware and algorithms—one popular machine learning technique for making predictions from data, the “neural network,” strengthens and weakens connections between simple units in a fashion that echoes today’s understanding of how neuronal connections change.
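As a concrete illustration of that idea, here is a minimal, self-contained sketch of a single artificial neuron trained by gradient descent. The AND-gate task, learning rate and epoch count are all illustrative choices, not anything from the Argonne work:

```python
import math

def sigmoid(z):
    """Squash a weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, epochs=2000, lr=0.5):
    """Train one artificial neuron with logistic-regression-style updates.

    Each weight plays the role of a connection whose strength is nudged
    up or down after every example, loosely analogous to synapses
    strengthening and weakening.
    """
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            y = sigmoid(w[0] * x0 + w[1] * x1 + b)
            err = y - target  # cross-entropy gradient w.r.t. the weighted sum
            w[0] -= lr * err * x0
            w[1] -= lr * err * x1
            b -= lr * err
    return w, b

# Illustrative task: learn the AND function from four examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
for (x0, x1), target in data:
    assert round(sigmoid(w[0] * x0 + w[1] * x1 + b)) == target
```

Real neural networks stack thousands of such units in layers, but the core mechanism, repeatedly adjusting connection strengths in response to error, is the same.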
The connectome’s higher-resolution view of how the brain stores information and learns new functions could lead to even more advanced artificial intelligence approaches. And the incredible energy efficiency of the brain—running at only about 20 watts—could hold lessons for designing less power-hungry supercomputers.
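To put that 20-watt figure in machine terms, the sketch below compares it with a petascale system. The machine’s power draw is an assumed round figure of about 1 megawatt (the article does not give Theta’s actual consumption):

```python
brain_watts = 20               # figure quoted above
machine_flops = 11.69e15       # Theta: 11.69 petaflops, from the article
machine_watts_assumed = 1.0e6  # ASSUMPTION: ~1 MW, an order-of-magnitude guess
# How many 20-watt brains could run on the machine's (assumed) power budget?
print(machine_watts_assumed / brain_watts)  # 50000.0
```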
“Our brains can compute at an energy scale that’s impossible to currently imagine with these kinds of computers, and [people] can still do operations that these computers cannot do,” Kasthuri said. “There’s already an effort here—at Argonne and at the UChicago Institute for Molecular Engineering—to think about the next generation of computing hardware. If that next generation is modeled on the energy efficiency of our brains, that is going to be a game changer.”
As for the computers inside our skulls, mapping the connectome unlocks myriad new possibilities in science and engineering. Like the Human Genome Project, its potential is equaled only by its challenges; the path to the finish line is steep, but reaching it and understanding how the brain is wired would go a long way toward teaching us who we are.
“I have to categorically dismiss the claim that it’s beyond our understanding,” Maunsell said. “It is complex, it’s fantastically complex. But just because we’re not there yet doesn’t mean we’re not going to get there. And you know the whole history of science is just breaking down these walls one after the next.”
Originally published on June 25, 2018.