Friday, 4 October 2019
Karl Friston: toward a grand unifying theory of life and cognition?
This week I’d like to tell you about a fascinating piece of reporting by journalist Shaun Raviv, in the November 13, 2018 issue of Wired magazine. It’s about one of the most important figures in the cognitive sciences today: Karl Friston. I call Raviv’s piece reporting rather than an interview because he spent more than a week in London in the summer of 2018 researching it. Its title, “The Genius Neuroscientist Who Might Hold the Key to True AI”, might seem sensationalistic, since we all know what a buzzword artificial intelligence has become. But in fact, this title understates the case. As Raviv puts it, “Friston believes he has identified nothing less than the organizing principle of all life, and all intelligence as well.”
What Friston offers is the kind of (very) grand unifying theory that doesn’t come along in science every day. But who is this guy with such big ideas? Well, according to the Royal Society, which elected him a Fellow in 2006, the methods that he has developed for processing brain-imaging data are used in more than 90% of the studies published in this field. For example, Friston’s statistical parametric mapping method, which he developed in 1990, provides a means of compensating for the differences among people’s brains so that their activity can be compared. In other words, this method provides the basis for all of the brain-imaging protocols that scientists now use and take for granted. But before Friston, the principle involved was far from obvious. When you consider all of the fields of cognitive science that began to take off with the advent of brain imaging in the 1990s, it’s no surprise that, according to calculations by the Allen Institute for Artificial Intelligence, Friston is the most frequently cited neuroscientist in the world today, or that Clarivate Analytics, which has successfully predicted the recipients of 46 Nobel Prizes in the sciences over the past 20 years, has ranked Friston among the three most likely future winners in the physiology or medicine category.
So when a scientist of this stature proposes a grand unifying theory, the scientific community really has no choice but to debate it. And the scope of the debate is just as sweeping as the implications of Friston’s celebrated free energy principle, according to which all living beings are constantly striving, in each of their cells and in all of their behaviours, to minimize this “free energy”. Why? To escape from the famous second law of thermodynamics, according to which everything in the universe tends toward disorder and disorganization, in short, toward growing entropy. But according to Friston, living beings temporarily escape from this implacable law of physics by minimizing free energy. How? One way is by learning how to modify their internal models of the world in accordance with the differences that they observe between their predictions and what their senses tell them about what is actually going on in their environment. And it is the tremendous plasticity of the human nervous system that makes these changes in our internal models possible.
Alternatively, we can also act upon our environment to make it conform more closely to our internal models. This is what Friston calls active inference. In the example that Raviv offers, if you predict that you are touching your nose with your left index finger, but your senses tell you that your left arm is still hanging at your side, you can minimize your observed prediction error by raising that finger toward your nose.
In both cases—plasticity and active inference—you minimize the observed error in your prediction, which can be broadly equated with minimizing free energy. In other words, when you minimize the surprise caused by the discrepancy between your internal models and the outside world, you are minimizing free energy.
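To make these two routes concrete, here is a deliberately minimal sketch in Python. It is my own toy illustration, not anything taken from Friston’s papers: with a single scalar state and the usual Gaussian simplifications, free energy collapses to a simple squared prediction error, and “perception” and “action” are just two different ways of shrinking it. The function names (prediction_error, perceive, act) and the numbers are purely illustrative.

```python
def prediction_error(belief: float, sensation: float) -> float:
    """Squared gap between what the internal model predicts and what the senses report."""
    return (belief - sensation) ** 2

def perceive(belief: float, sensation: float, rate: float = 0.3) -> float:
    """Plasticity route: revise the internal model toward the sensory evidence."""
    return belief + rate * (sensation - belief)

def act(world: float, belief: float, rate: float = 0.3) -> float:
    """Active-inference route: change the world so that it conforms to the prediction."""
    return world + rate * (belief - world)

# Raviv's example: I predict that my finger is touching my nose (1.0),
# but my arm is actually hanging at my side (0.0).
belief, world = 1.0, 0.0

b, w = belief, world
errors_by_perception = []
for _ in range(5):
    b = perceive(b, w)            # the model yields to the evidence
    errors_by_perception.append(round(prediction_error(b, w), 3))

b, w = belief, world
errors_by_action = []
for _ in range(5):
    w = act(w, b)                 # the arm rises toward the nose
    errors_by_action.append(round(prediction_error(b, w), 3))

print("error, perception route:", errors_by_perception)  # shrinks toward 0
print("error, action route:    ", errors_by_action)      # also shrinks toward 0
```

Either way, the prediction error, and with it this simplified stand-in for free energy, keeps going down; the only difference is whether the model or the world does the moving.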
The free energy principle has wide-ranging theoretical and practical implications in a wide variety of fields, including the study of mental disorders. The hallucinations that some people with schizophrenia experience may thus be seen as the result of their giving too little weight to the evidence of their senses. They make prediction errors, then fail to correct their internal models sufficiently, so that they get caught up in their erroneous interpretations and become disconnected from the real world.
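Here again, a hedged toy sketch rather than a clinical model: the idea of “giving too little weight to the evidence of the senses” can be caricatured as a single weighting parameter on the belief update from the previous sketch. The name sensory_weight and the values 0.6 and 0.05 are my own illustrative choices; in the predictive-processing literature this weighting is usually discussed under the heading of “precision”.

```python
def revise(belief: float, sensation: float, sensory_weight: float) -> float:
    """Move the belief toward the sensation, in proportion to the trust placed in the senses."""
    return belief + sensory_weight * (sensation - belief)

sensation = 0.0                  # what the senses actually report
for weight in (0.6, 0.05):       # a healthy vs. a pathologically low weighting (illustrative values)
    belief = 1.0                 # an erroneous prior expectation
    for _ in range(10):
        belief = revise(belief, sensation, weight)
    print(f"sensory weight {weight}: belief after 10 updates = {belief:.3f}")

# With a weight of 0.6 the belief ends up near the evidence (about 0.000);
# with 0.05 it stays near the erroneous prior (about 0.599).
```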
But the profound implications of the free energy principle are not always so obvious. Some have even said that only Friston can really understand this principle, because you have to know ancient Greek to read Friston “in the original” (a joking reference to the many mathematical equations in his articles, with Greek letters symbolizing countless variables).
So, too much math, according to some people. Too many assumptions with too many “moving parts”, according to others. And ambitions so sweeping that they arouse considerable skepticism, especially among some anthropologists, who study human diversity, and some evolutionary biologists, who study the mechanisms of evolution. But other researchers in these same disciplines are trying to translate the classic questions of their fields into this new language of the free energy principle. That there is a debate, then, is beyond doubt.
And since one of my roles as a cognitive-science blogger is to draw attention to current debates in this vast field, I’d like to end this post by addressing a question that was the title of an article by Friston in the July 2018 issue of Nature Neuroscience: “Does predictive coding have a future?” Because many interesting thinkers whom I have been following for a long time are now taking an interest in this “strange inversion”, as Friston explained in the abstract of this article: a shift from the 20th-century model of the brain as a machine that extracts knowledge from sensations to a 21st-century model of the brain as a machine for making predictions, an organ that actively constructs explanations about what is going on in the outside world, far from its sensory receptors.
We can understand why such a shift might be hard to accept for some professors late in careers that they built around this 20th-century paradigm. But we can also understand why Deric Bownds, a retired neuroscience researcher with a great deal of curiosity who still follows developments in his field closely, confessed in his blog that he had been “seriously remiss in not paying more attention to a revolution in how we view our brains.” Especially since this revolution is entirely compatible with other recent major conceptual contributions to cognitive science, such as embodied cognition, the dynamic nature of living phenomena on various time scales, and the concept of affordance, all of which, when you think about it, share a focus on action.
We can also understand why Friston’s ideas can be so exciting to certain young researchers whose brains harbour a priori models of the world that are not yet too set in stone. I have already written about some of these researchers in this blog, including Anil Seth, Samuel Veissière, and Maxwell Ramstead of McGill University in Montreal, who is also cited at the end of Raviv’s article in Wired, along with Julie Pitt, head of machine-learning infrastructure at Netflix, who has also had her thinking transformed by Friston’s concept of active inference.