At the intersection of neuroscience and artificial intelligence there is a popular trope that marvels at how the human brain accomplishes intelligent functions on just 20 watts of power, whereas current implementations of AI require the output of an electric power plant. If only we could emulate the functions of the brain in technology, we might save many log units of the power needed for artificially intelligent systems. Here I revisit this argument.
When people wax poetic about the marvelous faculties of the human brain, they typically imagine Einstein, Mozart, and Tiger Woods all wrapped into one person. However, these are different brains; that combination of skills has never appeared in a single brain. Moreover, each of these individual brains is a one-in-a-billion outlier, and it is difficult to learn anything useful from such extreme examples that cannot be replicated.
Meanwhile, most real human brains, most of the time, just vegetate. People can barely make it through the day without stumbling and tripping. That’s because while they cross the street, they have to look into a tiny window that they pulled from their pocket. The tiny window has flitting images on it. Those images make the human sad or happy. They determine what the human thinks and talks about the rest of the day. They can change the human’s actions, in rare cases even leading humans to kill each other.
Let’s review some of the claims for human intelligence and how they might relate to machine intelligence:
- We can do math. That’s just not true. The vast majority of humans, when prompted on this topic, will say, “Well, I’ve never been good at math.” Even the ethereal brains of Math Olympiad winners are now getting serious competition from AI systems.
- We have language. Well, that claim to uniqueness is over. Large language models can now communicate with people, or with each other, entirely in human language. Optionally, they can translate from one human language to another along the way. Of course, if they use their own digital language, they can communicate much more effectively.
- We can drive a car and load a dishwasher. The robo-taxis cruising all over San Francisco tell us that machine driving is a solved problem. Curiously, the robots imitate human behaviors that were not designed into them, such as honking at each other while fighting for a space in the parking lot. Regarding loading the dishwasher, this problem is still unsolved, mostly because it hasn’t been a priority. As a rule, the people who make decisions on technology development are not the people who load and unload dishwashers.
Which gets us back to the claim that the human brain can do all these things much more cheaply in terms of energy consumption. Is this really true? Imagine trying to replicate the functions of ChatGPT using human brains. I’m talking here about a ChatGPT model that has been fully trained and is now servicing requests for information in what has come to be called “inference mode”. Similarly, I’m allowing human brains that have been fully trained and educated and are similarly answering queries from other humans.
ChatGPT was trained on many millions of books in addition to some twaddle from the internet. As a result it acts like an expert in a million different fields of human endeavor: the sciences, technology, arts, history, culture. Today it operates at about the level of a PhD student, and the quality of its output has been steadily improving. Just like human PhD students, it occasionally makes amusing mistakes.
What would be needed to implement this core function of AI with human brains? Obviously, you would need many human brains, because each of us is a PhD-level expert in at best one tiny subfield of knowledge. So let’s suppose that you need 1 million human experts to cover the full range of topics. In addition, you probably want a management level, to triage the requests for information and find the correct human expert to answer them. Also, you need some services, like fast-food joints and cleaners, to keep these humans alive. Conservatively speaking, replicating ChatGPT with humans would require a medium-sized city of about a million people.
How much energy does that city consume? You might be tempted to count just the 20 watts of power that each brain uses, but that would be a mistake. The brain sits inside a body and relies on that body for everything, including the communication between brains via language. So we have to support the whole person, not just the brain. If we build that city in the United States, each person accounts for about 10 kilowatts of primary power. Running ChatGPT on human brains would therefore require about 10 gigawatts. By comparison, the power consumption of ChatGPT inference is estimated at ~10 megawatts, a factor of 1000 smaller. And we haven’t even considered the speed of the service: ChatGPT handles about 1 billion queries a day, which would be a tough challenge for a system made of 1 million human brains.
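The arithmetic above can be condensed into a few lines of Python. This is purely a back-of-the-envelope sketch; every input is one of the essay’s round estimates, not a measurement:

```python
# Back-of-the-envelope comparison using the essay's round numbers.
# Every figure below is an assumption taken from the text, not a measurement.

NUM_EXPERTS = 1_000_000            # human experts needed to cover all topics
POWER_PER_PERSON_W = 10_000        # ~10 kW of primary power per US resident
CHATGPT_INFERENCE_W = 10_000_000   # ~10 MW estimated for ChatGPT inference
QUERIES_PER_DAY = 1_000_000_000    # ~1 billion queries per day

# Total power for the human-brain "ChatGPT city"
human_city_power_w = NUM_EXPERTS * POWER_PER_PERSON_W   # 10 GW

# How much more power the human version needs than the silicon one
ratio = human_city_power_w / CHATGPT_INFERENCE_W        # factor of 1000

# Workload per expert if the city had to match the query volume
queries_per_expert_per_day = QUERIES_PER_DAY / NUM_EXPERTS

print(f"Human city: {human_city_power_w / 1e9:.0f} GW")
print(f"ChatGPT inference: {CHATGPT_INFERENCE_W / 1e6:.0f} MW")
print(f"Power ratio: {ratio:.0f}x")
print(f"Queries per expert per day: {queries_per_expert_per_day:.0f}")
```

The last figure makes the throughput point concrete: even ignoring power entirely, each human expert would have to answer a query roughly every minute and a half, around the clock.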
To conclude, the human brain really is not a useful yardstick for artificial intelligence. With a few remarkable exceptions, human intelligence is nothing to write home about. And present-day AI systems already exceed human performance in many domains by many log units, and at a fraction of the power consumption. Looking to the future, of course it is worth contemplating new implementations of artificial intelligence. Remember that the first flying machines we invented were balloons, and that’s not the method we use for flying today. The first large language models are only a few years old, and it stands to reason that entirely different engineering solutions will emerge down the line. But regardless of the choice of technology, the future is one where machines run the world with an affordable power supply.
A small fraction of that power will go to keeping the humans entertained with flitting images on the tiny window.