The race for the master algorithm has begun

This article was first published in the January 2016 issue of WIRED magazine.

Geoff Hinton believes that the way our brains learn can be captured in a single algorithm, and he's spent the last 40 years trying to discover it. A psychologist turned computer scientist who now splits his time between Google and the University of Toronto, he tells of coming home from work one day in a state of great excitement, exclaiming, "I did it! I've figured out how the brain works!" His daughter replied, "Oh Dad, not again!" But after many ups and downs, his quest is starting to pay off. Backpropagation, a brain-inspired learning algorithm that he co-invented, is taking the world by storm. Rebranded as "deep learning", it's used by Google, Facebook, Microsoft and Baidu for, among other things, understanding images and speech as well as choosing search results and ads to show you. DeepMind, the startup that Google paid £400 million for, is essentially a backpropagation shop.
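
To give a flavour of what backpropagation actually does, here is a minimal sketch in Python (it assumes only NumPy): a tiny two-layer network learns the XOR function by sending its prediction error backwards through the network and nudging every connection to shrink it. The network size, the data and the learning rate are toy choices for illustration, a world away from the systems running at Google or DeepMind.

    # Minimal sketch of backpropagation: a two-layer network learning XOR.
    # Illustrative only -- not how Google's or DeepMind's systems are built.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden connections
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output connections

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(20000):
        # Forward pass: compute predictions from the current connection strengths.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: propagate the error signal back through each layer.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # Nudge every connection a little downhill on the error.
        W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2))   # approaches [[0], [1], [1], [0]] after training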

Hinton is the leader of the connectionists, a school of thought in machine learning that takes its name from the belief that all our knowledge is encoded in the connections between neurons. The most optimistic of the connectionists think backpropagation is the "master algorithm": an algorithm capable of learning anything from data, and therefore of ultimately automating all knowledge discovery. But the more sober ones admit that backprop is still a far cry from the master algorithm, and other machine-learning camps have different ideas on how to get there.

Take the evolutionaries. Led by the University of Michigan's John Holland until his death in August 2015, they believe that evolution, not the brain, is the master algorithm. Backpropagation may be good for fine-tuning connections between neurons, but evolution created all life on Earth. In the 60s, Holland started simulating evolution on a computer, complete with populations of competing individuals, fitness scores and sexual reproduction between the fittest individuals. By the mid-90s, his followers had succeeded in evolving devices such as radio receivers and amplifiers from random piles of components, amassing an impressive collection of patents along the way. Now they're busy evolving real hardware robots, with the fittest in each generation programming 3D printers to produce the next generation. If the T-1000 from Terminator 2 ever comes to pass, this may well be how it happens.
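
The recipe is easy to caricature in code. The sketch below uses an invented task, evolving a 20-bit string to match a target, but it contains the ingredients Holland introduced: a population, a fitness score, selection of the fittest, crossover between parents and occasional mutation. Evolving a radio receiver or a robot runs on the same loop, only with far richer genomes and fitness tests.

    # Toy genetic algorithm in the spirit of Holland's work: a population of
    # bit strings evolves toward a target pattern through fitness-based
    # selection, sexual reproduction (crossover) and mutation. Illustrative only.
    import random

    random.seed(1)
    TARGET = [1] * 20                      # the "environment" rewards all-ones
    POP_SIZE, GENERATIONS, MUTATION = 50, 100, 0.02

    def fitness(individual):
        return sum(g == t for g, t in zip(individual, TARGET))

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]

    for gen in range(GENERATIONS):
        # Select the fitter half of the population as parents.
        parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
        # Breed the next generation: crossover between two parents, then mutate.
        children = []
        while len(children) < POP_SIZE:
            mum, dad = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))
            child = mum[:cut] + dad[cut:]
            child = [1 - g if random.random() < MUTATION else g for g in child]
            children.append(child)
        population = children

    print(max(fitness(ind) for ind in population))   # usually at or near 20, a perfect match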

But most machine-learning researchers believe that imitating biology, whether it's evolution or the brain, is at best a very circuitous path to the master algorithm. Better to solve the problem from first principles, using what we know from computer science, logic and statistics. For Bayesians, creating the master algorithm boils down to efficiently implementing Bayes's theorem, a mathematical rule for updating our degree of belief in a hypothesis when we see new evidence. A long-persecuted but now-ascendant minority in statistics, Bayesians maintain that if a learning algorithm is not consistent with Bayes's theorem, it must be wrong. But learning with hypotheses rich enough to put in a robot's brain was beyond their power until Judea Pearl, a professor at the University of California, Los Angeles, made a breakthrough for which he won the Turing Award, the Nobel Prize of computer science, in 2011. Pearl's Bayesian networks, as they're called, can encode probability distributions over millions of variables without breaking a sweat. Your first self-driving car will probably have one inside.
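
Bayes's theorem itself fits in a few lines. In the hypothetical example below, with made-up numbers, a doctor's prior belief that a patient has a rare disease is updated after a positive test result; a Bayesian network chains together huge numbers of such updates over interlinked variables.

    # Bayes's theorem in miniature: update the belief that a patient has a
    # disease after a positive test. All numbers are invented for illustration.
    def bayes_update(prior, likelihood, false_positive_rate):
        """P(hypothesis | evidence) = P(evidence | hypothesis) * P(hypothesis) / P(evidence)."""
        evidence = likelihood * prior + false_positive_rate * (1 - prior)
        return likelihood * prior / evidence

    prior = 0.01                 # 1% of patients have the disease
    likelihood = 0.9             # the test comes back positive for 90% of sick patients
    false_positive_rate = 0.05   # ...and for 5% of healthy ones

    posterior = bayes_update(prior, likelihood, false_positive_rate)
    print(round(posterior, 3))   # ~0.154: still unlikely, but about 15 times the prior belief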

Bayesian networks are still not powerful enough for the symbolists, the machine-learning camp closest to classic, knowledge-based AI. Symbolists such as Imperial College's Stephen Muggleton believe a truly general-purpose learning algorithm must be able to freely combine rules, and they discover those rules by filling the gaps in deductive reasoning: if I know that Socrates is human, what do I need to know to infer that he's mortal? That humans are mortal, of course - and now we can add this rule to our knowledge base. Eve, a robot scientist at the University of Manchester, works on this principle. Starting with basic knowledge of molecular biology, she formulates hypotheses, runs lab experiments to test them, and repeats, all without human help. In 2014, Eve discovered a new malaria drug.
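
Stripped down to a toy, the gap-filling step looks like the sketch below: given the fact that Socrates is human and the desired conclusion that he is mortal, the program proposes the missing general rule. This is a deliberate caricature, with invented data structures; a system like Eve works the same way in spirit, but over thousands of biological facts and hypotheses.

    # Toy "inverse deduction": given a known fact and a desired conclusion,
    # propose the general rule that fills the gap between them.
    facts = {("Socrates", "is", "human")}
    goal = ("Socrates", "is", "mortal")

    def induce_rule(facts, goal):
        """Find a fact about the goal's subject and generalise it into a rule."""
        subject, _, conclusion = goal
        for (s, _, category) in facts:
            if s == subject:
                # The missing premise: everything in `category` satisfies `conclusion`.
                return f"All {category}s are {conclusion}"
        return None

    print(induce_rule(facts, goal))   # -> "All humans are mortal"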

Where symbolists' algorithms emulate the thought processes of scientists, those of the analogisers, the fifth and last major machine-learning tribe, are more like a lazy child who doesn't study for an exam and then improvises the answers. Faced with a patient to diagnose, analogy-based algorithms find the patient in their files with the most similar symptoms and assume the same diagnosis. This may seem naive, but analogisers have a mathematical proof that this approach can learn anything given enough data.
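
The simplest analogiser is the nearest-neighbour algorithm, which is all the diagnosis example needs. The sketch below, with invented patient records, picks the past patient whose symptoms overlap most with the new one and copies the diagnosis; the proof the analogisers point to concerns learners of exactly this nearest-neighbour kind.

    # Nearest-neighbour diagnosis, the simplest analogy-based learner: copy the
    # diagnosis of the most similar past patient. Records and symptoms are made up.
    def diagnose(new_symptoms, records):
        """Return the diagnosis of the record sharing the most symptoms."""
        most_similar = max(records, key=lambda r: len(new_symptoms & r["symptoms"]))
        return most_similar["diagnosis"]

    records = [
        {"symptoms": {"fever", "cough", "aches"}, "diagnosis": "flu"},
        {"symptoms": {"sneezing", "runny nose"}, "diagnosis": "cold"},
        {"symptoms": {"rash", "fever"}, "diagnosis": "measles"},
    ]

    print(diagnose({"cough", "fever"}, records))   # -> "flu"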

That could be a lot of data, though, so they're working on more sophisticated forms of analogical reasoning to go the rest of the way. Douglas Hofstadter, cognitive scientist and author of Gödel, Escher, Bach, has no doubt that analogy is the master algorithm.

Who will win the race to invent the ultimate learning algorithm? Perhaps none of the five major camps has all the pieces of the puzzle, and what it will take is a combination of ideas from them all: a grand unified theory of machine learning, akin to the standard model of physics or the central dogma of biology. Or perhaps it will take an entirely new insight, which may come not from a professional researcher but from an outsider or a student in a dorm, like Geoff Hinton was when he started out on his quest.

Pedro Domingos is a computer scientist and author of The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (Allen Lane)

This article was originally published by WIRED UK