
Did these researchers just create an autistic computer program?

Artificial neural networks are advancing at a lightning pace, and some scientists are asking: What might we learn if we make them wrong?
By Graham Templeton

Last month, it was revealed that Google's research in artificial neural networks (ANNs) was occasionally producing some truly weird images -- and some users noticed right away what a striking resemblance these visuals bore to the ones people report after taking hallucinogenic drugs. It turns out the reason the ANNs "tripped out" is the same reason our brains do: with a reduced ability to judge the actual meaning of visuals as they're processed (either because you're high, or because you're an experimental computer program), pattern recognition algorithms flail about.

The algorithms follow the simplest route from basic shapes to a guess at the objects those shapes represent, often getting off track without meaningful direction from the higher brain. Without some wisdom to go with the raw intellect of the computer, every roundish shape could, over many iterations, get categorized as an eye. And without subjective information about the context of a scene, otherwise totally functional algorithms can slowly turn clouds into creepy mutants on unicycles.

This example shows how the computational metaphor of an artificial neural network can grant basic insight into the workings of the brain -- but scientific insight? ANNs are just now reaching the level of sophistication where scientists might be able to use them as a real tool, allowing them to actually predict how the brain will react to changes in its structure. Now, an amazing new study from the Baylor College of Medicine claims to have done just that. When neuroscientists Ari Rosenberg and Jaclyn Sky Patterson simulated one theorized cause of autism in an artificial neural network, that simulation began exhibiting recognizably autism-like behavior.

Google has talked a bit about its trippy neural network research, but the search giant's image recognition process is not designed to produce scientific insight. On the other hand, the recent Baylor study managed to generate its autism-like results by changing just a single parameter in a direct simulation of a portion of the visual cortex. It's an elegant, if unproven, experimental design. Why such a small change worked, or seemed to, is wrapped up in the structure of the brain, and a possible cause of one of medicine's most complex disorders.

Autism as emergent property

The theory seems almost too simple to be true. Autism is one of the most complex health issues studied today, an unhelpful knot of genetics, life history, behavioral analysis, and widely varying cultural standards. Yet one theory of autism claims that many of the disorder's most characteristic symptoms could be the result of just a single, chemically induced modification: autistic brains may simply be too noisy.

Couched in the polysyllabic language of biochemistry, this refers to the concept of divisive normalization (DN), a measure of the extent to which the activity of any one neuron is inhibited by the activity of the overall population of neurons around it. In "normal" brains, neurons reduce the urgency of their firing when surrounded by many other firing neurons -- and several years of investigation seem to suggest that by doing so, they save our conscious minds from being confused and overwhelmed in the way that autistic people often report.
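For the mathematically inclined, the core idea fits in a few lines of code. The sketch below uses the standard textbook form of divisive normalization; the function name, parameters (sigma, n) and example values are illustrative stand-ins, not the Baylor team's actual model.

    import numpy as np

    def divisive_normalization(drives, sigma=1.0, n=2.0):
        # Each neuron's raw drive is divided by the pooled activity of the
        # whole population plus a constant. A larger sigma means the pool
        # suppresses any one neuron less -- a "noisier," less normalized population.
        drives = np.asarray(drives, dtype=float)
        pooled = np.sum(drives ** n)
        return drives ** n / (sigma ** n + pooled)

    # The same neuron firing alone, then surrounded by busy neighbors:
    print(divisive_normalization([10.0, 0.0, 0.0]))  # lone neuron keeps most of its drive
    print(divisive_normalization([10.0, 8.0, 9.0]))  # same neuron, now damped by the crowd

In the second call the first neuron's output drops by more than half, purely because its neighbors are also active -- that collective volume control is what the theory says is weakened in autism.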

Divisive normalization (DN) may also be responsible for helping us make sense of multi-sensory input on the same stimulus.

Divisive normalization is considered a "canonical computation," meaning that it's found in multiple brain regions, and multiple species -- this simple regulatory scheme seems to be fundamentally important to complex brain systems. So, as soon as cloning and gene editing technology allowed it, scientists turned to animal models, asking a very simple question: if they could lower divisive normalization, could they create an autistic mouse?

Studies seem to indicate that the answer is, at least superficially, yes. When researchers turned off genes critical to GABA (the primary inhibitory neurotransmitter), the mice exhibited what scientists have deemed to be autism-like behaviors: they avoided social contact with other mice; engaged in odd, repetitive actions; displayed worsened spatial learning; and even seemed to acquire a mild fear of open spaces. Reintroducing GABA, and allowing neurons to lower their own volume in response to an overwhelming cacophony, resulted in markedly less noticeable autism-like behavior.

There is some evidence that autistic children develop more neural connections than non-autistic children, as visualized here.

The implication was clear: though it seemed incredible, the results suggested that many of autism's varied symptoms could all be an emergent property of a single low-level computational irregularity in the brain.

But mice are, unfortunately, mice -- who knows why they do what they do? And GABA has a multitude of functions beyond its role in divisive normalization. The science of autism was progressing quickly down several other, complementary paths, and had identified a number of other possible neural explanations for the disorder. The strongest evidence in favor of the DN theory of autism continued to rely on human observational studies, and on animal behavior of questionable applicability.

Since genetic manipulation of human test subjects was of course out of the question, that situation seemed unlikely to change. After all, it's not as though you could just build a human brain from scratch...

Artificial neural networks

Over the past five years, artificial neural networks have come a long way. They began mostly as a curiosity in computing science, then in biology -- ambitious attempts to model the overall organization of the brain by using software to simulate individual neurons and the connections between them. They provided some nice demonstrations of how a network of exceedingly simple programmed actors (neurons) can work together to solve complex problems quickly. The primary visual cortex, for example, is physically structured to sift incoming visual information for basic things like movement, patterns, and the outlines of static objects, and it can do this with astonishing efficiency relative to a digital computer.

A highly simplified diagram of ANN organization.

ANNs describe brains as logical pachinko machines in which stimuli fall down through various weighted statistical paths determined by their starting attributes and the programming of each neuron they encounter. What this means is that just one tiny adjustment to the behavior of all neurons can have an enormous cumulative effect on the ultimate fate of the data being processed -- just as we find in biological neural networks (brains). In brains, such a behavioral adjustment for neurons likely comes from an altered surface protein, while in artificial neural networks it comes from direct adjustments to a numerical parameter.

One such number might control the strength of neuron-to-neuron signaling, collapsing all the nuance of synaptic function down to a single quantity. Another might dictate the tendency of neurons to become non-responsive after long periods of constant stimulation, keeping them from being over-excited. These are computational metaphors for much more complex biological systems, and they can, in the aggregate, mirror or even predict some aspects of neural function.
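As a rough illustration of how a single number can retune an entire simulated population (a sketch, not the study's code -- the names gain and adaptation here are invented placeholders for the kinds of parameters described above):

    import numpy as np

    rng = np.random.default_rng(0)

    def layer_response(inputs, weights, gain=1.0, adaptation=0.0):
        # "gain" stands in for neuron-to-neuron signaling strength; "adaptation"
        # for the tendency to go quiet after sustained input. Each is one number
        # applied to every neuron in the layer.
        drive = weights @ inputs            # the weighted "pachinko" paths
        drive = gain * drive - adaptation   # one global tweak per mechanism
        return np.maximum(drive, 0.0)       # simple rectifying activation

    inputs = rng.random(5)
    weights = rng.random((3, 5))
    print(layer_response(inputs, weights, gain=1.0))
    print(layer_response(inputs, weights, gain=0.7))  # one small change, every output shifts

Nudge one parameter and every downstream response moves with it -- the artificial analogue of a single altered surface protein changing the behavior of billions of neurons at once.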

This attempt by a Google ANN to understand a complex frame as simply as possible led to some of the same results human painters arrive at when trying to do the same thing.

But for research purposes, simply simulating the brain isn't enough. Artificial neural networks could offer a chance to model not only a typical human brain, but ones with crucially important experimental changes built in. These changes would always be impossible in human testing, but no trouble at all in a computer model. Scientists were beginning to ask themselves: what might happen if an ANN's mind went wobbly?

A neural network with autism

Ari Rosenberg and colleagues stepped into this situation, and decided to synthesize the two lines of work.

They knew that autism had several recognizable effects on basic visual processing, and that the primary visual cortex (called "V1") was, at the time, one of the best areas of the brain for modeling with an ANN. And they knew that because it could be represented with just a single changed parameter, the theory of divisive normalization offered a possible bridge between autism's highly subjective effects and the numerical operations of an ANN.

They had a working computer model to use, a relevant change to make to that model, and a prediction about the effect that change should have -- in other words, they had the makings of an experiment.

"Even in very simple visual tasks... you find altered behavior in individuals with autism," Rosenberg said in a recent phone interview. "So, we built a very low level neural network model of the primary visual cortex... and then we just started playing around with the parameters."

The first thing they tested was complex visual processing. When presented with dynamic "gratings" of sinusoidal lines, autistic human beings consistently outperform control subjects by correctly identifying the direction of motion more quickly, and increasing visual contrast greatly enhances this advantage. As the size of the lines increases, everyone's performance gets worse -- but autistic people's scores remain higher than those of their "wild-type" counterparts.
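For readers wondering what a "dynamic grating of sinusoidal lines" actually is: it's a striped pattern whose brightness varies as a sine wave and which drifts in some direction over time. Here is a rough sketch of how such a stimulus is typically generated; the parameter names and values are illustrative, not taken from the study.

    import numpy as np

    def drifting_grating(size=64, spatial_freq=0.1, contrast=1.0,
                         direction=0.0, speed=0.05, n_frames=10):
        # spatial_freq (cycles per pixel) sets how fine the stripes are,
        # contrast scales their amplitude, direction (radians) sets the axis
        # of drift, and speed sets how far the pattern shifts each frame.
        y, x = np.mgrid[0:size, 0:size]
        axis = x * np.cos(direction) + y * np.sin(direction)
        return np.stack([
            contrast * np.sin(2 * np.pi * (spatial_freq * axis - speed * t))
            for t in range(n_frames)
        ])

    frames = drifting_grating(contrast=0.8, direction=np.pi / 4)
    print(frames.shape)  # (10, 64, 64): ten frames of an oblique, high-contrast grating

The experimental knobs -- contrast and stripe size -- are exactly the ones the human and ANN results below are plotted against.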

The top two graphs show the grating test results of human trials (autistic participants in blue). The bottom graphs show the ANN results.

Rosenberg's two artificial neural networks showed the same general trends. The autism model, with divisive normalization tuned down, consistently outperformed the "healthy" version, and the gap between the two models tracked the gap seen between human test subjects.

Next, they tested "tunnel vision" -- the observed tendency of autistic people to be less attentive to visual stimuli occurring far from the current object of their attention. The team presented their simulated V1 visual cortices with the same sinusoidal gratings as before, but this time placed them at different distances from a locked center of attention. The results were broadly similar to those collected by prior researchers testing humans: the autism ANN was far less interested in stimuli far from its center of attention.

Finally, the team tested their ANNs against a known relationship between statistical inference and autism -- that autistic people don't tend to take prior knowledge about the world into account as efficiently as non-autistic individuals. They accomplished this using the so-called "oblique effect," which describes the fact that people are better at identifying horizontal and vertical lines than ones oriented at an oblique angle.

Performance on the tunnel vision test. Top graphs from humans, bottom graphs from the ANNs.

Telling a normally functioning neural network to expect oblique lines -- essentially giving it some experience with line orientation in its "surroundings" -- resulted in much better performance. The autism model improved far less, gaining a much weaker advantage than the normal model from the "priors" the researchers provided.
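A loose way to picture the "priors" idea (an illustrative toy, not the study's mathematics): treat the final orientation judgment as a noisy measurement pulled toward an expected orientation, with the strength of that pull standing in for how heavily prior knowledge is weighted.

    def estimate_orientation(measured, prior_mean, prior_weight):
        # A weighted pull of a noisy reading toward the expected orientation.
        # Smaller prior_weight models a weaker reliance on prior knowledge.
        return prior_weight * prior_mean + (1 - prior_weight) * measured

    noisy_reading = 78.0  # one trial's reading of a line that is really vertical (90 degrees)
    print(estimate_orientation(noisy_reading, prior_mean=90.0, prior_weight=0.5))  # 84.0
    print(estimate_orientation(noisy_reading, prior_mean=90.0, prior_weight=0.1))  # 79.2

With a strong prior the estimate lands much closer to the true vertical; with a weak prior, being told what to expect barely helps -- the same qualitative pattern the autism model showed.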

That's three tests, and three impressive confirmations of correlation. ANNs with divisive normalization turned down seem to behave much like mice with GABA production turned down, and human beings with diagnosed autism. The question is, what does that result actually mean?

The trouble with models

As with seeming agoraphobia in mice, the output of a neural network is just a metaphor for the much more complex behavior of a real human brain -- but the results are undeniably provocative. If a simulated visual cortex is affected in autism-like ways by autism-like organization, as it seems to be, then in the future perhaps a simulated frontal lobe could hint at the causes of autism's higher cognitive effects, as well.


For now, this study seems to provide powerful support for the divisive normalization theory of autism. An ANN can't prove a biological theory all on its own, but Rosenberg said that this sort of software modeling could allow a "synergistic process" in research. Insights gained from work with patients could update ANN models to be more accurate, which would allow the ANNs to motivate more insightful biological experiments. You can't publish the truth of a theory just because an ANN reacted a certain way -- but you could probably justify a request for funding.

To study computational changes in brain function, physical experiments must change genes and proteins, then hope that these changes adjust the computational behavior in the intended way -- without introducing some other confounding change along the way. ANNs can simply make computational changes directly, and leave it to biologists to later reverse engineer a molecular path to that new behavior.

Divisive normalization occurs across all regions of the brain, yet this study only looked at its effects in the very first link in the chain of visual processing. The effects of decreased inhibition by the surrounding population, when applied over the whole brain, could possibly explain an even wider variety of autism symptoms -- but any such statement will require more work, with real patients.

In the end, this study may be more interesting as a proof of concept than as a contribution to autism research -- the theory of divisive normalization existed long before this team got to it. But ANNs provide a potentially novel method of generating medical hypotheses. They can search very quickly through enormous possibility spaces, simulating oddball modifications to human brain architecture that no ethical scientist could ever investigate in the real world.

It's an exciting time for brain research.
