A network of artificial neurons learns to use human language

Unattributed, NeuroScientistNews, Nov 15, 2015
Commentary by Stephen Downes

One of the truisms always repeated by cognitivists and proponents of the physical symbol system hypothesis is that a natural system, like a neural network, cannot learn a language without prior encoding. This is why people like Chomsky and Fodor assert that we have innate linguistic structures encoded at birth, and that (therefore) learning is a matter of rule formation and the construction of models and representations. I have never believed this. Gradually, over time, the evidence has been piling up in the opposite direction. Specifically, we are learning that very simple neural networks can do very complex things, like learn languages. This journal article is a case in point. The research describes a system "made up of two million interconnected artificial neurons, able to learn to communicate using human language starting from a state of 'tabula rasa', only through communication with a human interlocutor."


Copyright 2015 Stephen Downes ~ Contact: stephen@downes.ca