A neural conversation model

Adrian Colyer, The Morning Paper, Jun 29, 2016
Commentary by Stephen Downes
[Image: neural-conversation-fig-1.png]

One of the key questions in learning and technology, from my perspective, is whether a neural network needs domain knowledge in order to function effectively. This article summarizes a paper describing an effort to build an effective conversational tool that operates without domain knowledge: "a bot that is trained on conversational data, and only conversational data: no programmed understanding of the domain at all, just lots and lots of sample conversations." As the examples show, "The surprising thing is just how well it works." It is far from reliable, though, and the author concludes that "any real service is going to need some more complex logic wrapped around it."
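To make the "conversational data only" idea concrete, here is a toy sketch in Python. Note the hedge: the paper itself uses a sequence-to-sequence LSTM trained on millions of conversation pairs; the trivial bag-of-words retrieval below is only a stand-in, meant to show that the system's entire "knowledge" is stored example conversations, with no programmed understanding of the domain.

```python
# Toy stand-in for the paper's seq2seq model (NOT the actual method):
# "training" just stores (prompt, response) pairs, and responding picks
# the stored prompt with the greatest word overlap. The point is that
# everything the bot knows comes from conversational data alone.

def train(pairs):
    """Store each (prompt, response) pair, with prompts as word sets."""
    return [(set(prompt.lower().split()), response) for prompt, response in pairs]

def respond(model, prompt):
    """Return the response whose stored prompt shares the most words with the input."""
    words = set(prompt.lower().split())
    _, response = max(model, key=lambda pair: len(pair[0] & words))
    return response

# Hypothetical sample conversations, purely for illustration.
conversations = [
    ("hello", "hi there"),
    ("what is your name", "i am a bot"),
    ("how are you", "fine thanks"),
]

model = train(conversations)
print(respond(model, "hello bot"))  # -> "hi there"
```

Even this toy version shows why wrapping logic around such a system is necessary: when no stored conversation resembles the input, it still returns its best (possibly nonsensical) match rather than admitting ignorance.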

Why does this question matter? The answer is complex, but in a nutshell: if learning requires domain knowledge, then it requires memorization; by contrast, if learning can be accomplished without domain knowledge, then it can be accomplished by practice alone, without memorization. You might say, "So who cares? Just memorize some stuff." You could, but memorization makes it much harder for learners to correct memorized material that turns out to be wrong, and leaves them less able to learn on their own or to think critically. Their knowledge comes to rest on a pre-constructed model or representation of the world rather than on experience or evidence. So if you can get to the same place without rote memorization, that would be preferable.


Copyright 2015 Stephen Downes ~ Contact: stephen@downes.ca