Stephen Downes

Knowledge, Learning, Community

So we've seen this argument before: "many GenAI tools are capable of mimicking human responses to a wide range of questions and, therefore, passing the Turing Test.... However, it remains incorrect to suggest that any GenAI tool is intelligent – as they lack any understanding of either the prompt or what they produce in response. In other words, GenAI cannot generate anything that it hasn't ingested; the production of text is solely based on statistical probability." But it's true of everything and everyone, students included, that they "cannot generate anything that [they haven't] ingested". The question is how they configure and reconfigure what they've experienced in a way that counts as 'understanding' it. And here's the counter-question: how do we know our students aren't simply mimicking human responses to a wide range of questions?




Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: Feb 23, 2024 09:22 a.m.

Creative Commons License.
