Stephen Downes

Knowledge, Learning, Community

This is a good article describing the principle of 'back-propagation' in some detail. This is one of the major algorithms used to train neural networks (we've mentioned it here a lot over the years). The simple explanation is that back-propagation is the process of correcting outputs in response to feedback. But the trickier part is how this happens when we're looking at a neural network with multiple layers (so-called 'deep' learning). Darren Broemmer could go into more detail and describe the mathematics of it, but he doesn't, and the article doesn't really suffer for it. He does look at some alternatives to back-propagation around the edges, and considers some misconceptions, including the largest question, which is whether the human brain itself uses back-propagation (answer: probably not, though it needs to solve similar challenges).
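For readers who want to see the idea concretely: here is a minimal sketch (not from the article, and with illustrative names and sizes) of back-propagation in a tiny two-layer network learning XOR. The forward pass computes outputs, the error at the output is then pushed backward through each layer to get gradients, and the weights are nudged by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR dataset: inputs and target outputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: 2 inputs -> 8 hidden units -> 1 output
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
first_loss = None
for step in range(10000):
    # Forward pass: compute the network's current outputs
    h = sigmoid(X @ W1 + b1)        # hidden layer activations
    out = sigmoid(h @ W2 + b2)      # output layer activations

    if step == 0:
        first_loss = np.mean((out - y) ** 2)

    # Backward pass: propagate the output error back, layer by layer.
    # d_out is the error signal at the output; d_h is that error
    # pushed backward through W2 to the hidden layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

loss = np.mean((out - y) ** 2)
```

The key point the article makes is visible in the `d_h` line: the deeper layer's error is computed from the shallower layer's error, which is what lets feedback reach every layer of a 'deep' network.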



Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2026
Last Updated: Mar 02, 2026 09:12 a.m.

Creative Commons License.