
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics. We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you:


What is the 'forward-forward' algorithm, Geoffrey Hinton's new AI technique?
Ben Dickson, TechTalks, 2022/12/20



As Ben Dickson outlines, Geoffrey Hinton has introduced a new type of neural network algorithm, the 'forward-forward algorithm'. "The idea behind the forward-forward algorithm is to replace the forward and backward passes of backpropagation with two forward passes. The two passes are similar, but they work on different data and have opposite objectives." The new algorithm can work in cases where back-propagation algorithms can't, and more closely models actual human neural networks. "Connections between different cortical areas do not mirror the bottom-up connections of backpropagation-based deep learning models. Instead, they go in loops, in which neural signals traverse several cortical layers and then return to earlier areas." Here's Hinton's paper.
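For readers who want to see the shape of the idea in code, here is a minimal, illustrative sketch of a layer-wise 'goodness' objective trained with two forward passes, written in numpy. The layer sizes, learning rate, threshold, and the logistic form of the loss are my own assumptions for illustration; consult Hinton's paper for the actual formulation and the way negative data is constructed there.

```python
# Minimal sketch of the forward-forward idea: each layer trains locally to
# raise "goodness" (sum of squared activations) on positive data and lower
# it on negative data. No backward pass crosses layer boundaries.
import numpy as np

rng = np.random.default_rng(0)

class FFLayer:
    def __init__(self, n_in, n_out, lr=0.03, threshold=2.0):
        self.W = rng.normal(0, 1 / np.sqrt(n_in), (n_in, n_out))
        self.lr, self.threshold = lr, threshold

    def forward(self, x):
        # Normalize so the previous layer's goodness can't leak through.
        x = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
        return np.maximum(x @ self.W, 0.0)          # ReLU activations

    def train_step(self, x_pos, x_neg):
        for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
            xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
            h = np.maximum(xn @ self.W, 0.0)
            goodness = (h ** 2).sum(axis=1)          # per-example goodness
            # Logistic loss pushes goodness above (pos) or below (neg) threshold.
            p = 1 / (1 + np.exp(-sign * (goodness - self.threshold)))
            dgood = -(1 - p) * sign                  # dLoss/d(goodness)
            grad = xn.T @ (h * (2 * dgood[:, None])) # dLoss/dW, by hand
            self.W -= self.lr * grad / x.shape[0]
        return self.forward(x_pos), self.forward(x_neg)

# Two forward passes, no backward pass: each layer trains on its own inputs.
layers = [FFLayer(20, 16), FFLayer(16, 16)]
x_pos = rng.normal(0, 1, (64, 20))                  # stand-in for real data
x_neg = rng.normal(0, 1, (64, 20))                  # stand-in for negative data
for _ in range(100):
    hp, hn = x_pos, x_neg
    for layer in layers:
        hp, hn = layer.train_step(hp, hn)
```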

Web: [Direct Link] [This Post]


How ChatGPT, other AI tools could change the way students learn
Salmaan Farooqui, Globe and Mail, 2022/12/20



Your ChatGPT update for today: Bryan Alexander hosts a discussion on what ChatGPT may mean for education. "AI could be a great motivator for educators to find more creative ways for students to learn," writes Salmaan Farooqui in his article. "It used to be that education was about getting information to your fingertips. We have the opposite problem: there's too much information," he writes, quoting OISE chair Earl Woodruff. Similar views are expressed by Vitomir Kovanovic in Academic Matters. As Dave White says, having technology redefine learning isn't a problem. More on academic ChatGPT from Jennifer Sandlin. Also, this image explaining what AI isn't copying, exactly, via Stable Horde. Artists stage a mass protest and fight back by feeding corporate images into the AI, with Mickey Mouse results. Finally, Wilfred Rubens looks at the importance of a framework for AI ethics.

Web: [Direct Link] [This Post]


Vocabulary Practice to Make Your Head Spin — Learning in Hand with Tony Vincent
Tony Vincent, Learning in Hand, 2022/12/20



This is a pretty small thing, but it's a nice example of how new technology can shape everyday practices in teaching. Tony Vincent outlines a 'vocabulary spinner': the words are pre-loaded, students spin the wheel to select words, and then they have ChatGPT incorporate the words into a sentence. To be clear, this is only one of a sequence of activities to teach vocabulary; it doesn't replace other activities, it enhances them.
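As a rough sketch of the workflow (the word list, number of words drawn, and prompt wording here are hypothetical, not Vincent's), the whole activity amounts to a random draw from a pre-loaded list followed by a prompt handed to ChatGPT:

```python
# Hypothetical sketch of a 'vocabulary spinner': draw random words from a
# pre-loaded list, then build a prompt a student could paste into ChatGPT.
import random

vocabulary = ["photosynthesis", "habitat", "adaptation", "predator", "nutrient"]

def spin(words, k=3):
    """Pick k random words, as the classroom spinner would."""
    return random.sample(words, k)

selected = spin(vocabulary)
prompt = (
    "Write one clear sentence a fifth grader could understand that correctly "
    f"uses all of these words: {', '.join(selected)}."
)
print(prompt)
```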

Web: [Direct Link] [This Post]


Is the empirical research we have the research we can trust? A review of distance education journal publications in 2021
Yiwei Peng, Junhong Xiao, Asian Journal of Distance Education, 2022/12/20



The answer that seems to emerge from this study is a pretty emphatic "no". I would have consulted a wider selection of distance education (DE) journals, but I imagine the result would have been the same, if not more so: "Lack of methodological rigor has long plagued the field of DE. Our research indicates that the situation continues to be less ideal, to different extents in different aspects, especially in terms of research approach and design, sampling, data source, and possible weaknesses such as limitations, researcher bias and ethical concerns." I have often commented in this newsletter about overgeneralizations being made on the basis of tiny samples and flawed research. But there has been no real attempt over the years to address this, due I think to a variety of factors, including the economics of publishing, lack of agreement on desired outcomes, the politicization of education, the influence of commercial media and technology companies, and the pervasive influence in our field of theory over observation.

Web: [Direct Link] [This Post]


Research shows...
Dave Snowden, 2022/12/20



I have to agree with Dave Snowden here: "Human systems have too many factors and too many abstractions to be studied in the same way as ant behaviour or similar. Many of the classic experiments in psychology have not survived attempts at replication in consequence. You can't reduce the variables to achieve causal correlations in a complex adaptive system... So if someone says 'research shows', a good starting assumption is to assume they are selling snake oil, and they may or may not be duplicitous in doing so. A more interesting approach is to start with the research, referenced, and then explore the consequences." So many writers in our field should take note.

Web: [Direct Link] [This Post]


Causal Inference and Bias in Learning Analytics: A Primer on Pitfalls Using Directed Acyclic Graphs
Joshua Weidlich, Dragan Gašević, Hendrik Drachsler, Journal of Learning Analytics, 2022/12/20



There's some really good thinking in this article (17 page PDF) that will reward the careful reader. In a nutshell: most social science research based on randomized controlled trials (RCT) is not sufficient to generate what we need to draw conclusions about cause and effect: counterfactuals. This article uses directed acyclic graphs (DAGs) to highlight three "pitfalls" in such research: confounding bias, overcontrol bias, and collider bias. A DAG connects objects in a specific way: it is 'directed', meaning the connection only goes one way (past to future, for example), and it is 'acyclic', which means that the connections never form a loop. DAGs, therefore, are appropriate for representing causal sequences. The authors describe a mechanism to reduce these biases in learning analytics (LA) research and argue that "causal reasoning with DAGs provides a valuable non-technical tool to incorporate knowledge from different sources - for example non-research stakeholders or researchers from different disciplines - to arrive at actionable insights for substantive questions." Also on ResearchGate.
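As a concrete illustration of the first pitfall (my own toy example, not taken from the paper), here is a short numpy simulation of confounding bias: a common cause Z drives both X and Y, so a naive regression of Y on X shows an effect even though none exists, while adjusting for Z - blocking the back-door path in DAG terms - recovers the true null effect. The variable names and coefficients are hypothetical.

```python
# Toy simulation of confounding bias: Z causes both X and Y; X has no
# effect on Y, yet the unadjusted estimate suggests it does.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
z = rng.normal(size=n)                 # confounder, e.g. prior ability
x = 0.8 * z + rng.normal(size=n)       # "tool use", driven by Z only
y = 1.2 * z + rng.normal(size=n)       # "grade", driven by Z only

# Naive estimate: regress Y on X alone (back-door path X <- Z -> Y stays open).
naive = np.linalg.lstsq(np.column_stack([x, np.ones(n)]), y, rcond=None)[0][0]

# Adjusted estimate: include Z as a covariate, which blocks the back-door path.
adjusted = np.linalg.lstsq(np.column_stack([x, z, np.ones(n)]), y, rcond=None)[0][0]

print(f"naive effect of X on Y:    {naive:.2f}")    # roughly 0.6, spurious
print(f"adjusted effect of X on Y: {adjusted:.2f}") # roughly 0.0, the true effect
```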

Web: [Direct Link] [This Post]


This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2022 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.