
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

Prompting engineering or AI literacy? How to develop a critical awareness of Generative AI in education
Mari Cruz García Vallejo, #ALTC Blog, 2024/02/27


This post covers similar ground to Maha Bali's post and should probably be read back-to-back with it. As part of AI literacy it describes 'chain-of-thought prompting', which I have found useful for writing Javascript functions, developing them iteratively with ChatGPT 4 (I should demo this in a video one day). It also extends the definition of 'AI literacy' much as Bali extends the definition of 'critical', by including regulatory frameworks, social justice, and copyright considerations. Finally, it offers a six-step approach to helping educators develop AI literacy. See also Posts & Resources on Critical AI Literacy.
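
To give a rough sense of what I mean, here is a hypothetical sketch, not a transcript of an actual session - the 'debounce' example, the prompts, and the TypeScript function are all invented for illustration. The idea is to ask the model to reason through the behaviour step by step before it writes anything, then refine the result over a few turns:

// Hypothetical chain-of-thought style session, condensed into comments.
// Turn 1: "Explain, step by step, how a debounce wrapper should behave,
//          then write it." -> first draft, a bare setTimeout wrapper.
// Turn 2: "Reason through what happens to the arguments and their types,
//          and preserve them." -> generics added.
// Turn 3: "Add a cancel() method so a pending call can be discarded."
// The function that comes out of that back-and-forth might look like this:

function debounce<T extends (...args: any[]) => void>(fn: T, waitMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;

  const debounced = (...args: Parameters<T>) => {
    if (timer !== undefined) clearTimeout(timer); // restart the wait on every call
    timer = setTimeout(() => {
      timer = undefined;
      fn(...args);
    }, waitMs);
  };

  const cancel = () => { // discard any pending call
    if (timer !== undefined) clearTimeout(timer);
    timer = undefined;
  };

  return Object.assign(debounced, { cancel });
}

// Usage: const onResize = debounce(() => console.log('resized'), 250);
//        window.addEventListener('resize', onResize);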

Web: [Direct Link] [This Post]


Towards Software Commons
Chad Whitacre, Open Path, 2024/02/27


The context here is a company that announced its product was open source under something called the Business Source License (BUSL). It had one condition: "The restrictive nature of it is fairly lightweight within the context of Sentry and Codecov: you simply cannot commercialize it as a competing service." That, though, was enough to get it flagged as Not Open Source. Those with memories as long as mine will know that this was the real purpose of open source back in the beginning: to protect coders from companies who would take their code, change a few words, and relaunch it as a commercial product (All Rights Reserved). But in the hands of places like Berkeley and MIT, Open Source came to mean no restrictions whatsoever - including commercial use. This column is a proposal for an alternative model, 'Software Commons': "all computer software which is available at little or no cost and which can be reused with few restrictions."

Web: [Direct Link] [This Post]


Intelligence requires understanding & meaning
Tim Klapdor, Heart Soul Machine, 2024/02/27


Tim Klapdor's main point is both the challenge and the issue: "intelligence requires understanding & meaning. Therefore, if you want to call something intelligent, then it must be able to exhibit understanding and meaning." OK, sounds great, until you push a bit on what exactly counts as 'understanding' and 'meaning'. What is 'understanding'? Knowledge of causal principles? Not robust enough. General laws and principles? Too inflexible. A model or world view? Sure, now we're getting closer. But that's what AIs do! When kids ask 'why', what sort of answer do you give them? Causes, principles, theory. Right? So what is there to 'understanding' that AIs don't do? The same sort of questions arise around 'meaning'. Do we mean 'intentionality'? 'Intensionality'? 'Emotions or tone'? Aw, we already know AI can respond to these. It's too easy to simply say "understanding & meaning." Image: University of Tennessee, Multiple Intelligences Theory (not a big leap from where we are in this post).

Web: [Direct Link] [This Post]


TB872: Wenger-Trayner and communities of practice
Doug Belshaw, Open Thinkering, 2024/02/27


This is another post from Doug Belshaw as he takes a 'systems thinking' course, this one covering "a basic overview of communities of practice, fractal growth, and 'world design'." Obviously I would have a lot to say about this, but I'll just make a couple of comments. First, with respect to the diagram of 'legitimate peripheral participation', we see people moving toward the core of the community, which seems right to me, but we should note that the core bounces around and sometimes splits; it is in essence a 'strange attractor' for the community. Second, Belshaw comments (I think citing Wenger-Trayner), "One thing that CoPs struggle with, especially in the civic arena, is stewardship. Coalitions 'do not take sustained responsibility for stewarding a civic domain...'". I don't think people have adequately reconciled network (or systems) theory with the idea of leadership. The concept of one person as 'steward' and the idea of self-governing communities are at odds with each other. We need to part ways with one or the other - and I'm not willing to give up on self-governing communities.

Web: [Direct Link] [This Post]


A virtual tour of three Advanced Learning Spaces
Zac Woolfitt, Video Teaching, 2024/02/27


The first of these is a VR lab from KU Leuven where "students can walk through basic safety instructions and answer multiple choice questions that pop up on the screen"; the second is a Highly Adaptable Hybrid Study Room at the University of Amsterdam, which is basically a physical space with a lot of plugs; and the third is a "green screen on steroids" at the University of Michigan, specifically: "The screens in the studio are composed of 187 LED panels including on the floor which allows for the presenter to be 'immersed' in the virtual space." My overall reaction is 'meh'. No doubt they represent a lot of work and innovation, but I don't really see any of them as advancing our thinking on the use of digital technology in education.

Web: [Direct Link] [This Post]


We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you.

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2024 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.