As someone who has previously taught philosophy, I found this post interesting. It describes a professor's practice of challenging students to raise an argument they thought irrefutable, with the professor playing "devil's advocate". Unsurprisingly, some topics were off limits, as the professor did not want to be put in the position of advocating seriously bad positions. But how are these 'no-go' areas decided? "What I consider acceptable to discuss or off limits reflects my own values and blind spots, and the privilege of being the professor means that I get to decide what questions are fair game in a context where reasonable people may very well disagree with my decisions."
Brynn Welch, Blog of the APA, 2024/10/16
Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada specializing in new instructional media and personal learning technology. His degrees are in Philosophy, specializing in epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grand Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken at conferences around the world.
Stephen Downes, stephen@downes.ca, Casselman, Canada
Heidi Yarger seeks to "spend some time unpacking the concept of a literacy framework and what might be missing." The argument here is that information literacy ought to be about more than just fact checking. "Rather than considering whether these stories are true or false, we will closely read these personal accounts to consider what we might learn about values, identity, and power when we challenge ourselves to look at these resources with compassionate analysis." To this point I'm fine, though I personally draw the line when the expectation is that I treat such stories as true.
Heidi Yarger, ACRLog, 2024/10/16

This is a summary of OpenAI's recent Education Forum. Marc Watkins highlights "Leah Belsky acknowledging what many of us in education had known for nearly two years—the majority of the active weekly users of ChatGPT are students." The article leans into the question, "what is OpenAI doing to support education," where education is what colleges and universities like Harvard (which purchased access for its students) provide. This approach feels artificial to me, though George Veletsianos says ed tech innovations "haven't had the systemwide transformational impacts that their proponents promised... that's how the edtech industry operates regardless of evidence and history." Of course, ed tech has transformed education; you'd have to be blind to miss that. AI will as well. Maybe just don't listen to the high profile proponents (or critics) without deep backgrounds in the field. What are the people who are building (not selling) the stuff saying? What are the discussion lists and informal conversations saying (as opposed to the people in the glossy tech media and education magazines)?
Marc Watkins, Rhetorica, 2024/10/15

According to this post, "Google made the headlines today, signing a groundbreaking deal to power its data centres with six or seven mini-nuclear reactors, known as Small Modular Reactors (SMRs)." According to the SMR website, "small modular reactors (SMRs) are advanced nuclear reactors that have a power capacity of up to 300 MW(e) per unit, which is about one-third of the generating capacity of traditional nuclear power reactors." Google plans to have the first of these running by 2030 (which seems like a long time to me, but I'm not a nuclear expert). I personally think this is a good idea, for a variety of reasons (I could easily imagine major universities building their own SMRs to meet their power needs).
Donald Clark, Donald Clark Plan B, 2024/10/15

The core of this post is the slide presentation in the middle (easily viewed in your browser), though it ends rather abruptly, leaving me hanging. Maha Bali's focus is on literacy, but there's a good deal of philosophy and ethics in her presentation. Indeed, at this point I'd say we're beyond talking about literacy (hence the motivation for my own approach to critical literacies), but I do agree with her that an ethical approach to AI is going to involve learning about more than just AI (which is in general the weakness of a lot of 'AI ethics' approaches), and ultimately we need to be looking at an ethics of care as well.
Maha Bali, Reflecting Allowed, 2024/10/15

This item is from this week's AI Tidbits newsletter. Retrieval Augmented Generation (RAG) is the latest trend in generative AI (I had someone drop it as an acronym on a panel recently and took the time to spell it out for people). Basically it's the idea of seeding your AI engine with documents to produce context-specific results. There are various ways to do this, and that's what this website is about. I haven't tried any of the code, but the material is in itself a nicely structured and clear presentation of the different approaches. If you worked through all the code it could be quite an effective crash course in RAG. Here's the Diamant AI newsletter.
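The core RAG idea described here — retrieve the documents most relevant to a query, then prepend them to the prompt so the model answers in context — can be sketched in a few lines. This is a generic illustration, not code from Diamant's repository: the bag-of-words retriever is a toy stand-in for the learned embeddings real systems use, and the prompt builder stops short of the actual language-model call.

```python
# Minimal RAG sketch: retrieve relevant documents, then build an
# augmented prompt. The generation step (sending the prompt to an
# LLM) is deliberately left out.
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; real systems use learned vectors."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    """Rank documents by similarity to the query; return the top k."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    """Augment the user's question with retrieved context."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical course documents, for illustration only.
docs = [
    "The course final exam is on December 12 in Room 204.",
    "Office hours are Tuesdays from 2pm to 4pm.",
]
prompt = build_prompt("When is the final exam?", docs)
```

Swapping in a vector database for `retrieve` and an LLM API call after `build_prompt` turns this skeleton into the pattern the repository's examples elaborate on.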
Nir Diamant, GitHub, 2024/10/15
Last Updated: Oct 15, 2024 2:37 p.m.