How to Succeed in MrBeast Production
Simon Willison,
2024/09/16
MrBeast is legitimately one of the world's largest YouTube stars. Simon Willison calls this article (36 page PDF) "legitimately fascinating" and I agree. You can read Willison's post to get the gist, though I recommend reading the entire thing (no, I won't give you $1000). There's a lot of emphasis on creativity, communication and continuous learning, points that are strengths (or critical weaknesses) in any enterprise. There's also some good understanding of YouTube analytics and what makes videos on the channel successful. I think academics doing YouTube presentations could learn a lot from this (even if they don't intend to create the world's largest explosion). I don't watch MrBeast much, but maybe I should.
Web: [Direct Link] [This Post][Share]
Does a new theory of dopamine replace the classic model?
The Transmitter: Neuroscience News and Perspectives,
2024/09/13
Though the field of education has its own versions of 'learning theory', neuroscientists are working their way toward an understanding of how learning actually happens. This article describes a new approach, called adjusted net contingency for causal relations (ANCCR), that better represents the relation between dopamine spikes and prior experiences than one based simply on prediction errors, called temporal difference (TD) learning. For a complex system such as a neural network, no simple theory will be completely accurate, though the formulation of this alternative may lead to new directions in research in both human learning and AI. Here's the study (11 page PDF) this article is based on. See also this recent article.
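For context (this is my own illustration, not from the article or the study): the classic model ties dopamine spikes to the reward prediction error computed by temporal difference learning. Here's a minimal sketch in Python of a TD(0) value update, just to show what a 'prediction error' theory actually computes; the toy states, numbers and variable names are mine.

```python
# Minimal TD(0) sketch: the prediction error (delta) is the quantity the
# classic model associates with dopamine spikes. Toy values for illustration only.

gamma = 0.9    # discount factor for future reward
alpha = 0.1    # learning rate
V = {"cue": 0.0, "reward_state": 0.0}   # value estimates for two states

def td_update(state, next_state, reward):
    """One TD(0) step: surprise = reward + discounted future value - current expectation."""
    delta = reward + gamma * V[next_state] - V[state]   # reward prediction error
    V[state] += alpha * delta                            # nudge the estimate toward the outcome
    return delta

# A cue followed by an unexpected reward produces a large positive error;
# as learning proceeds, the error (the 'dopamine spike') shrinks.
for trial in range(5):
    error = td_update("cue", "reward_state", reward=1.0)
    print(f"trial {trial}: prediction error = {error:.3f}")
```

ANCCR, as the article describes it, is an alternative to this kind of prediction-error bookkeeping, working instead from retrospective associations between outcomes and prior experiences.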
Web: [Direct Link] [This Post][Share]
Has ChatGPTo1 just stolen 'Critical Thinking' from humans?
Donald Clark,
Donald Clark Plan B,
2024/09/13
Donald Clark reports that "Critical thinking was one of the famous 21st century skills, that everyone thought AI could never solve. It just has." Seeing AI successfully solve math and reasoning problems is not surprising; once you've seen what it can do with computer code, similar progress would be expected in these other areas. But I'm going to withhold judgement for now because critical thinking is a bit broader than the formal methods involved in these disciplines. Judgements need to be made about meaning and intent. Now I'm not saying AI will never become an effective critical reasoner - in fact, I'm pretty sure it will, which is bad news for all the flim flam artists and propagandists out there. It will take time. But as Clark says, "Limitations being eliminated week by week."
Web: [Direct Link] [This Post][Share]
Strengths, Weaknesses, Opportunities, and Threats: A Comprehensive SWOT Analysis of AI and Human Expertise in Peer Review
Roohi Ghosh,
The Scholarly Kitchen,
2024/09/13
Roohi Ghosh provides us with a nice SWOT analysis presenting AI and human peer review side by side. It's a useful presentation because it doesn't simply idealize human reviewers and demonize AI, but rather, provides something like the same standard of assessment to each. The results are presented in an easy-to-use table (pictured).
Web: [Direct Link] [This Post][Share]
Most students are using AI for academics
Laura Ascione,
eCampus News,
2024/09/13
This post refers to a report from the Digital Education Council (spamware, as they want you to provide personal information before you can see it). According to the report, 86 percent of higher-ed students are already using AI, with 54 percent reporting using it weekly. Not surprisingly, "80 percent of students say their university's efforts to integrate AI tools have not met expectations, underscoring the need for institutions to understand how students prioritize AI tools and to integrate them in ways that align with students' expectations." I think it's safe to say the AI ship has sailed, which means educators will increasingly need to shift from 'resist' to 'adapt'.
Web: [Direct Link] [This Post][Share]
Ed Tech Startup Behind L.A. Schools’ Failed $6M AI Chatbot Files for Bankruptcy
Mark Keierleber,
The 74,
2024/09/13
This is an example of a failed AI company that promised to deliver to a school division. Pretty much everything that could go wrong actually went wrong, as there were questions about invoices and subcontracting, about salaries and expenses, about potential misuse of student data, and of course about the investors who pulled the plug before the company could collect on its big contract. This is less of an AI story, I think, than it is a business story, but it's not hard to predict that we'll see a lot more of this in the near future.
Web: [Direct Link] [This Post][Share]
Things I was wrong about pt 4
Martin Weller,
The Ed Techie,
2024/09/13
The latest thing Martin Weller is admitting he was wrong about is artificial intelligence. But for good reasons: "I was," he writes, "largely dismissive of it, partly because I was grounded in symbolic AI (expert systems and the like), and had not really monitored the rise of large language models and generative AI." But of course, "in 2024, we can't deny it's a thing." Still, he writes, "I think the impact of AI is wildly exaggerated and we're probably heading for a bubble burst for all those companies investing heavily in it."
Web: [Direct Link] [This Post][Share]
The Friday Fifteen: September 13
HESA,
2024/09/13
HESA reveals a new format today, the Friday Fifteen. I like the idea of a weekly listing of small posts linking to news items. The sources in this first edition are strictly mainstream news: HEQCO, OECD, Pie News, etc., which suggests to me they're maybe using a clipping service or even something like Perplexity. Some items of note: a story about a 'private' university in North Korea, Russian tuition fees, and that old paean about universities (in the US) costing less than you think, because of discounts, via The Hill.
Web: [Direct Link] [This Post][Share]
The growing estrangement between universities and society
Paul Wells,
University Affairs,
2024/09/13
I have virtually no opinions in common with Paul Wells, but I have addressed the need for universities to become more relevant to communities, and this post seems to address that. It reads, though, more like a thinly veiled warning that universities are going to have to align politically. For example, "The easiest way to 'force' universities 'to do better' is to 'review the eligibility requirements for the receipt of federal research funds to ensure strong university governance,'" which would more directly address, say, protest encampments. To me, this sort of argument underlines once again the need for universities to become essential in the lives of the people in the community, because only the community can support the university when its funding is under pressure.
Web: [Direct Link] [This Post][Share]
Doors (the game)
Nele Van de Mosselaer,
2024/09/12
I found this game more annoying than fun, and I have no idea why they would need to use the Unreal engine to design it. It reminded me of the old-style Flash games. Anyhow, as Nele Van de Mosselaer says, "Take, for example, a door in a videogame. There is nothing fictional about such a door: it is a simulated door that actually exists on your computer. You can see it, as it is there on your screen. You can open it, close it, maybe even lock and unlock it. You don't need to imagine anything, you just interact with it." Which does raise some interesting questions about representation. So, enjoy?
Web: [Direct Link] [This Post][Share]
The AI-Copyright Trap
Carys J. Craig,
Osgoode Hall Law School, SSRN,
2024/09/11
Carys Craig (29 page PDF) argues (and I agree) that "Copyright law should neither incentivize and reward the use of generative AI nor obstruct its training and development." However, "it seems clear that copyright law (or a contorted version thereof) is increasingly being invoked as a regulatory response to the harms of AI." This paper is an extended treatment of the argument. In particular, "The so-called '3Cs' of 'Consent, Credit and Compensation' are getting a lot of air time these days." Craig argues that "the pursuit of the 3Cs is intended to push back at power, employing the blunt tool of copyright control, but reaching beyond what copyright actually requires by narrowing the scope of what fair use permits (in the name of greater fairness)." We have to remember that these limits on copyright are "limits that have traditionally restrained corporate power to protect the public interest... copyright is entering the fray as a false friend."
Web: [Direct Link] [This Post][Share]
Generative AI Can Harm Learning
Hamsa Bastani, et al.,
The Wharton School,
2024/09/11
So the way this study (59 page PDF) worked is that a teacher taught the students, then they had some AI-assisted practice, then they took a test on the same material with no AI support. Students who used the AI during practice performed worse on the test than those who didn't. I found this a pretty narrow study, and I'm not really sure about the AI the researchers used (it was a ChatGPT 4 'base' model, which they report frequently made math errors). I wouldn't think simply using the AI for a 'practice session' would be particularly interesting or engaging, but maybe that's just my perception.
Web: [Direct Link] [This Post][Share]
Students Are Using AI Already. Here’s What They Think Adults Should Know
Ryan Nagelhout,
Harvard Graduate School of Education,
2024/09/11
This article summarizes a report, Teen and Young Adult Perspectives on Generative AI, and states the main points as follows (quoted):
None of these should be surprising, except perhaps that so few students report using AI daily.
Web: [Direct Link] [This Post][Share]
Professor tailored AI tutor to physics course. Engagement doubled
Anne Manning,
Harvard Gazette,
2024/09/11
Publication of the study is still pending, but according to this article the use of an AI tutor in a physics course greatly increased engagement and with it the amount of learning. "'We went into the study extremely curious about whether our AI tutor could be as effective as in-person instructors,' Kestin, who also serves as associate director of science education, said. 'And I certainly didn't expect students to find the AI-powered lesson more engaging.' But that's exactly what happened."
Web: [Direct Link] [This Post][Share]
Study Buddy or Influencer
Lisa Chesters, et al.,
Parliament of Australia,
2024/09/10
This report (150 page PDF) from an Australian Parliamentary committee argues essentially for an embrace of AI technologies in schools, coupled with measures to limit risks and avoid harms. "The best way to implement GenAI education tools into the school system, like study buddies, is by integrating them into the national curriculum, creating and implementing guidelines and policies... foundation models, especially large language models (LLM), should be trained on data that is based on the national curriculum." Via Rhiannon Bowman.
Web: [Direct Link] [This Post][Share]
At the end of the corridor
Alastair Creelman,
The corridor of uncertainty,
2024/09/09
Alastair Creelman, whom I have cited numerous times in these pages over the years, has decided to call it a career and discontinue the blog. "Having been retired from academic life for two years I don't feel I have much more to contribute to the discussion of technology in education." He writes that he has lost his enthusiasm and that his posts have become darker over the last five years. Never say never, but in case we don't see him in these pages again, I just want to say thanks on behalf of the OLDaily community.
Web: [Direct Link] [This Post][Share]
Mohism
Chris Fraser,
Stanford Encyclopedia of Philosophy,
2024/09/11
A near contemporary of Confucius and one of the most famous of the Chinese philosophers we in the West seldom hear about, Mozi was the founder of a philosophical school of ethics and political philosophy based on order and good government. He writes, "Those in the world who perform tasks cannot do without models (fa) and standards. There is no one who can accomplish their task without models and standards." However, "of these three, parents, teachers, and rulers, none is acceptable as a model for order." He argues for a roughly consequentialist approach, a "normative theory based on equal, impartial concern for the welfare of all," and as Chris Fraser reports in this newly revised encyclopedia entry, "moral education is regarded as akin to teaching a practical skill, such as how to speak a language. It is accomplished primarily by emulating the judgments and conduct of moral exemplars." See also Mohist Canons and School of Names by the same author, and also The Ethics of Mozi: Social Organization and Impartial Care, from 1000-word Philosophy.
Web: [Direct Link] [This Post][Share]
Exploratory study: New forms of tertiary education
Stifterverband, Heinz Nixdorf Foundation,
2024/09/09
This report (56 page PDF) on future forms of higher education is worth reading but still strikes me as a plan for building a bridge halfway across the river. It begins by identifying four 'pain points' in the German system: insufficient access and integration of underrepresented student groups; lack of dynamism in adapting teaching and learning content to new skills requirements; lack of innovation in the design of learning experiences; and insufficient structural and institutional agility (readers will be forgiven for thinking that predetermined solutions are built into these definitions). It then maps these against seven case studies including Arizona State University (ASU), Erasmus University Rotterdam (EUR), and Heilbronn, which produces a set of 'innovations' like multiple professorships, agile competence frameworks, microcredentials, and the like - all essentially trappings of the existing system, with no way to actually get to the other side of the river. Via Gilly Salmon.
Web: [Direct Link] [This Post][Share]
Rethink: On the fraying of our shared reality and how to protect it
Rachel Botsman,
Rethink with Rachel,
2024/09/10
"Our lives have moved towards an image-driven existence where genuine human experience, what actually happened, is obscured by a saturation of false representations and spectacles," notes Rachel Botsman. "We must be able to align around the concrete facts of what happened on the day. There also needs to be a distinction between what has already happened, what will happen, and what might happen." I wish it were as simple as that, as though there were this third strand of 'objective reality' that could ground our agreement in fact. But 'objective reality' crucially depends on social agreement about what counts as evidence, what words we use to describe evidence, and what importance evidence has (or perhaps I sould say, more accurately, society is the result of thise sort of social agreement). From where I sit, we have always lived in different societies, but we haven't even been able to see these other societies; the supposed 'fraying' is these other societies coming into view for the first time, and challenging our view of reality. That's a good thing.
Web: [Direct Link] [This Post][Share]
Eating the Future
Alex Usher,
HESA,
2024/09/10
I agree with Alex Usher here but I wonder about the framing. He writes, "we never spend time talking about expenditure now vs. investing in the future... all our policy discussions are about how to avoid investing in the future, and spend money now, in the name of 'affordability.'" Now at the risk of being called "crackers" I would depict quite a bit of the discussion as being about the future. It's pretty much the main focus of this here newsletter. When we talk about making education accessible to all, that is a discussion about the future. But my main question here is about agency. Who is framing all these debates in terms of whether the government can afford it? And why are we letting them get away with it?
Web: [Direct Link] [This Post][Share]
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2024 Stephen Downes Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.