
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

Technology's Child: Making the Complex More Concrete for Research on Kids and Tech
Katie Davis, Connected Learning Alliance, 2023/03/20



Katie Davis asks, "When does technology support child development, and when does it not?" I'm going to say first that I think this is a pretty good answer: "I argue that digital experiences that are self-directed and community supported are best for children's healthy development," writes Davis. Now - is this the complex made concrete? Not exactly - though it does give people a tool to create their own concrete response to the question (and that's what you need, because no single concrete answer will ever resolve a complex question). You can read Davis's brand new blog here (I opted not to subscribe to the newsletter, which appears to function more as advertising for her book). Personally, I think that something like a blog should be used to help develop ideas, rather than market them afterward - because it's experiences and community support that create knowledge, not 672 footnotes covering 86 pages (in the field we just call that 'academic cover').

Web: [Direct Link] [This Post]


AI makes plagiarism harder to detect, argue academics – in paper written by chatbot
Anna Fazackerley, The Guardian, 2023/03/20



No doubt this one is going to be on slide presentations for years to come. "An academic paper entitled 'Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT' was published this month in an education journal, describing how artificial intelligence (AI) tools 'raise a number of challenges and concerns, particularly in relation to academic honesty and plagiarism'. What readers – and indeed the peer reviewers who cleared it for publication – did not know was that the paper itself had been written by the controversial AI chatbot ChatGPT." The Guardian article only links to a ResearchGate copy but it can still be found in Innovations in Education and Teaching International. Though the journal itself was "tipped off", "the four academics who peer-reviewed it assumed it was written by these three scholars." And why wouldn't they? From where I sit, this is an experiment with human subjects (the four reviewers) conducted without prior consent, which seems pretty unethical to me. I feel for the reviewers.

Web: [Direct Link] [This Post]


Critical AI: A Field in Formation
Rita Raley, Jennifer Rhee, American Literature, 2023/03/20



Though it will no doubt be credited to some ivy-covered institution, the concept of AI literacy (or as it's called here, 'critical AI') has been bubbling up through the discourse for the last few years as an offshoot of digital literacy. This article takes as its point of departure Anna Ridler's 2018 installation Myriad (Tulips), which it calls "one example of an art practice that self-reflexively uses the tools and techniques of ML, also perfectly encapsulates, indexes, and indeed embodies a critical perspective on AI, one that both informs and is shaped by academic research on the same." The same could probably have been said about many of the exhibits at the Ars Electronica festival I attended in Linz in 2009. At any rate, we're already at the point where we can find an AI literacy literature review, though just from last year; "the ability to understand, use, monitor, and critically reflect on AI applications without necessarily being able to develop AI models themselves is commonly referred to as being 'AI literate'". You can see more at the McGill guide to AI Literacy. Or the World Economic Forum page on it. Or the Birmingham City report on AI literacy in primary education. Or this training course for young people from the Council of Europe.

Web: [Direct Link] [This Post]


Humanery and/or Machinery
CogDogBlog, 2023/03/20



Do read the whole article, the contents of which I can only hint at here. Alan Levine argues, "Art is the voice of a person and whenever AI art is anything more than aesthetically pleasing it's not because of what the AI did it's because of what a person did." So, like Alan, I am a photographer, and like a million other people, I took a photo of the Taj Mahal (which I consider the most beautiful building in the world). I could have simply purchased a photo, but mine is based on my experience of being there. This is an important point, because while it's true that 'all art is a remix', what Levine reminds us here is that the associations humans make are different from the ones AIs make. That's because human associations, and therefore, human remixes, are based on individual experiences. Even if our algorithms are the same as the AIs' (and there are arguably similarities) our data is very different. And this, too, is what makes something aesthetically pleasing to us - not because of how it was created, or even because of who created it, but because of how it speaks to our experiences.

Now (at the risk of making this post too long) let me take this a step further. The greatest danger of AI is not that it will replace human authors, or anything like that, but rather, it is that it will reshape human experiences. This happens in one of at least two ways: either it reshapes them in its own image, reducing human experience to the bland and the generic (think 1960s bowdlerized television); or it reshapes them at the hands of some unethical AI manipulator (think recommendation algorithms that take us deeper down the extremist rabbit hole). The human, indeed, ethical, response to AI is to experience the world as fully and completely as possible, and to offer back to AI and other humans the remixes that are based on that experience in all their unpredictable and chaotic glory.

Web: [Direct Link] [This Post]


We're in a productivity crisis, according to 52 years of data. Things could get really bad.
Michael Simmons, Medium, 2023/03/20



The thesis here is that while there was an incredible 50x increase in the productivity of the average manual worker from 1870 to 1970, this productivity gain has leveled off in the 50 years since then, with dire consequences for our future economic prospects. While I find this article very conservative in its approach (and it reads a lot like much of what we see in the business press), its strength is that it at least tries to consider objections to that account - for example, the great decoupling that took off starting in the Reagan era (he puts it at 1972, which is inaccurate), or the argument against productivity (reminiscent of Kalle Lasn's "economic progress is killing the planet" argument), or the productivity backlash based on opposition to the rise of billionaires.

I think he misses one major consideration - that much of the productivity gain was illusory, created by over-exploitation of resources and the offloading of environmental costs. We have massive non-manual worker sectors (specifically: cultural, health, education, information, and service) that didn't really exist in 1870. And, of course, we can't measure productivity by GDP. "What we really want is a new kind of productivity. We want a kind of productivity that is actually more productive, more inclusive, leaves us time for an uninterrupted personal life, and ultimately feels better — more purpose, more fulfillment, more aliveness, and less hurry." Pro tip: stop reading at 'What You Can Do Now', because what follows is a sales pitch for some online program.

Web: [Direct Link] [This Post]


Why do authors persist in submitting trial reports that do not meet the journal eligibility criteria or AllTrials standards?
Jane Noyes, Journal of Advanced Nursing, 2023/03/20



The AllTrials initiative in medical research is sponsored by such organizations as the Cochrane Collaboration and PLoS and is intended to ensure that clinical trials reported in academic journals meet a certain standard of evidence: that the trial is designed before, not after, the research is conducted; that conflicts of interest are known; that the research is conducted under proper scientific and ethical guidelines; that the research is fully reported; and that submissions reflect all trials, not just those that were successful. As this editorial reports, researchers are not meeting this standard. I'm not sure whether there's a similar initiative in education with the Campbell Collaboration and research journals, but I can say that the research is similarly sub-standard. Now I have been (and still am) a critic of the narrowly defined range of what counts as 'research' in education, but I would agree that if you're going to present quantitative research 'evidence' for this or that intervention, you should do it properly.

Web: [Direct Link] [This Post]


We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you.

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2023 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.