
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

The End of AltspaceVR - What it Means for the Metaverse
Emory Craig, Digital Bodies, 2023/01/24



In an almost forgotten corner of technology comes the news that "Microsoft announced it is shutting down AltspaceVR... one of the first social VR sites that let users meet and interact in a virtual environment." The usual lessons about single-owner platforms apply: "Users – especially those who built extensive environments in AltspaceVR – will face the arduous task of trying to replicate their work on another platform." The real challenge in this (or any) space is interoperability across platforms, and here the author expresses fears that this is a step backwards: "They also laid off the entire team behind MRTK, their open-source project to accelerate cross-platform MR development." Instead, Microsoft is focusing on something they call Mesh, "a new platform for connection and collaboration," aimed at workplace experiences.

Web: [Direct Link] [This Post]


Let's get off the fear carousel!
Alexandra Mihai, The Educationalist, 2023/01/24



"Academia's response to ChatGPT is more about academic culture than about the tool itself," says Alexandra Mihai. It's exposing the flaws in the current system: poor working conditions, non-transparent processes, lack of trust in students, the lack of a pedagogical plan, stale quality assurance, inertia, and (ironically) an attitude of technological determinism. Quite right. So much of what I'm reading treats students and educators as pawns with no real agency of their own, nothing more than unwilling subjects being flung about by the winds of technology and change. "If there is one good thing coming from the ChatGPT debate," says Mihai, "(it) is becoming aware of the need to constantly reassess what is uniquely human."

Web: [Direct Link] [This Post]


Chad GPT on Research-y Volcanology
Alan Levine, CogDogBlog, 2023/01/24



As has been described at length elsewhere, modern large language models (LLMs) do not reason from values and principles, but rather are statistical engines that simply predict what words or phrases should come next. So - the argument goes - they don't 'know' or 'understand' in the way that we do. Alan Levine, with characteristic sharpness, cuts straight to what the difference means: "Here is a question: Would you prefer to do the hard work to love and be loved or to just get it easily to have something just looks like love?" Of course, we would all prefer real love. But here are the counter-questions. What if we can't tell the difference? Or, even more, what if the exact same processes create what we call 'real love' and what 'just looks like love'? We like to think there's something special about the way we reason, find truth, care, seek value, and love. But what... if there isn't?

Web: [Direct Link] [This Post]


AI, ChatGPT, instructional design, and prompt crafting
George Veletsianos, 2023/01/24



In the context of David Wiley's post on AI and instructional design, George Veletsianos focuses on the question, "What new knowledge, capacities, and skills do instructional designers need in their role as editors and users of LLMs?" Using the existing state of ChatGPT as a guide, he suggests that "a certain level of specificity and nuance is necessary to guide the model towards particular values and ideals, and users should not assume that their values are aligned with the first response they might receive." At a certain point, I think we might find ourselves uncomfortable with the idea that an individual designer's values can outweigh the combined insights of the thousands or millions of voices that feed into an AI. True, today's AIs are not very good examples of dedication to truth, justice or equity. But that, I'm sure, is a very temporary state of affairs.

Web: [Direct Link] [This Post]


AI, Instructional Design, and OER
David Wiley, improving learning, 2023/01/24



David Wiley considers the impact of tools like ChatGPT on instructional design, where this is "the process of leveraging what we understand about how people learn to create experiences that maximize the likelihood that the people who participate in those experiences will learn." But what is not instructional design, he says, is "the creation of accurate descriptions and explanations of facts, theories, and models." He appeals to the well-worn distinction between 'informational resources' and 'educational resources' (by contrast, I have long argued that what makes something a learning resource is how you use it, but I digress). To him, it's not an educational resource unless you add (at a minimum) practice and feedback. Not surprisingly, while he agrees that AIs will make it a lot easier to create informational resources, some sort of special instructional designer skill will still be required; specifically, "instructional design expertise will be reflected in the output of these systems in proportion to the degree that instructional design expertise is embedded in the prompts fed into the systems." Given that nothing else in AI is "proportional to the input," I don't see why instructional design should be the exception. I think we'll find that, to the AI, the distinction between information and education is meaningless; it's all just content.

Web: [Direct Link] [This Post]


PhD training is no longer fit for purpose — it needs reform now
Nature, 2023/01/24



This editorial from Nature has been widely circulated, and for good reason. PhD training (so-called) has been in need of reform for decades now. When I was studying for a PhD in the 1980s the issues cited here were already apparent, and as graduate student president for two years I did everything I could to bring to light the inadequacy of funding, supervision, and working conditions. And after all that I never did graduate with a doctorate. Given that, decades later, I have enjoyed a successful academic career as a researcher, this seems to me to be more the institution's failing than my own. But changing direction will be, as the article says, like changing the direction of the Titanic. We're far more likely to see responses like this from Andrew Akbashev affirming support for exactly those parts of the PhD experience that don't work.

Web: [Direct Link] [This Post]


We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you.

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2023 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.