[Home] [Top] [Archives] [About] [Options]

OLDaily

Degreed Returns To Its Roots: Acquires LearnIn, Founder Returns As CEO
Josh Bersin, 2022/06/27


Icon

Long story short: Degreed was a pioneer in the field of learning experience platforms, according to this article, but it was outflanked by the competition, and so now, with its acquisition of Learn In, it is pioneering a new type of system, a 'capability academy'. Bersin, who developed the concept, represents it as a step forward, but I find it hard to see it as anything other than retrenchment: "they do projects, they may take developmental assignments, and they may even be assessed by their peers. What Learn In is trying to do is build a platform to connect all these moving parts." Are 'capabilities' the new skills? Is a 'capability academy' really a thing? A few have jumped on the concept - NovoEd and Salience, for example - but Bersin, who is normally very good, appears more to be promoting the concept than reporting on it.

Web: [Direct Link] [This Post]


Octopus
Octopus, 2022/06/27


Scheduled to be launched on Wednesday, Octopus is a research publication platform that is advertised as "fast, free and fair". "Octopus is designed to replace journals as the primary research record, allowing journals to concentrate on editorialising primary research and disseminating it in a suitable form to their specific readerships." I've tried it out a bit and, even more to the point, Octopus is very structured, breaking down research publications into eight categories: problems, hypotheses, methods, results and data, analysis, interpretation, application, and (peer) review. Contributions to any of these need to be linked to others that already exist. "Publications therefore form branching chains of work, each following on from the other, and clustered under 'problems'." Contributors are linked by their ORCID identities and free to add to the chain. Here's an example of a problem I added to the dummy version being tested. I find the approach at once both constraining and liberating. Why do I have to start with problems? And I worry about things like duplication, shareability, and more (so much more). Is it centralized? Can we fork things? Is there an API? Can we syndicate? But I applaud the experiment.
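To make the structure concrete, here is a minimal sketch of the kind of linked-chain model Octopus describes: eight publication types, where everything except a 'problem' must link to an existing publication, and each contributor is identified by an ORCID. This is an illustration only, not Octopus's actual data model; the class and field names are my own assumptions.

```python
from dataclasses import dataclass
from typing import Optional, List

# The eight publication categories Octopus defines (my shorthand labels;
# "results" here stands in for Octopus's "results and data").
TYPES = ["problem", "hypothesis", "method", "results", "analysis",
         "interpretation", "application", "review"]

@dataclass
class Publication:
    title: str
    pub_type: str
    orcid: str                         # contributor's ORCID identity
    parent: Optional["Publication"] = None

    def __post_init__(self):
        if self.pub_type not in TYPES:
            raise ValueError(f"unknown publication type: {self.pub_type}")
        # Everything except a 'problem' must link to an existing publication,
        # so chains always cluster under problems.
        if self.pub_type != "problem" and self.parent is None:
            raise ValueError("must link to an existing publication")

    def chain(self) -> List[str]:
        """Walk back up the branch to the root 'problem'."""
        node, path = self, []
        while node is not None:
            path.append(node.title)
            node = node.parent
        return list(reversed(path))
```

A chain then grows by linking each new contribution to its parent, e.g. a hypothesis under a problem, a method under that hypothesis, and so on; the model makes my "why do I have to start with problems?" worry visible in code, since the root type is hard-wired.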

Web: [Direct Link] [This Post]


Reasoning in attitudes
Franz Dietrich, Antonios Staras, Philpapers.org, 2022/06/27


Icon

A proposition is an expression asserting that something is the case. "Paris is the capital of France" is an example of a proposition. An attitude is an opinion about that proposition, for example, that it is true, or believed, or desired, or offensive, etc. We can reason about attitudes, for example, by describing the conditions in which a person might 'believe that Paris is the capital of France'. But this paper is about reasoning in attitudes. For example, if I believe one thing, do I believe a related thing? If I wish for something, does it matter whether I wish for something else? Reasoning in attitudes is a lot messier than reasoning generally, and it depends on choices, not just facts. And each choice changes us a bit, creating a feedback loop. If you want to dive into the mechanics of all this, then this is the paper for you.

Web: [Direct Link] [This Post]


By Exploring Virtual Worlds, AI Learns in New Ways
Allison Whitten, Quanta, 2022/06/27


Icon

This article describes embodied AI, a field involving "AI agents that don't simply accept static images from a data set but can move around and interact with their environments in simulations of three-dimensional virtual worlds." Based on the work of Princeton scientist Fei-Fei Li, it suggests a type of AI that "could power a major shift from machines learning straightforward abilities, like recognizing images, to learning how to perform complex humanlike tasks with multiple steps, such as making an omelet." The difference here is like the difference between presenting a student with text and images to learn from and giving them a real environment where they can move about and try things. "The meaning of embodiment is not the body itself, it is the holistic need and functionality of interacting and doing things with your environment," said Li.

Web: [Direct Link] [This Post]


Monitoring Employees Makes Them More Likely to Break Rules
Chase Thiel, Julena M. Bonner, John Bush, David Welsh, Niharika Garud, Harvard Business Review, 2022/06/27


Icon

I can't assess this research directly because it's hidden in a paywalled article in the Journal of Management from last year, but the upshot is that if you monitor employees, they feel less responsibility for their own actions, and as a result, are more likely to cheat or break rules. The reason for this, suggest the authors, is that people are only partially motivated by rewards and punishments; the greater part of their motivation is based on their 'internal moral compass', though this effect exists only if people have the agency to make moral decisions on their own. There's a logic here. I do not commit murder not because I fear getting caught, but because I believe it is wrong to commit murder. The implications for education are twofold: first, it demonstrates the need to promote agency in learning environments, and second, it shows how important the development of a person's sense of agency (and hence, their own moral code) is for society as a whole.

Web: [Direct Link] [This Post]


Learning, Dialogue and AI: Offline Initiatives and Political Freedom
Mark Johnson, Improvisation Blog, 2022/06/27


Icon

There's a lot going on in this post. To begin, it explores the use of offline web browser applications built with Electron instead of a cloud-based LMS - to learn more about this approach I recommend my presentation Electron Express from a couple of years ago. This gives learners a way to gather and look at their learning data without having to share it with third party data aggregators. Author Mark Johnson also looked at applying AI analysis within the Electron environment. Interestingly, he suggests that this is a way of having conversations that cannot be monitored by non-free regimes. "Large-scale language models are basically self-contained anticipatory dialogical engines which could function in isolated circumstances... suddenly individuals can have conversations which are not monitored - simply by being in possession of a particular AI file."

Web: [This Post]


This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2022 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.