Tony Bates commented on this article, but it seems to be open access, so I may as well go directly to the source. The authors ended up assessing only four systems: Zoom, Microsoft Teams, Skype, and WhatsApp. They should have considered more systems (including WebEx, WeChat, Big Blue Button, GoToMeeting, Shindig, Google Meet, Jitsi, Discord, and more) but these were the ones the authors were familiar with. This proves to be important, because as Bates notes, "they used an analytical evaluation based on usability inspection: a set of guidelines based on an expert’s experience (the experts being the authors in this case)." I don't mind the method so much; it's at least as accurate as "empirical data such as students’ responses or achievement", but they maybe needed a broader pool of expertise to pull it off.
So here's the claim: "When it comes to talent mobility, a real-life 'talent concierge' can beat out any algorithm." Well, maybe, but how does he know? I don't see anything in previous human performance that suggests that machines couldn't perform as well or better. Matthew Daniel writes, "trying to solve the problem of building internal talent and getting such talent to the right roles has largely been relegated to software, algorithms and other forms of technology." Why is that? Because humans are doing such a good job already? He argues we should have a new position, "consider them 'upskilling coaches' or 'talent concierges' or what have you." Or as I call it, "the function formerly known as HR." I think that in learning and development any predictions of the form "only humans can do x" are short-term predictions at best.
We covered assessment at an introductory level yesterday, and this post takes us more deeply into it as Julian Stodd offers "a pragmatic view at a complex subject." He describes an 'Assessment Intent document' that asks three questions: "what are you trying to measure, what is the context of measurement, and why are you measuring it." He also describes three types of measurement: 'self reported'; 'produced' (assets of learning – group stories, co-created narratives etc – also some formal tests or assessments etc); and 'inferred' (which may include observation). I would say each of these has strengths, but each is also subject to weaknesses, including (for the three types respectively) self-delusion, misleading abstraction, and subjective interpretation.
Creative Commons has been focusing more on open education recently, which is overall a good thing. I recently participated in discussions regarding their work plan for the next year. Maybe 90 percent of the 'platform goals' are centered around advocacy. I'm not a fan; as I said at the meeting, "it's easy, and consists of telling other people what they should do" (as I tell my left-wing friends, I believe in working for change, not fighting for change). Two of the four actual activities are based on advocacy, while we may also see another round of Lightning Talks (which were very popular) and 'additional learning circles', which I think should be the top priority (and I'll participate and share whatever knowledge I have if these get organized). Image: Ursuline College.
This is a longish post but it reads pretty quickly and offers a sound overview of what is known these days as 'deep learning' - that is, unsupervised neural networks constructed with hidden layers of unlabeled neurons. The article gets into a few technical concepts like connection weights, overfitting and local minima, but it does so in a way that makes them clear. If you know nothing about deep learning but would like to know what it's about, this is probably a good place to start.
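For readers who want a concrete picture of what "connection weights" and "hidden layers" mean in practice, here is a minimal sketch of a forward pass through a single-hidden-layer network. This is my own toy illustration, not code from the article, and all weight values are made up for the example.

```python
import math

def sigmoid(x):
    # Squashes any real number into (0, 1); a common neuron nonlinearity.
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, w_output):
    # Each hidden neuron computes a weighted sum of the inputs (the
    # "connection weights") passed through a nonlinearity; the output
    # neuron does the same over the hidden activations.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_output, hidden)))

# Hypothetical weights chosen only for illustration.
w_hidden = [[0.5, -0.4], [0.3, 0.8]]  # two hidden neurons, two inputs each
w_output = [1.2, -0.7]                # one output neuron

y = forward([1.0, 0.0], w_hidden, w_output)
print(round(y, 3))
```

Training ("learning") consists of nudging those weight values to reduce prediction error; overfitting and local minima are hazards of that adjustment process, which this sketch deliberately omits.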
The focus of this short post is to advertise the launch of training and assessment on eight "most in-demand skills" that can be taught "in the context of a given course or program." The eight skills are about what you would expect: critical thinking, empathy, resilience, etc. But of course the real motivation here is to highlight the platform and associated vsbl (pronounced 'visible', because fcrse) microcredential initiative. It's an interesting strategy, and suggests a future where a full course could be composed largely of microcourses (and microcredentials) from various providers. As the site says, "Learners are in the driver’s seat, getting credit and skills from many different providers, and credit for learning wherever it happens." But it's annoying to have to register for an account before even seeing any content, and registration multiplied by 'many different providers' would quickly become a significant problem.
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2021 Stephen Downes Contact: email@example.com. This work is licensed under a Creative Commons License.