When I was studying philosophy in university I would find works like Henry Kyburg's Recent Work in Inductive Logic to be invaluable (sadly behind a paywall today). Today what I am seeing are literature reviews conducted (in the first instance) by search and filtering algorithms. These seem to me a clumsy and inaccurate way to survey research data (even if they are all the rage in the social sciences). I prefer to judge the relevance of a paper personally. But there's so. many. papers. So in a world without Henry Kyburg, something like this article summarizer might be useful. It's a technology that has been around for 20 years (I remember NRC researchers introducing me to it when I first started with them) but it's finally coming into its own.
This is a nice resource from Google with some really practical advice on how to design for human interactions with AI. For example, here's some really solid advice: "AI-powered systems can adapt over time. Prepare users for change—and help them understand how to train the system." The Guide also covers user needs, data sources, feedback and control, explainability, and how to handle AI errors, among other topics.
OK, here's my prediction: very few people are going to keep their "skills profile" up to date. How do I know this? Because they keep pretty much nothing else up to date. How's your website doing? Are the friends on your friend list still your friends? So what needs to happen instead? What we will have (eventually) will be an AI that looks at everything we've done and constructs a skills profile for us. Parts of it will be human-readable, but parts of it (the most useful parts) will be based on byzantine data-processing algorithms only an AI could come up with.
After making the point that "innovation is fifteen different things to fifteen different people," Harold Jarche offers a nice overview of the concept of innovation and how it has evolved in recent years. At the core, though, this remains true (and contrary to the perspective of managerialists everywhere): "Innovation is like democracy, it needs people to be free within the system in order to work. Empowering knowledge artisans to use their own cognitive tools creates an environment of experimentation, instead of adherence to established processes.... There is no innovation assembly line." So many managers (including my own) don't understand this.
Class blogging hasn't disappeared; it has become mainstream. This list is, I am sure, nothing like a complete list of all the class blogs around the world. But it's a nice selection of them, and there's a space for you to add your own to the list. The spreadsheet is divided by grade, subject and type.
Most of us aren't building machine learning interfaces in educational technology, but it's still useful to stay up-to-date with the terminology (a short list begins this article) and best practices for machine learning implementations. The guide is also a helpful checklist for evaluating machine learning proposals. Finally, if you are implementing machine learning, it's best to "do machine learning like the great engineer you are, not like the great machine learning expert you aren’t."
As summarized by Jonathan Kantrowitz, "The new report, California’s Positive Outliers: Districts Beating the Odds, finds that the proportion of teachers holding substandard credentials—such as emergency permits, waivers and intern credentials—is significantly and negatively associated with student achievement for all students." (44 page PDF) This stands in contrast to the other report I read today emphasizing the role of family income in education outcomes. But for this report we have to read carefully and sometimes between the lines. As the authors write, "We find that, aside from socioeconomic status, a major predictor of student achievement is the preparedness of teachers." So there isn't really a contradiction between the two reports, just a (potentially misleading) difference in emphasis.
Study Finds Wealth More Advantageous Than Smarts
Anthony P. Carnevale, Megan L. Fasules, Michael C. Quinn, and Kathryn Peltier Campbell, 2019/05/17
I think that those who are smart already knew this. But it's always healthy to have the research (59 page PDF) to support one's views. The authors do take pains to argue that education does move the needle a bit, and something is better than nothing. But there should be no illusions here; being born wealthy conveys significant advantage. I would add this: the comparisons are made in terms of socio-economic status (SES) - in other words, wealth. But it extends far beyond that. Wealthy people are more influential in society, more likely to become political leaders, to be published as authors or recognized as scientists. The wealthy, in short, start with an advantage in every domain, not just the financial.
This perspective is not wrong: "We are told debate is the great engine of liberal democracy. In a free society, ideas should do battle in the public forum.... In practice, modern debate has a structural bias in favour of demagoguery and disinformation. It inherently favours liars." So what do we do instead? The author suggests, ultimately, "the peerless technology of writing," but I don't think that's it either.
IMS has announced "the availability of Learning Tools Interoperability (LTI) version 1.3, a significant update to the core standard, and three new services that comprise LTI Advantage." The latter, LTI Advantage, is a package of services offered through LTI, specifically, "Names and Role Provisioning Services, Deep Linking, and Assignment and Grade Services." Providers need to be LTI Advantage Certified. IMS CEO Rob Abel is quoted as saying "Adoption of LTI Advantage by leading platform and learning tool suppliers is helping institutions accelerate the movement toward establishing a fully integrated and innovative digital ecosystem."
This article is dedicated to "establishing a common understanding of, and vocabulary around, the data-driven decision-making process." To that end, the author describes four categories of data users, the "three Vs of big data descriptions", structured versus unstructured data, the data-driven decision-making process, and types of machine-learning data analysis (prediction, estimation; supervised, unsupervised), ending with one short paragraph about privacy.
I have often compared the learning of a new discipline to the exploration of a new city, and made the point that while you could take a guided tour (either in a class or in a tour bus) there are other, more independent, ways to explore that are of equal or greater value - on foot, via metro, with a friend, with a map. Your choice. This article makes some suggestions for alternative ways to explore a city, and it seems to me they are equally valuable as ways to explore a discipline: look for ghosts and ruins, get there the hard way, eat somewhere dubious, read the plaque, and follow the quiet. I do all of these things both when exploring a city and when studying something new, and yes, it's worth the effort.
I'm not really sure what to make of this announcement. The story is that the Gates Foundation is convening today a research group called the Commission on the Value of Postsecondary Education to measure the value of a college credential. The membership is high quality, but their day jobs are demanding, so they won't be doing the actual research. At best they'll sit in a room and exchange opinions while a consultant writes it up. The data, meanwhile, has already been collected (without the input of the commission). Since it's all US-based, I'll leave it to others to discern the political leaning in the group. Most likely, as the designated critic says, "the panel is unlikely to bring down the price tag of a college education. Instead, it is focusing on whether programs are worth the price tag.”
The MOOC design tool is a template for describing and sequencing activities in a MOOC. This article (14 page PDF) describes the tool and the results of a limited survey-based evaluation of its function and utility. From my perspective, the main weakness of the tool is that it is designed horizontally rather than vertically. There are only six types of activity (video, readings, discussions, tests, emails, and other); these should be columns rather than rows. Then each row can be a time element - thus making the template more like an itinerary than a Gantt chart, and giving people more room to enter data. (It should also be digital rather than paper, so the page can increase in size when there's more to add.)
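To make the suggestion concrete, here is a minimal sketch of the 'vertical' layout proposed above: activity types as columns, time elements (weeks) as rows, itinerary-style. The column names match the six activity types from the article; the sample entries are illustrative assumptions, not part of the original tool.

```python
import csv
import io

# Columns are the six activity types; each row is a time element (a week).
columns = ["week", "video", "readings", "discussions", "tests", "emails", "other"]
rows = [
    {"week": "1", "video": "Intro lecture", "readings": "Ch. 1",
     "discussions": "Icebreaker", "tests": "", "emails": "Welcome", "other": ""},
    {"week": "2", "video": "Core concepts", "readings": "Ch. 2-3",
     "discussions": "Case study", "tests": "Quiz 1", "emails": "", "other": ""},
]

# Emit the itinerary-style table as CSV; a digital version could simply
# append more rows as the course grows.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=columns)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Adding a week is just adding a row, which is the "more room to enter data" point: a vertical design grows naturally, while a horizontal one runs out of page.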
As YABOs go, this isn't bad. It offers a quick outline of some of the key concepts, overviews some applications, offers some 'review' comments, poses some questions, and suggests some alternatives to blockchain. My main complaint is that after reading it, you still won't know what a blockchain actually is. (YABO = Yet Another Blockchain Overview).
I'm not sure whether it's worth filling out the spamwall form for this report (this direct link to the 45 page PDF appears to work) but it is a fairly comprehensive discussion of owned and earned social media commentary for higher education institutions. I know the title says 'conversation' but I'm not sure the authors have appropriately distinguished conversation from 'chatter'. This especially seems to be the case when they add campus sports-related commentary to the mix. The self-stated purpose of the article was to create benchmarks for the measurement of social media campaigns and conversation. I still found it an interesting read, even if I didn't feel especially enlightened at the end.
The Public Library of Science (PLOS) publishes open access scientific articles. These articles allow commenting, but very few people actually comment, a fact that caught the authors' eye given the nascent popularity of the concept of post-publication review. If people don't comment, how can we depend on them to review publications? It's a valid question. Different explanations are offered - perhaps academics prefer traditional venues, like staff rooms and conferences. Perhaps what's missing (especially for mega-journals) is community. Perhaps it's just taking time for new models of review to be accepted. Or maybe (and this is my own speculation now) academics don't comment because there's no reward for commenting, and they do what they've always done for reward: cite and comment in a publication of their own. And that's not so bad - what would be especially useful would be a way to view these follow-up publications, linked from the relevant paragraph in the original publication, the way WebMentions work.
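For readers unfamiliar with how WebMentions work, here is a rough sketch of the notification step: the citing document (source) tells the cited document (target) that it has linked to it, by POSTing a small form-encoded body to the target's advertised endpoint. The URLs below are hypothetical examples, not real endpoints.

```python
from urllib.parse import urlencode

def build_webmention(source: str, target: str) -> bytes:
    """Build the form-encoded body a Webmention sender would POST
    to the endpoint advertised by the target page."""
    return urlencode({"source": source, "target": target}).encode("utf-8")

# e.g. a follow-up paper notifying the original article it discusses,
# down to the paragraph it responds to (hypothetical URLs):
body = build_webmention(
    "https://example.org/follow-up-paper",
    "https://example.com/original-paper#paragraph-7",
)
```

In the full W3C Webmention protocol the receiver discovers the endpoint via a link header and verifies that the source page really does link to the target before displaying anything; the sketch above covers only the notification payload.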
I've thought a lot about comments. I've never had many comments - so few, in fact, that I no longer make the effort to support comments on my website. The same with distributed comments - I don't see many responses on Twitter or elsewhere. Even in the days of mailing lists, I wouldn't get many responses to my emails - I always just figured I had closed the discussion with the correct answer, so there was nothing else to say. Someone once said to me that everyone knows everyone else has read it, so there isn't much to add. I used to comment more than I do today, but my tolerance for logins, passwords and captchas is almost zero. Anyhow, this article talks about the flavours of comments. I don't know - I see websites with long comment threads and I'm still not sure what makes people comment on one website and not another.
This report (65 page PDF) outlines the road ahead in Europe for student mobility and credit transfer. It reports that "the European Commission has proposed the creation of a European-wide hub for 'online learning, blended/virtual mobility, virtual campuses and the collaborative exchange of best practices.'" This is directly related to the Common Microcredential Framework (CMF) mentioned here yesterday - here is a slide presentation from the European Association of Distance Teaching Universities (EADTU) outlining the motivation and approach that will be taken to establish the CMF.
A long time ago (a long time ago) there was this concept of the 'triad model', whereby a student studied at a remote institution while being hosted by a local learning centre. I wrote about it here, borrowing the term 'host-provider framework' from somewhere, and using my experiences supporting Athabasca University courses offered through vocational colleges in northern Alberta as a model. This model has never really disappeared, and we see it (sort of) revived with this article. The difference is that the name has been modernized ('micro-campus') and (in the case of this article at least) the learning centre offers courses from only one university (why?) and is located on some other university campus (why?). I can still foresee a role for local learning centres, offering access to support and coaching, the use of tools (such as a hololens or 3D printer) that students can't afford, and providing meeting space and project rooms.
Process Design of Cooperative Education Management System by Cloud-based Blockchain E-portfolio
Sukosol Wanotayapitak, Kobkiat Saraubon, Prachyanun Nilsook, International Journal of Online and Biomedical Engineering, 2019/05/14
This paper is written in poor-quality English, so adjust your expectations. Additionally, it is focused at the conceptual level, and contains a proposal only. Having said that, it succinctly describes the communications problem in the use of e-portfolios in cooperative education, and describes how a blockchain-based solution would address the issue. The paper is short of the many details that would need to be attended to in creating such a solution, but it's a glimpse into a possible future.
This is a good overview of Penn State's World Campus. A bit of a misnomer, World Campus is the public university's online branch and serves mostly US-based students. It enrolls 20,000 students and earns $170 million a year, for an average price tag of $8,500 per student. The article focuses mostly on how Penn State stayed small, forgoing the splashy ad campaign of (say) University of Maryland-Global, or the high-profile partnerships of Arizona State University.
As recently as last year staff at the Canada School of Public Service would have been very hesitant to even consider talking about offering an online course to the general public. Today we see one being launched. "The Public Engagement team at the Privy Council Office (PCO), in partnership with the Canada School of Public Service (CSPS), is pleased to invite you to a collaborative online program on public engagement and consultations. Hosted by CSPS and PCO, this program will offer tools and tips on everything from planning a consultation to running and facilitating a session to analyzing your data." Just.... awesome.
Photographs, murals and statues inescapably point to the past, but the past is infinite - we could display anything, from primordial soup to yesterday's breakfast. So we have to choose. That's why the art we display in our public places reflects what we think today about the past. It tells us what we want to celebrate, cherish and remember. That is why our choices about murals and statues are important. That's why we put up monuments to heroes, not criminals. So when people say we should take down art depicting oppression, slavery or genocide, we are not saying we want to erase history, we are saying that perhaps today we should stop celebrating oppression, slavery or genocide as though they were good things. And it's hard not to think of people who defend these displays as people who still think they were good things.
I was going to just skip past this article, but then the question occurred to me, what if we were to prohibit all classification schemes and taxonomies in writing on education? These abound, but they don't actually provide any information. These 8 characteristics are a case in point. What have we learned when we are told there's a thing called "innovator's mindset" that includes 'reflective', 'resilient', 'creators', etc? What is it about these things that leads us to group them together, and why these things and not some other things (like, say, 'energetic' or 'curious')? What is their causal efficacy - what creates them, and what do they create in turn? What is their role in explanation (beyond, say, 'problem solvers solve problems')? And - to get back to my original question - what would the same author say if they weren't allowed to use taxonomies and classifications to make their point?
According to this article, the European MOOC Consortium (EMC) has launched a Common Microcredential Framework (CMF). "The CMF launched at the recent EADTU-EU Summit 2019 in Brussels with the EMC’s founding platform partners, including FutureLearn, France Université Numérique, OpenupEd, Miríadax, and EduOpen." According to FutureLearn's Mark Lester, “Leaving work for long periods of time to earn a traditional qualification will be less applicable in this new world and a new solution is needed from the education sector to meet this growing need.”
This is a pretty good article on ethical issues related to the use of AI. One strength of this article is that it shows how the risk varies for different flavours of AI. "Models with more predictive power are often more opaque... leaders must probe their data-science teams on the types of models they use by, for example, challenging teams to show that they have chosen the simplest performant model (not the latest deep neural network)."
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2019 Stephen Downes Contact: email@example.com. This work is licensed under a Creative Commons License.