I'm linking to this item to make sure it remains available to me in future discussions of critical literacies. Here's the argument: "Many of the frameworks... do not take into consideration the social practices governing the use and writing on the web." The frameworks are conceptually defined and focused on finding and consuming rather than creating and communicating. "The classical approach to digital literacy is the reference framework for web literacy. This approach assumes that digital skills are useful in order for people to be capable of selecting, analyzing, processing, organizing, and transforming information into knowledge based on context and personal and social needs. We believe that this approach is excessively instrumental. This is because it does not take into account the new competencies the web offers for people to be active in constructing new pathways for social participation and, especially, learning." Exactly right. Image: Sandwell and Lutz.
More peer-reviewed literature involving the use of learning styles to describe, explain or predict learning outcomes. "This study examines how students with different cognitive styles (i.e., Holist/Serialist) react to three presentation modalities (i.e., text, text with graphic, and context) in game-based scenarios." I'm willing to grant the learning styles sceptics the benefit of the doubt, but at some point, rather than simply repeating that "there are no such things as learning styles," they will have to explain the continued persistence of learning styles in published research and explain why results like this are irrelevant.
Pew released a big report on truth and misinformation online yesterday and I was one of those consulted to contribute to it. The overall result was that "experts are evenly split on whether the coming decade will see a reduction in false and misleading narratives online." My opinion was that the incentives aren't right to offer hope of improvement. "There is too much incentive to spread disinformation, fake news, malware and the rest. Governments and organizations are major actors in this space." Additionally, I can't see either legislation or technology that limits what we can say helping the situation in any way. Read Umair Haque and you get the idea. More from Mic, Inside Higher Ed, Mashable, Poynter, Recode, Adweek.
Over the last decade or so there was no end to the stories talking about how mobile phones would bring the internet to developing nations and especially to Africa. I've covered these over the years. But the mobile internet has remained a chimera as the rollout of more advanced wireless - 3G and especially 4G - has stalled. "4G deployment in Africa will only reach 32 percent in 2020, and the actual adoption of 4G will be less than 10 percent." Now we're looking at 5G. On the one hand, it's a terminus - there won't be a 6G, as 5G is a collection of protocols that will evolve independently. On the other hand, it may offer a more stable target. This article predicts that Africa will catch up, but it will be a challenge. "Mobile networks have been optimized for phones, but 5G requires they support mobile broadband services, massive IoT and mission-critical services."
Nice review of work toward mixed reality (XR) (which would include virtual reality (VR) and augmented reality (AR)) in 2017 as well as discussion of "a draft WebXR API proposal for providing access to both augmented and virtual reality devices." Here's the review (quoted):
It's a busy time in the community for a technology that might be finally reaching its potential. I'm sure developers and marketers will be careful not to over-hype. Even in a field which benefits directly from it, like e-learning, the applications are limited to specific cases.
MIT's Media Lab has discovered the cMOOC (which they will now rebrand as 'not a MOOC'). "The ultimate goal of LCL is to cultivate an ongoing learning community, where people from around the world can meet one another and share ideas, strategies, and practical tips on how to support creative learning,” says Resnick. Mmm hmm.
"Algorithms can be an asset to nonprofit organizations, reducing costs and making processes more efficient," write Mancha and Ali, "but they can also be an ethical liability." There are many examples of algorithms making unethical decisions. For example, "in mid-September, when Hurricane Irma battered the Florida peninsula, the algorithms airline companies use to price flights increased rates in response to peaks in demand." Uber's surge pricing did the same during the London attacks and the New York bombing. The simple principle that it's not ethical to profit from tragedy eluded these systems. Algorithms have also violated basic ethical codes when making hiring, lending and face recognition decisions. These lapses aren't problems with the technology per se; they're the result of companies that don't care about ethics. This article makes recommendations to change that: make ethics important in your organization, hire employees well-versed in ethics, and test your algorithms against ethical standards.
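That last recommendation, testing your algorithms against ethical standards, can be made concrete. Here's a minimal sketch (the function names, cap values and emergency rule are my own illustrative assumptions, not any company's actual implementation) of a naive surge-pricing rule wrapped in a guard that suspends surging during a declared emergency, with simple assertions acting as the ethical test:

```python
def surge_multiplier(demand: float, supply: float) -> float:
    """Naive surge pricing: the multiplier rises with the
    demand/supply ratio, between a floor of 1.0 and a cap of 3.0."""
    if supply <= 0:
        return 3.0  # no supply data: fall back to the cap
    return min(3.0, max(1.0, demand / supply))


def ethical_surge_multiplier(demand: float, supply: float,
                             emergency_declared: bool) -> float:
    """Wrap the pricing algorithm with a rule encoding the principle
    that it's not ethical to profit from tragedy: no surge pricing
    while an emergency is in effect."""
    if emergency_declared:
        return 1.0
    return surge_multiplier(demand, supply)


# The "ethical standard" expressed as tests the algorithm must pass:
assert ethical_surge_multiplier(500, 100, emergency_declared=True) == 1.0
assert ethical_surge_multiplier(500, 100, emergency_declared=False) > 1.0
```

The point isn't the specific rule; it's that once an ethical principle is written down as an executable check, it can run in the same test suite as everything else, rather than being left to chance.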
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2017 Stephen Downes Contact: firstname.lastname@example.org This work is licensed under a Creative Commons License.