There is power in a factory, there is power in the land
There is power in the hands of the worker
But it all amounts to nothing if together we don't stand
There is power in a union
Why does Daniel Willingham continue to rail on about learning styles theories? After more than a decade, most people would wrap up a discussion and move on to a different topic. He is, in fact, arguing against something very specific. After all, he agrees that "the style distinctions (visual vs. auditory; verbal vs. visual) often correspond to real differences in ability. Some people are better with words, some with space, and so on." Where he disagrees with the theory is where people argue that "everyone can reach the same cognitive goal via these different abilities." This corresponds to what he has always said: that it is the nature of the content that dictates how it should be taught, not the nature of the learner. But why would this matter so much that he comes back to it year after year? I think it's to reassert, again and again, that learning is about content, not learners. And that's where we disagree. If you're pushing content into a learner, then you focus on the content. But if you're developing the learner, you focus on the learner. The former can be mass produced by publishers and content vendors. The latter can't.
One of the things I've learned working with the new version of mooc.ca is that course providers pay very little attention to syndicating their content (and typically even those that syndicate would really rather you simply stayed with their site, and many absolutely won't let you export content out of their environment). And of course there's little to no consistency in course syndication formats. I've been working with course syndication formats for many years - I've worked with LOM and IMS content packaging, created an RSS-LOM, and more recently have been working with various JSON formats. Here we have old-school XML specifications, and a crosswalk between two of them: XCRI-CAP (eXchanging Course Related Information, Course Advertising Profile) and schema.org JSON-LD.
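For readers unfamiliar with the schema.org side of that crosswalk, here's a minimal sketch of what a Course entity looks like when serialized as JSON-LD. The course name, description, and provider below are hypothetical placeholders, not drawn from any real provider's feed:

```python
import json

# A minimal schema.org Course description expressed as JSON-LD.
# All field values here are made-up placeholders for illustration.
course = {
    "@context": "https://schema.org",
    "@type": "Course",
    "name": "Introduction to Connectivism",          # hypothetical
    "description": "An open online course on networked learning.",
    "provider": {
        "@type": "Organization",
        "name": "Example University",                # hypothetical
    },
}

# Serialize to the JSON-LD document a syndication feed would carry.
doc = json.dumps(course, indent=2)
print(doc)
```

The appeal of JSON-LD over the older XML packaging formats is exactly this: it's ordinary JSON that any consumer can parse, with the vocabulary pinned down by the `@context`.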
There is a push afoot - led by Google but supported by multitudes - to move the entire web to encrypted communication. Websites which use 'https' are encrypted, but the others aren't. Encryption requires a certificate, which has always been the stumbling block, because these require verified identity, and they can be expensive, especially if (like me) you're running a number of domains. Jim Groom's post highlights Let's Encrypt, an effort to get free certificates into the hands of website owners in a drive to encrypt everything. I'll have to wait until January of 2018 for the wildcard certificates. Also, while I've gone through the install process a few times now (once for downes.ca which has a now-expired certificate, and also for my mail server) it still remains mysterious and complicated.
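Since expired certificates are part of the story here, a quick sketch of how you can check one programmatically with Python's standard-library ssl module. The function requires network access and the hostname is whatever site you want to check; the date string at the bottom is just an example of the OpenSSL date format, not an actual certificate's expiry:

```python
import socket
import ssl
import time

def cert_expiry_epoch(hostname, port=443):
    """Fetch a site's TLS certificate and return its 'notAfter'
    expiry date as seconds since the epoch. Needs network access."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # 'notAfter' is an OpenSSL-style string like 'Jan  5 09:34:43 2018 GMT'
    return ssl.cert_time_to_seconds(cert["notAfter"])

# The date parsing itself needs no network; this example date is hypothetical.
expiry = ssl.cert_time_to_seconds("Jan  5 09:34:43 2018 GMT")
print(expiry < time.time())  # a certificate with this date is long expired
```

Let's Encrypt certificates are only valid for 90 days, so a check like this (and automated renewal) matters more than it did with year-long commercial certificates.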
This post has been making the rounds recently, and it was no doubt calculated to generate the negative response it's receiving. And let me jump on board and agree that nationalizing social media is a dumb idea. We would never generate the value for the money we'd spend. But. There is an argument for noncommercial alternatives to Facebook and Twitter, an analogue to public mail delivery or public broadcasting. The sort of model I would envision would be a public service providing each person with web server space and a distributed social media app (along the lines of Mastodon, but where each person could have their own individual instance). The trick is doing it on a cost-effective basis (though note that the government spends upward of $1 billion on the CBC (about $27 per Canadian (money well spent))).
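The per-Canadian figure is a simple division; here's the arithmetic, assuming a 2017 Canadian population of roughly 36.7 million (the population number is my assumption, not stated above):

```python
cbc_budget = 1_000_000_000   # "upward of $1 billion", per the post
population = 36_700_000      # rough 2017 population of Canada (assumption)

per_capita = cbc_budget / population
print(round(per_capita))     # roughly $27 per Canadian
```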
This is an interesting and very detailed examination of attempts to create music using artificial intelligence. It tracks what are (to my mind) two major stages in the evolution of this work: first, the shift from symbolic representations of music to actual samples of music; and second, the shift to convolutional neural networks: "Convolutional networks learn combinations of filters. They’re normally used for processing images, but WaveNet treats time like a spatial dimension." It makes me think: that's why humans have short-term memory (STM). Not as a staging area for long-term memory (LTM) but as a way of treating time as a spatial dimension. There's the obligatory question of whether these will replace humans, posed at the very end of the article (to no effect whatsoever) and a look at the use of these techniques to generate spoken word audio.
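As a toy illustration of treating time as a spatial dimension: sliding a filter along a sequence of audio samples is the same operation a convolutional layer performs along a spatial axis of an image. The signal and filter values below are arbitrary hand-picked numbers; real WaveNet filters are learned, dilated, and causal:

```python
def conv1d(signal, kernel):
    """Slide a filter along the time axis ('valid' positions only),
    exactly as a conv layer slides it along a spatial axis."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A toy "waveform": amplitude samples over time (arbitrary values).
signal = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]

# A hand-picked smoothing filter; in WaveNet these weights are learned.
kernel = [0.25, 0.5, 0.25]

smoothed = conv1d(signal, kernel)
print(smoothed)  # → [0.5, 0.0, -0.5, 0.0, 0.5, 0.0]
```

The filter doesn't care whether its axis is pixels or milliseconds, which is the whole trick the article describes.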
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2017 Stephen Downes. Contact: email@example.com. This work is licensed under a Creative Commons License.