Yes, it has only been a year, and I'm asking again. I have maintained OLDaily and the rest of this website at my own expense since 2001. It is not subsidized by my employer or anyone else. I've always been happy to do it, but I need your help. Click here to Donate.
This site gets a lot of traffic - 476K unique visitors, almost five million page views, and 1690.37 gigabytes of transfer in 2017. On average, it has cost $125 a month for the last ten years (currently, it's $US 140, or almost $200 Canadian, per month). Thank you to everyone who helped last year. I raised just over $3000, which paid for the server and the traffic.
I am committed to keeping all my services and resources free, and will not add a subscription to any part of my website, ever. That's a promise. So if you help me provide this service, I'd be happy to recognize your contribution, as thanks, on my Donation Page.
I always appreciate articles like this describing the why and how of some aspect of education technology. In this case it's Vicki Davis describing the tools and methods she uses to broadcast a regular podcast (I've always been tempted - I love audio - but it would take more time than I have in a week). Just for fun, as well: try to guess the sponsor link in the article - she discloses that she has a sponsor link, but doesn't tell us who the sponsor actually is (which breaks the spirit of the Federal Trade Commission regulation she says she's following in the disclosure, I think).
This article combines some much-needed optimism about educational technology (which has been in short supply lately) with some useful links. There's the HAIL Storm Network, Tsugi and NGDLE, Authorea (built on top of GitHub), and closer to the author's home at Duke, the OSPRI Lab's open source education technology project.
This is the text of science fiction writer Charlie Stross's address to the 34th Chaos Communication Congress in Leipzig in December. The speech rambles a bit but there are interesting reflections on how to predict the future (the key is combining the 85 percent of trends that will continue as expected and the small percentage that leave you wondering what happened), the role of corporations in society, the question of what AI wants, and what went wrong with it all (his explanation: the "mistake was to fund the build-out of the public world wide web—as opposed to the earlier, government-funded corporate and academic internet—by monetizing eyeballs via advertising revenue.").
The story here (buried in the third and last paragraph of this short article) is that "the University of Oklahoma Libraries has made available a Pandoc-based, web-hosted, open-source Markdown Converter." The idea behind 'markdown' is that it's a way of writing text that can be formatted (into bold, paragraphs, lists, etc.) without the use of computing code (like HTML or the languages used to define PDFs and MS-Word documents). It makes entering text into (some) web-based forms a lot easier. That's it. Meanwhile, I'm wondering why the author refers to Lincoln, Ryan and Konrad only by their first names while Alex Gil gets the full first-and-last name treatment. (Update: figured it out: Lincoln, Ryan and Konrad are regular Chronicle bloggers, while Alex Gil is not.)
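To make the idea concrete, here is a minimal sketch in Python of what a Markdown converter does - a toy that handles only headings and bold/italic text. This is purely illustrative and is not how the Oklahoma tool works (it uses Pandoc, which handles vastly more of the syntax):

```python
import re

def mini_markdown_to_html(text: str) -> str:
    """Toy converter for a tiny subset of Markdown: headings, bold, italics.
    Real converters like Pandoc cover lists, links, tables, and much more."""
    html_lines = []
    for line in text.splitlines():
        # **bold** -> <strong>...</strong> (must run before the italic rule)
        line = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", line)
        # *italic* -> <em>...</em>
        line = re.sub(r"\*(.+?)\*", r"<em>\1</em>", line)
        # "# Heading" (1-6 hashes) -> <h1>..</h1> through <h6>..</h6>
        m = re.match(r"(#{1,6})\s+(.*)", line)
        if m:
            level = len(m.group(1))
            html_lines.append(f"<h{level}>{m.group(2)}</h{level}>")
        else:
            html_lines.append(f"<p>{line}</p>")
    return "\n".join(html_lines)

print(mini_markdown_to_html("# Title\nSome **bold** text"))
```

The point of the exercise: the writer types plain, readable punctuation, and the converter produces the HTML (or, via Pandoc, PDF or Word) so they never have to.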
The Columbia Journalism Review is not above writing click-bait headlines, it seems (remember when clickbait headlines were the biggest problem in social media?). The idea has merit on first glance: make sure people have read the story before they can comment by asking them simple questions about what the story said. This did reduce the number of comments, but did not keep commenters on track. And critics point out that making it more difficult to engage with the story does not encourage people to engage with the story. When my high school English teacher used the same tactic on me I boycotted a year's worth of content quizzes.
"In short," writes Mike Caulfield, "the social media audience becomes one big training pool for your clickbait or disinfo machine." True. But even without social media, there will be no shortage of data for the machines. Consider, for example, the content from our learning management systems. Or our loyalty card programs. Or the telephone listings. There are issues with the input end - biased training data, for example - but the real problems are happening at the application end. That's when this data is used by machine learning algorithms to perpetuate stereotypes, create fake news, teach falsehoods, etc., and this can't be solved by the technology. Nor even by teaching people how to spot fake news. It's a social problem. It's a governance problem.
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2018 Stephen Downes Contact: firstname.lastname@example.org. This work is licensed under a Creative Commons License.