by Stephen Downes
Jan 10, 2017
We are rapidly approaching a world in which software and service designers simply plug their applications into an AI service to perform increasingly useful tasks. It's a question of the need for scale. "If you’re talking about systems that... do difficult things like natural language processing and unstructured data mining... it makes sense to centralize them in the cloud." But opportunity makes a virtue out of necessity. We can have light-weight applications that access numerous services on an as-needed basis. Note: O'Reilly requires social media sign-in to read this article (and probably sells your data).
Traditional educational research is (to my mind) often misleading or irrelevant. I am not alone in this assessment, as this article suggests. And while I'm quite properly sceptical about the research that may be offered up by a commercial enterprise (in this case, Pearson) I think the arguments in this post are sound, and in particular endorse this: "For research to meaningfully impact teaching and learning, it will need to expand beyond an emphasis on controlled intervention studies and prioritize the messy, real-life conditions facing teachers and students."
If the descriptions in this post are accurate (and there's no reason to suppose they aren't) then proposals from the European Commission to greatly extend copyright law would render many common online behaviours (including this newsletter) illegal. European Parliament member Julia Reda writes, "These proposals are pandering to the demands of some news publishers to charge search engines and social networks for sending traffic their way (yes, you read that right), as well as the music industry’s wish to be propped up in its negotiations with YouTube." Among the illegal behaviours: sharing snippets of news articles, tweeting a news headline, pinning photos to Pinterest, having a search engine index the web for you, and more.
This article contains all kinds of goodness as it profiles Terry Winograd, one of the pioneers of human-computer interaction (though I wonder how many people in HCI have even heard of him). Winograd's story is intertwined with the history of philosophy of mind and artificial intelligence (AI): at MIT he is taught by and interacts with the likes of Marvin Minsky, Hubert Dreyfus and John Searle. Then there's a stint at Xerox PARC. Then at Stanford a student named Larry Page had the good fortune to have him as an advisor. Winograd himself marks the transition from the belief that AI is based on symbolic representations of the world to the belief that AI is based on "bringing forth of the world through the process of living itself.” (All this, as an aside, is also much of the philosophical basis for connectivism.) Don't miss this article.
The headline would make more sense if you couldn't use Google to learn philosophy, but in fact, you can. Of course, learning philosophy (or anything else) means doing much more than merely reading about it and remembering stuff. You have to do philosophy, and online or off, it is every bit as hard as Charlotte Blease says it is. "It requires us to overcome personal biases and pitfalls in reasoning. This necessitates tolerant dialogue, and imagining divergent views while weighing them up." Having said that, I have long argued for the teaching of philosophy - and especially critical thinking - in schools, as have many others before me. Far better to learn to think than to learn to memorize.
This is a funny story with a surprise inside. The funny part is the artwork: an artist created glass blocks exactly the dimensions of a FedEx box and then shipped them in those boxes, producing unique art out of the cracks and breakage that resulted. The surprise is that it turns out that FedEx has corporate ownership over that space. "There’s a copyright designating the design of each FedEx box, but there’s also the corporate ownership over that very shape. It’s a proprietary volume of space, distinct from the design of the box." Now I'm afraid I might accidentally violate FedEx's ownership over that specific shape should I decide to, I don't know, create my own mailing box.
Programmed math instruction is fraught with potential pitfalls, as any designer knows, and some of them appear to have caught the Teach to One math program being piloted in California. But this article is as much a study in perceptions as it is about technology. In the letter written to the school board, parents complain that (in addition to some bugs in content alignment) the Teach to One system isn't the way it used to be. Look at the criticisms: it doesn't follow a 'logical pathway', it spends less time on some topics, teachers spend less time with students, the content isn't organized by levels, there are no textbooks, and collaboration isn't working. None of these are flaws in and of themselves. If students aren't engaged - yes, that's a real flaw. But it's not wrong just because it's different.
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.