OK, you're not actually going to learn how to do this simply by reading the article, but you will learn how it's done, and more importantly, that it can be done. The task breaks down into three parts: classifying images (do you see a cat? a rabbit?), describing images (providing a natural-language summary of the content), and annotating images (generating text descriptions for specific parts of the image). So basically we're associating object recognition with language strings (in English, in French, whatever). Going further, neural networks can act as feature extractors, which map images to "an internal representation of the image, not something directly intelligible." Language-generation algorithms, encoder-decoder architectures, and an attention mechanism round out the picture. It's pretty interesting.
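The attention idea mentioned above can be sketched in a few lines. This is a minimal illustration, not the article's actual model: the function names and the toy feature vectors are my own, and a real captioning system would learn these weights rather than use raw dot products. The gist is that the decoder scores each image region against its current state and builds a weighted "context" vector from the regions it finds most relevant.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax: shift by the max before exponentiating.
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(features, hidden):
    """Score each image-region feature vector against the decoder's
    hidden state, then return the attention weights and the weighted
    'context' vector the decoder would condition on."""
    scores = features @ hidden      # one dot-product score per region
    weights = softmax(scores)       # weights are positive and sum to 1
    context = weights @ features    # weighted combination of regions
    return weights, context

# Toy example: 3 image regions, each a 4-dimensional feature vector.
feats = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 1.0]])
h = np.array([0.0, 0.0, 2.0, 2.0])  # decoder state "looking for" region 3
w, ctx = attend(feats, h)            # w peaks on the third region
```

In a trained model the scoring step is usually a small learned network rather than a plain dot product, but the mechanics, score, normalize, blend, are the same.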
Lots of movement on the algorithmic accountability front (this is the idea that companies need to be able to explain, and be accountable for, conclusions their software draws about people). According to this article, Kate Crawford, principal researcher at Microsoft Research, and Meredith Whittaker, founder of Open Research at Google, "announced today the AI Now Institute, a research organization to explore how AI is affecting society at large. AI Now will be cross-disciplinary, bridging the gap between data scientists, lawyers, sociologists, and economists studying the implementation of artificial intelligence." We've been hearing the idea, in this article and elsewhere, for example from Cathy O'Neil in the New York Times, that there's no academic research being done in this area. But as pointed out in this Chronicle article, "the piece ignored academics and organizations that study the issues." Said Siva Vaidhyanathan, on Twitter, "There are CS departments and engineering schools that take this very seriously. MIT, Harvard, UVA, CMU, Princeton, GaTech, VaTech, Cornell Tech, UC-Irvine, and others all have faculty and programs devoted to critical and ethical examination of data and algorithms."
At my conference presentations I have the option of using my own backchannel system to allow attendees to use my interface, or a Twitter interface, to post comments in real time. Here's an example of it at work. There are two key differences between my system and the system described in this article, where conference organizers show a Twitter stream behind the speaker. First, I can see the comments in real time and respond to them directly. Second, I am in control; I can turn off Twitter, and I can turn off the system entirely. This is not to excuse the harassment of women speakers at conferences where Twitter is used. There's no excuse for it, and the attackers should be ashamed of themselves. Putting the speaker in charge of the response, though, goes at least some way toward redressing the power imbalance.
It's way too late for those who argued against the commercialization of the internet to say "I told you so." Though they could. We now need to ask the same questions about the education system. By way of context, here is Tim Berners-Lee on the current state of the web: "The system is failing. The way ad revenue works with clickbait is not fulfilling the goal of helping humanity promote truth and democracy... We have these dark ads that target and manipulate me and then vanish because I can't bookmark them. This is not democracy – this is putting who gets selected into the hands of the most manipulative companies out there."
This is an interesting article suggestive of future research (and future debates) but to date it is based on the flimsiest of foundations. The hook is a Kansas State University study claiming that using a brainwave headset, Muse, reduces student office referrals by some 70 percent. But the best I can find is a small group session on the subject; neither the EdSurge article nor the university press release refers to a published study, nor is the study listed on the Muse site, nor could I find it in a search. Still. Muse won't release its algorithm, which raises questions about the method it uses to collect its data. And a related company, BrainCo, "has plans to use student EEG information to create 'the world's biggest brainwave database.'" So who takes responsibility for how this data is used, or misused?
This is a bit of an odd article, but I'm including it here to keep at the top of mind an important initiative where "California Governor Jerry Brown asked the head of the state's community college system to develop a proposal for a fully online community college by November 2017." Why do I say it's odd? Well, for example, he begins by saying "community colleges are open-access institutions" as a lead-in to accessibility issues. Yes, accessibility is important, and we should design for accessibility first, but it's not what people usually mean when they say something is an open-access institution. Another is the suggestion that the college use "a model course approach". Does he mean a pilot course? A course template? Course design standards? The article also conflates flexible start times with competency-based learning, and with the need for "online and face-to-face" faculty meetings. None of this is wrong per se, but it feels odd. Could be me.
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2017 Stephen Downes. Contact: email@example.com. This work is licensed under a Creative Commons License.