The title of this post is a play on the old Kevin Kelly book, What Technology Wants. The author's one-word answer is "understanding". But what would a machine's answer be? "Perhaps what technology feels, what technology wants, is right in front of us – we just can’t relate to it: it already feels & wills but not in a way remotely like us. I find this a terrifying thought: not that technology doesn't feel, but that it feels in a way that is incomprehensible to us." This reminds me of Nagel's What is it Like to Be a Bat? We couldn't really know the answer. Same with a machine, then. What is it like to be a carrot? What is it like to be a calculator?
I'm not (necessarily) going to recommend you view this presentation. It's here because reading it made my eyes roll - I recognized almost nothing in the title. I'd heard of 'continuous delivery' - that's where you continuously update your application or service using automation. But the rest? OK (takes breath). Spring is a framework for building applications in Java (Java is a programming language). Spinnaker is software that deploys applications to cloud services. Canary Analysis is a Google-supported system that evaluates performance metrics to make sure an update was safe. You can use Spinnaker for automated Canary Analysis. And that's what this presentation is about.
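To give a sense of the idea (this is a simplified, hypothetical sketch, not Spinnaker's actual API - its real analysis is statistical), automated canary analysis boils down to deploying the new version to a small 'canary' group, then comparing its metrics against the stable baseline before promoting it:

```python
# Hypothetical sketch of automated canary analysis: compare metrics
# from a small "canary" deployment against the stable baseline, and
# only promote the new version if no metric regresses past a threshold.
# (Illustrative only; real tools like Spinnaker/Kayenta use statistical
# judgment over time-series data, not a single-point comparison.)

def canary_passes(baseline, canary, max_regression=0.10):
    """Return True if every canary metric stays within 10% of baseline.

    Metrics here are 'lower is better' (e.g. error rate, latency in ms).
    """
    for name, base_value in baseline.items():
        canary_value = canary[name]
        if canary_value > base_value * (1 + max_regression):
            return False  # metric regressed too much; roll back
    return True

baseline = {"error_rate": 0.010, "p99_latency_ms": 250.0}
canary_ok = {"error_rate": 0.009, "p99_latency_ms": 260.0}
canary_bad = {"error_rate": 0.030, "p99_latency_ms": 255.0}

print(canary_passes(baseline, canary_ok))   # True: within tolerance
print(canary_passes(baseline, canary_bad))  # False: error rate tripled
```

The point of automating this (as the presentation describes Spinnaker doing) is that the promote-or-rollback decision happens without a human watching dashboards.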
I don't know how much this is overstated and how much is awful truth. However, given everything a browser can do today, I would say that even without being locked out by vendors, it would be very difficult to create a new independent browser (thank goodness for Firefox). Anyhow, the culprit in the present story is Encrypted Media Extensions, or EME, which is what enables companies like Netflix to offer secure videos. We covered EME in 2017. Though these are proprietary, the W3C agreed to make them a web standard three years ago. Fast-forward to today and "Samuel Maddock has been trying to create a rival 'indie' browser, and has been to each of the EME DRM vendors and has been sent away by all of them."
This article in Boston Review argues that democratic deliberation should take place before artificial intelligence (AI) systems are implemented in society. "A democratic critique of algorithmic injustice requires both an ex ante and an ex post perspective. In order for us to start thinking about ex post accountability in a meaningful way—that is, in a way that actually reflects the concerns and lived experiences of those most affected by algorithmic tools—we need to first make it possible for society as a whole, not just tech industry employees, to ask the deeper ex ante questions (e.g. “Should we even use weak AI in this domain at all?”). Changing the democratic agenda is a prerequisite to tackling algorithmic injustice, not just one policy goal among many."
David Wiley offers what appears at first to be a point of contact between his and my philosophy of open: "I am more interested in insuring that other people are able to do whatever they want or need to do with my content than I am concerned about making sure they can only do what I want them to do with it." Yes, for me it has always been about enabling other people. The difference is that Wiley sees this as a relation between himself and the person reusing the content, while I see this as a relation between myself and all potential users of the content. I cannot give (say) one person the right to commercialize my content without harming everyone else who would use my content. If you allow something you could have prevented, then you are endorsing it - whether it be the freedom to express a view different from your own, or the freedom to take your content and prevent anyone else from viewing it.
I've spent the better part of the morning working through this report (38 page PDF) on ethics in AI from many of my colleagues here at NRC. It summarizes work from workshops in Ottawa and London (UK). It's taking me this long because I'm constantly following up links - for example, to the UK House of Lords report on Ethics in AI (which has me reading many of the submissions, such as the one from EFF), and the European Commission's Ethics Guidelines for Trustworthy AI. The NRC document is supplemented with Day 1 slides and presentations and notes from a Day 2 workshop.
Will Thalheimer presents this report (22 page PDF) on 'learning transfer', which he says "occurs when people learn concepts and/or skills and later utilize those concepts/skills in work situations." What we know about transfer is based on what he states upfront is weak evidence. "I will roughly estimate as well over 80%... do not actually measure transfer," he says. He writes that "far transfer hardly ever happens. Near transfer—transfer to contexts similar to those practiced during training or other learning efforts—can happen." But there are exceptions. "A person's long-term development is likely to benefit from having a range of learning experiences... Similarly, people generate more creative insights when they have been prompted to look beyond the usual." This, though, speaks to distinct objectives of training - transfer versus creativity. Finally, he suggests "doing more to proactively reach out to the research-translator community" (also known, I would add, as the 'knowledge translation' or 'knowledge mobilization' community).
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2020 Stephen Downes Contact: firstname.lastname@example.org. This work is licensed under a Creative Commons License.