This is a really nice use of graphics to demonstrate concepts. You can see different sorting algorithms (bubble, cocktail shaker, insertion, shell, comb, etc.) applied to the sorting of coloured squares (each colour has its own numerical value, so it's easy). This allows you to get a visual sense of what the algorithms do and also to compare the methods. At the end of the post the different algorithms are raced against each other. It's Imgur, so the comments are mostly rude, but there was one nice link to a set of videos showing the same algorithms with both colours and sound. These all have different properties - some are faster, some use less memory, some are optimal for certain sorts of data.
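The core idea behind the visualization can be sketched in a few lines: each colour is assigned a numerical value, and a comparison sort (bubble sort here, the simplest of the algorithms shown) puts the squares in order. The colour names and values below are illustrative, not from the original post.

```python
# Each "square" is a (colour, value) pair; sorting by value sorts the colours.
colours = [("red", 3), ("blue", 1), ("green", 2), ("yellow", 4)]

def bubble_sort(squares):
    """Repeatedly swap adjacent out-of-order pairs until the list is sorted."""
    squares = list(squares)  # work on a copy
    n = len(squares)
    for i in range(n):
        for j in range(n - 1 - i):
            if squares[j][1] > squares[j + 1][1]:
                squares[j], squares[j + 1] = squares[j + 1], squares[j]
    return squares

print(bubble_sort(colours))
# → [('blue', 1), ('green', 2), ('red', 3), ('yellow', 4)]
```

The visual differences between the algorithms (bubble vs. shell vs. comb, etc.) come down to how far apart the compared elements are and how many passes are needed, which is exactly what the animated squares make visible.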
The perceptions of the meaning and value of analytics in New Zealand higher education institutions
Hamidreza Mahroeian, Ben Daniel, Russell Butson, International Journal of Educational Technology in Higher Education, 2017/10/27
This article isn't about analytics, but rather about what people think about analytics. Actually, it's about what a few people in New Zealand think about analytics. Normally I wouldn't care (even if their thoughts are published in an academic journal, and therefore, Better Than Us) but there's a nice observation to be found in the discussion. Some people "perceived analytics regarding its fundamental elements (e.g. data, statistics, numbers, KPIs, graphs, etc.)." Other people "viewed analytics in functional grounds... as the collection of a process... and tools... used to answer the difficult questions, techniques for extracting data and useful discerning information to describe and predict performance outcomes." It's unusual to see functionalism in academic papers about education; usually they're structural, describing processes and evaluations (but not what they're used for). As an aside, the journal editors should have corrected the Venn diagram, which should have used only two circles to depict the three concepts (structural, functional, structural and functional).
As the subtitle says, "this is the latest example of how bias creeps into artificial intelligence." A sentiment analyzer looks at text-based content (like tweets or blog posts or emails) and determines whether the sender is happy or sad, angry, bitter, sorrowful, or whatever. It does this based on the association between whatever is in the email and words associated with those sentiments. And that's where the bias creeps in. It takes various sentences that should be neutral (like "I'm a Christian", "I'm a Sikh" or "I'm a Jew") and assigns positive or negative sentiments to them. "A chief obstacle to programming a non-biased language AI is that the data itself is rarely purified of human prejudice."
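A toy lexicon-based scorer makes the mechanism concrete. The lexicon below is invented for illustration: if the training data happened to place identity words in disproportionately positive or negative contexts, those words end up carrying sentiment weight of their own, and a sentence that should score as neutral doesn't.

```python
# Hypothetical word-to-sentiment lexicon; the identity-word weights are the
# "learned bias" from skewed training data, not real values from any system.
lexicon = {
    "happy": 2.0, "great": 1.5, "sad": -1.5, "terrible": -2.0,
    "christian": 0.3, "jew": -0.2, "sikh": -0.1,  # biased entries (illustrative)
}

def sentiment(text):
    """Sum the lexicon scores of the words in the text; 0.0 means neutral."""
    return sum(lexicon.get(word.strip(".,!?'").lower(), 0.0)
               for word in text.split())

print(sentiment("I'm a Christian"))  # non-zero, though the sentence is neutral
print(sentiment("I'm a Jew"))        # negative, for the same structural sentence
```

Real systems use learned embeddings rather than a hand-written lexicon, but the failure mode is the same: the sentiment signal attaches to whatever co-occurred with it in the data, including identity terms.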
I took the Digital Drivers License (DDL) trial quiz and am disappointed to report that I failed, scoring only 6 correct out of 10. Since I don't think that my knowledge of the internet is badly flawed, I conclude that the test is. And that's the problem with (a) tests and (b) the concept of a Digital Drivers License. That, plus the fact that it still took me to a "buy now" page immediately after the end of the quiz. And that's why I think this article in an LSE blog is naive. It touts the benefits of the digital driver's licence created by the Alannah & Madeline Foundation in Australia, and notes that some 22% of Australian schools have registered for it (which means revenues of almost $1.8M - not bad for a bit of web content) but doesn't look closely at the content, which I think is biased and in some cases quite misleading.
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2017 Stephen Downes Contact: firstname.lastname@example.org. This work is licensed under a Creative Commons License.