
OLDaily

How Harvard Business Review is using a new social-first vertical to reach a younger audience
Kayleigh Barber, Digiday, 2021/10/12


I personally think of Harvard Business Review as being more in the business of marketing a certain ethical and social perspective than in offering research and insight. And that, I think, goes double for its network for "modern, global young professionals" called Ascend (I mean, look at the name). The play now is to get it into social networks. "To continue growing the vertical, the brand promoted Paige Cohen as Ascend’s editor-in-chief who is tasked with reaching this audience on rapidly emerging platforms, like TikTok... the strategy depends on Ascend readers discovering and engaging with HBR through this vertical, remaining loyal to HBR - and eventually pay for the coverage - throughout their careers." (Digiday is another one of those publications offering "one more free article this month" that you can circumvent with Firefox and uBlock Origin.)

Web: [Direct Link] [This Post]


Feedback options in Turnitin and Canvas SpeedGrader
Emilie Hayter, University of Sussex, 2021/10/12


This post points to a bit of a change of direction for the anti-plagiarism software Turnitin. Not that they're going to stop comparing student work to their ever-growing database any time soon. But now they're able to leverage this access to provide more services for instructors grading assignments, including some auto-marking (for example, writing 'awk' on awkward phrases, and correcting incorrect citation formats) and comment assistance tools (including audio and video annotations). I think that in the long run anti-plagiarism will be a smaller and smaller part of the company's offerings, and I'm sure they'd like to get into the automated assessment market.

Web: [Direct Link] [This Post]


Gaggle Surveils Millions of Kids in the Name of Safety. Targeted Families Argue it’s ‘Not That Smart’
Mark Keierleber, The 74, 2021/10/12


This post continues recent investigations on the use of a product called Gaggle to spy on students. The sales pitch is that it helps prevent suicide, but examples show it also flagging students for using profanity, for being gay, and for other unrelated reasons. Critics argue that messages are flagged inappropriately, that the algorithm is inconsistent, and that it opens students to discrimination on the basis of race, gender and other factors. It's also worth adding (for the people who always say "but we need evidence...") that most of the data on the surveillance and its purported effectiveness are not being released. Good work from The 74.

Web: [Direct Link] [This Post]


Beyond Copyright: the Ethics of Open Sharing
Creative Commons, 2021/10/12


A Creative Commons working group has been developing a white paper on the ethics of open sharing "that outlines some of the most pertinent and pressing questions and use cases involving the ethics of open sharing." They've issued a call for "you and the members of your community to join us in an open consultation to review and provide comments and feedback on this document. The goal is to get wide-ranging input about these thorny but essential questions at the crossroads of ethics and open sharing. This paper will be published on Creative Commons’ Medium publication and outcomes will be shared in a public webinar hosted by CC on 9 November, 2021."

Web: [Direct Link] [This Post]


The Facebook whistleblower says its algorithms are dangerous. Here’s why.
Karen Hao, MIT Technology Review, 2021/10/12


While most opposition to the way Facebook runs its business focuses on the harmful content, simply blocking that content is not the most effective response. This to me is the most important part of Frances Haugen's testimony last week against the company. Here's the reasoning: "I'm a strong advocate for non-content-based solutions, because those solutions will protect the most vulnerable people in the world," Haugen said. To the extent we need algorithms at all, these algorithms should be owned and run by individuals for their benefit, not by centralized publishers simply to make money.

(Because it only has $18.4 billion in its endowment, MIT can't afford to make Technology Review freely available, so you may be faced with a warning 'You have 2 articles left' or even a paywall. Use Firefox with uBlock Origin to refuse tracking cookies and other spamware; the count resets each time you restart Firefox, and you can read the article.)

Web: [Direct Link] [This Post]


Generating Animations from Screenplays
Yeyao Zhang, et al., Disney Research, 2021/10/12


Think about the implications of this for learning resource production: "Automatically generating animation from natural language text finds application in a number of areas e.g. movie script writing, instructional videos, and public safety." This paper (16-page PDF) from Disney Research came out a couple of years ago. Here's a more accessible article from VentureBeat about the research. Aside from a mention on Reddit there isn't anything recent on this, but definitely watch this space.

Web: [Direct Link] [This Post]


This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2021 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.