
OLDaily

Andrew Pelling challenges conventions in science and academia
Pat Rich, University Affairs, 2017/08/09



I get criticized for my own approach to research, so it's good to see this article about Andrew Pelling, a University of Ottawa biophysicist who indulges in, as the article says, "brainstorming whimsy." I can identify. "I'd make a discovery in the lab," he says, "and I'd be all excited and tell my colleagues, and they'd look at me with this blank expression and say, 'So what's the application?' It sent me a very clear signal that people only valued my research if there was a dollar sign or some bogus application at the end of it."

[Link] [Comment]


Cognitive bias cheat sheet
Buster Benson, Better Humans, 2017/08/09



This is from last fall, though I was prompted to link to it by this post with related flash cards to help you remember the 168 cognitive biases reported by Buster Benson (maybe we could use them to train AIs). The description is interesting enough, but what really makes this post is the epic diagram of all 168 biases at the end. "If you feel so inclined, you can buy a poster-version of the above image here. If you want to play around with the data in JSON format, you can do that here."

[Link] [Comment]


Lonely Planet’s New Trips App Makes You The Travel Guide
Emily Price, Fast Company, 2017/08/09



I have often compared the different ways of learning a domain to the different ways of exploring a city. What is typically missing from those accounts is how travellers capture, report on, and share the results of that exploration (though of course I have talked a lot about learning and working openly in general). Here's a newly released application that fills that gap on the travel side of the analogy: Lonely Planet's Trips, an iPhone app that "enables anyone to seamlessly upload photos and videos directly from their phone's photo library and craft stories illustrating each trip." The stories will be interesting, but so too will be the knowledge that can be mined from them.

[Link] [Comment]


Rise of the racist robots – how AI is learning all our worst impulses
Stephen Buranyi, The Guardian, 2017/08/09



I think it's a good thing that people are becoming more aware of the (current) limitations of artificial intelligence. When we simply train AI based on the thoughts and attitudes of, say, Google employees, we get a skewed perception of reality. But it's easy to criticize; the deeper question here is how we validate AI to ensure that it is not skewed. This is especially difficult given that the people who actually hold those views will accuse the validation process of political correctness and social engineering. I think it wouldn't be too extreme to require that AIs be constrained by a scientifically grounded knowledge base. That would be a technical challenge, and, given today's climate, a political challenge as well.

[Link] [Comment]


This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2017 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.