
OLDaily

Octopus set to make sea change in research culture
Alex Freeman, JISC, 2022/06/28



This post from the developer of Octopus answers some of the questions I raised yesterday. "We are still hoping that transparency will lead to self-regulation of the system, that the model can work for a large number of research subjects, and I am still sitting up late at night creating training datasets for the algorithm that someone else is skillfully teaching to 'seed' the system with existing research questions." Now what I'm wondering is whether we can have a set of different octopi, with different world views or competing paradigms. Image: the generic male researcher that accompanies the Jisc article, not Alex Freeman; she can be seen here.

Web: [Direct Link] [This Post]


Ethical Principles for Web Machine Learning
W3C, 2022/06/28



As this page notes, "the Web Machine Learning Working Group has published a First Draft Note of its Ethical Principles for Web Machine Learning." It argues that web machine learning enables "millions of do-it-yourself web developers and aligns this technology with the decentralized web architecture ideal that minimizes single points of failure and single points of control." It then traces some of the concerns typically central to the field, such as fairness and bias, and draws from the UNESCO guidelines for AI ethics. It offers a developing register of risks and mitigations, with the option to contribute to this via GitHub. Image: Wu, et al.

Web: [Direct Link] [This Post]


How Did Consciousness Evolve? An Illustrated Guide
Simona Ginsburg, Eva Jablonka, MIT Press Reader, 2022/06/28



Simona Ginsburg and Eva Jablonka argue that the evolution of consciousness can be explored by means of a 'marker' of consciousness, which specifically is a form of open-ended associative learning they call unlimited associative learning (UAL). Learning, meanwhile, is described as "an experience-dependent change in behavior" and "requires that we consider the kinds of stimuli that are attended to, the mechanisms of storage and of recall, the relevant rewards and punishments and the ways the organism responds." But what they call consciousness is very different from what I think most people would accept, requiring intentionality, agency, a sense of self, and more. And what they call evolution appears to me to be a very robust Lamarckian conception, where (say) the suffering experienced by one generation is inherited as changes in the nature of consciousness by the next generation.

Web: [Direct Link] [This Post]


Large language models have a reasoning problem
Ben Dickson, TechTalks, 2022/06/28



The question being considered here is, "can large language models (LLM) do logical reasoning like humans?" The answer, according to the research paper summarized here, is that they find clever ways to learn statistical features that inherently exist in the reasoning problems, rather than doing actual reasoning themselves. But I think it's worth asking whether humans do logical reasoning. We see people in basic courses on logic or probability make the same sorts of errors the AI systems seem to make. So, yes, while "Caution should be taken when we seek to train neural models end-to-end to solve NLP tasks that involve both logical reasoning and prior knowledge and are presented with language variance," the same holds for human learners. It takes a lot to train humans to perform higher-order functions like logic, math and language. Years, even.

Web: [Direct Link] [This Post]


This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2022 Stephen Downes. Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.