
OLDaily

Learning: Broad Perspective on Capability
Julian Stodd, 2022/07/13



This is more on learning outcomes from Julian Stodd. As always, his method is mostly to categorize or classify the phenomenon he is discussing, and what's interesting is not the classification itself but the thinking behind it. Why, I ask when I read him, would you classify things this way? Here he classifies outcomes variously as 'formal' and 'social', and in turn as 'specific', which is based on 'divergence' (or, I would say, the time-honoured technique of analysis and synthesis), and 'general', which is based on 'emergence'. But we need, I think, to be careful here. This strikes me as an unusual use of the terms 'specific' and 'general'. Though each is a form of generalization, abstractions (created through analysis) are not the same as patterns (which are emergent).

Web: [Direct Link] [This Post]


Artificial Creativity?
Mike Loukides, O'Reilly, 2022/07/13



We've seen a number of examples in recent months of artificial intelligences writing text, generating images, and performing other apparently creative tasks. This prompts the inevitable classification question: are the AIs being genuinely creative? It's tempting to say no - "a computer replacing a human's limited photoshop skills isn't creativity. It took a human to say 'create a picture of a dog riding a bike.' An AI couldn't do that of its own volition. That's creativity." But what the computer does, says Loukides, is separate the technique (artistic craft, compositional craft, etc.) from the artistic process (which is the finding of "something that didn't exist, and couldn't have existed, before"). Could a computer do that? Well - yeah. Just insert some random variables into its algorithms, and you'll get a lot of that. The key part of creativity, to my mind, is being able to note what's worth keeping. That is a recognition problem. And that means that, yes, a computer could be creative.
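
As a rough sketch of that argument (everything below is invented for illustration, not anyone's actual system): generate candidates by random variation, then let a recognition function decide what's worth keeping.

    import random

    def generate(base, n=50):
        # Random variation: perturb a base 'artifact' n times.
        # Here an artifact is just a list of numbers; a real system
        # would vary pixels, tokens, or model parameters.
        return [[x + random.gauss(0, 1) for x in base] for _ in range(n)]

    def recognize(candidate):
        # Stand-in for learned taste: score how 'worth keeping' a
        # candidate is. A random score is only a placeholder here.
        return random.random()

    base = [0.0] * 8
    kept = [c for c in generate(base) if recognize(c) > 0.9]
    # 'kept' is the creative output: random novelty, filtered by recognition.

On this view the hard part isn't the generator at all; it's whatever model stands behind recognize().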

Web: [Direct Link] [This Post]


Call for Model Examples of Zettelkasten Output Processes
Chris Aldrich, 2022/07/13



The Zettelkasten is a method of recording thoughts or bits of information on separate slips of paper and storing them for future use. Famously, Ludwig Wittgenstein organized his thoughts this way. Also famously, he never completed his 'big book' - almost all of his books (On Certainty, Philosophical Investigations, Zettel, etc.) were compiled by his students in the years after his death. So it is with some relevance that Chris Aldrich calls for "stronger examples of what these explicit creation workflows looked like," especially at the point where the individual items come together to form an essay or a book. In response, Matthias Melcher writes that he would "sift through them one branch after the other.... to see which items need to be pruned because they are tangents that are not well enough connected." I think this is hardly what Aldrich wants.
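
Whether or not it's what Aldrich wants, Melcher's approach has a straightforward graph reading: treat notes as nodes and links as edges, then prune items with too few connections. A minimal sketch, with an invented link structure:

    # Notes as a graph: each note id maps to the ids it links to.
    # The ids and the link structure are invented for illustration.
    notes = {
        "n1": {"n2", "n3"},
        "n2": {"n1", "n3"},
        "n3": {"n1", "n2"},
        "n4": {"n1"},          # a tangent with only one connection
    }

    def prune_tangents(notes, min_links=2):
        # Keep only notes with at least min_links connections.
        return {nid: links for nid, links in notes.items()
                if len(links) >= min_links}

    core = prune_tangents(notes)   # drops "n4", the weakly connected tangent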

But it's not a trivial problem. I have compiled, at latest reckoning, 35,669 posts - my version of a Zettelkasten. But how do I use them when writing a paper? It's not straightforward - I typically find myself looking outside my own notes, doing searches on Google and elsewhere. So how is my own Zettelkasten useful? For me, the magic happens in the creation, not in the subsequent use. The posts become grist for pattern recognition. The value lies not in classifying or categorizing them (except for historical purposes, to create a chronology of some concept over time) but in linking them intuitively to form overarching themes or concepts not actually contained in the resources themselves. But this is work my brain does, not my software. Then I write a paper (or an outline) based on those themes (usually at the prompt of an interview, speaking invitation, or paper invitation), and I flesh out the paper by doing a much wider search, not just within my own limited collection of resources.
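
The one place categories do earn their keep in that account - building a chronology of a concept over time - is easy enough to sketch; assume each post carries a date and a set of tags (the records below are placeholders, not real posts):

    from datetime import date

    # Placeholder records standing in for a large post archive.
    posts = [
        {"title": "Early note on a concept", "date": date(2008, 9, 1), "tags": {"concept"}},
        {"title": "The concept revisited", "date": date(2012, 3, 14), "tags": {"concept"}},
    ]

    def chronology(posts, tag):
        # All posts carrying a tag, in date order.
        return sorted((p for p in posts if tag in p["tags"]),
                      key=lambda p: p["date"])

    for p in chronology(posts, "concept"):
        print(p["date"], p["title"])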

Web: [Direct Link] [This Post]


AI Empowers Scalable Personalized Learning and Knowledge Sharing
Markus Bernhardt, Learning Solutions, 2022/07/13



Now I certainly believe in the value of artificial intelligence (AI) to support personal learning. But I want to be clear that I think this is the wrong way to use it. Here's what Markus Bernhardt describes: "being able to contextualize and 'organize' content through mappings... very similar to how a good textbook would use instructional scaffolding to guide the learning journey (the AI) would guide the learner through the complexity tree, from simple to more complex." The problem is that the learning comes when I do this organizing for myself, not when it's done for me. And I want to have my own learning objectives, not some small set of predefined objectives. In my own work, I train my own AI by feeding it examples of the sorts of work I feel fit into this or that category, and then use it to select new items, based on those categories, from a very large set of resources. The AI model is personal to me, and that's the way I like it.
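
As a sketch of that kind of workflow (an assumed, generic setup, not the actual tooling behind the work described here): train a small text classifier on hand-labelled examples, then use it to score a stream of new resources against those personal categories. The library calls are standard scikit-learn; the texts and labels are placeholders.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hand-labelled examples: items already sorted into personal
    # categories (placeholder texts and labels).
    examples = ["a post about learning analytics dashboards",
                "a post about open licensing and sharing"]
    labels = ["analytics", "openness"]

    # Train a small personal model on those judgments.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(examples, labels)

    # Score new items from a much larger incoming set.
    new_items = ["a paper on analytics dashboards",
                 "a note on open licensing"]
    for item, category in zip(new_items, model.predict(new_items)):
        print(category, "->", item)

The point of the sketch is the direction of control: the labels, and therefore the model, come from the learner's own judgments rather than from a predefined curriculum.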

Web: [Direct Link] [This Post]


This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2022 Stephen Downes. Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.