Needless to say, I don't structure my papers this way. Never have, never will. That said, I can see the logic of the structure and have no problem recommending it to others. There's an especially helpful diagram partway through the paper that describes the structure. Basically the idea is: summarize what other people know and find a problem; gather and analyze some data to address the problem; summarize the gap filled by your work and outline its limitations. So, why don't I use this method? It's hard to explain: my 'data' is my newsletter, which I can't really summarize. Also, I'm never working on one idea at a time. No piece of my work should be viewed outside the context of all the rest of my work; it's all one big work. And I'm not interested in problems so much as I am interested in new ways of seeing and imagining possibilities. Maybe that makes me a bad scientist? Perhaps, but it's what I do.
I really think it's only a matter of time before all student work is marked by AI. At a certain point, it will be hard to justify using human markers when AIs are demonstrably fairer and more reliable. But of course, this still needs to be demonstrated. That's where this work comes in. "This will involve senior human markers marking several thousand student essays multiple times. The responses will then be used to run a competition for 'individuals and organisations with expertise in AI' to attempt to train an AI system to mark 'similarly to the training set'." See also iNews.
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2020 Stephen Downes. Contact: email@example.com. This work is licensed under a Creative Commons License.