by Stephen Downes
May 13, 2016
Language parsing has long been a challenge for artificial intelligence, as (contrary to myth) language defies easy formalization. So it's significant not only that Google has developed this tool, but also that they're making it available online. Even better, it has been given a name that properly reflects its seriousness as a research tool: Parsey McParseface. "One of the main problems that makes parsing so challenging is that human languages show remarkable levels of ambiguity. It is not uncommon for moderate length sentences - say 20 or 30 words in length - to have hundreds, thousands, or even tens of thousands of possible syntactic structures. A natural language parser must somehow search through all of these alternatives, and find the most plausible structure given the context." There's a really nice example of this in the article.
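To get a feel for where those numbers come from, here is a minimal sketch (my own illustration, not anything from the Google tool): the worst-case count of distinct binary parse trees for a sentence of w words is the Catalan number C(w−1). Real grammars prune most of these, which is why the article cites thousands rather than billions, but the combinatorial explosion is the point.

```python
from math import comb

def catalan(n):
    """n-th Catalan number: the number of distinct binary
    bracketings of a sequence of n+1 items."""
    return comb(2 * n, n) // (n + 1)

# Worst-case number of binary parse trees for a w-word sentence
for w in (5, 10, 20):
    print(f"{w:2d} words -> {catalan(w - 1):,} possible trees")
```

A 10-word sentence already has 4,862 possible binary bracketings, which is squarely in the "thousands" range the quoted passage describes; by 20 words the unconstrained count is in the billions, so a parser must search selectively rather than enumerate.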
Driving back from a lunch meeting I listened to this interesting program on Spark about LuminAI, a computer program that learns to dance by dancing with you. It does not have dance moves pre-programmed into it; rather, it learns through a process of pattern recognition (though the designers did cheat a little bit by giving it a dance repertoire language). What's interesting is not simply the understanding of how to learn without explicit instruction, it's also the idea of humans and AIs learning to work together. "Co-creative artificial intelligence, or using AI as a creative collaborator, is rare," said Brian Magerko, the Georgia Tech digital media associate professor who leads the project. "As computers become more ubiquitous, we must understand how they can co-exist with humans. Part of that is creating things together." Additionally, we can see this being a model for future instruction: an AI works with an expert for a period of time, learns what to recognize, then in turn is able to teach by working with novices (or as we called it, 'automated competency detection and recognition').
Good overview of a complex topic. "Knowledge creation is proposed as a third 'metaphor' of learning—in addition to the learning as acquisition and participation metaphors," write the authors, and now "Knowledge Building (KB) aims to move beyond metaphor to the realization of education as a knowledge-creating enterprise." This article provides an overview of Knowledge Building "to articulate its key ideas and explore its applications in various educational contexts." The result is an examination of topics such as collective intelligence, World 3 knowledge (i.e., "the body of human knowledge expressed in its manifold forms"), knowledge communities, knowledge building principles, and ultimately, the role of the teacher and the role of technology. A secondary task of the paper is to consider the empirical evidence for KB approaches and assessment of the methodology, particularly from the perspective of basic and domain-specific literacies. Note that the final article is behind a paywall; this link is to the ResearchGate version.
The battle shaping up over Academic Analytics is an interesting one. The service basically measures the publication and citation activity of some 270,000 faculty members. As their website states, "Academic Analytics' unique "flower chart" affords the viewer a visualization of the overall productivity of the faculty within a given academic discipline. Variables on different scales (per capita, per grant dollar, per publication, etc.) and measuring different areas of scholarly productivity can be viewed simultaneously on a single comparative scale based on national benchmarks for the discipline."
But the subject of this article is the response of professors at Rutgers who are objecting to being assessed by the service. One major concern is that it is inaccurate. This is especially a problem given the difficulties faculty have seeing their own profiles, violating the Leiden Manifesto recommendation to "keep data collection and analytical processes open, transparent and simple." Moreover, "the data lack nuance or accounting for research quality and innovation." But suppose these conditions could all be met: would there then be an objection to being assessed in this manner? Or are these conditions which, in principle, could never be met?
Uber-U is Already Here
Look at the elements of what we're calling here "Uber U" (quoted):
It would have been nice to be working toward this. Had things worked out the way I planned, we'd be sliding easily into this new vision with LPSS. But of course these concepts will move forward in any case, even while others fritter away their time working on enterprise LMS technology.
I actually don't care who defines 'personalized learning' nor how they define it so long as I can keep distinguishing it from 'personal learning'. But I think it's far-fetched to say "it seems to have no specific meaning at all" and even more so to say that "it means... robot tutor in the sky" (and yes, of course Knewton was over-reaching - anyone who understands how this technology works understands that it has been over-reaching). And having said all that, their own definition ("a family of teaching practices that are intended to help reach students in the metaphorical back row") is just plain weird. Their examples ("that teachers have been using for a very long time") include 'homework' and 'tutors'. I get what they're after - nobody wants a repeat of the co-option of terms like we've seen with 'open' and 'edupunk' and 'MOOC'. But this sort of non-definition won't help anything. Why not at least refer to a principled way of describing it, and work from there, instead of "asserting squatters’ rights" by pretending that nobody had ever attempted the task before? (p.s. -1 for mixing reference to LoTR and HP).