This is a great visualization of the major streams of thought in the field of complexity theory. I like the way it shows the links between the different strands, and also that it is an interactive graphic - click on an area and be taken to the relevant Wikipedia page. From my perspective it seems that the more recent topics signal an end-game for the field. In the 2010s we see 'applied complexity', 'complexity policy & evaluation', and 'mixed methods'. Via Dave Snowden.
This is a review paper intended to "explain what adaptive systems are and what kinds of data they require,... to categorize the main use cases and possibilities of adaptive systems [and] to outline the current limitations and concerns surrounding adaptive systems." In two paragraphs it deftly summarizes the landscape, listing new companies (Acrobatiq, Knewton, CogBooks, Cerego, Realizeit, LoudCloud, Smart Sparrow) as well as the work of publishers, LMS companies and universities. The article lists a number of studies showing effect sizes nearly matching that of 1-to-1 tutoring. But it also references a number of studies where "the results were decidedly mixed." And it describes three potential pitfalls: discrimination and labeling of students, creating consequential feedback loops; narrow constraints of knowledge, knowing, and learning; and questions around transparency, availability, and security of data. This isn't a long paper, but it's well-written and informative.
The Promise of Performance Assessments: Innovations in High School Learning and Higher Education Admissions
Roneeta Guha, Tony Wagner, Linda Darling-Hammond, Terri Taylor, Diane Curtis, Learning Policy Institute, 2018/01/18
"Like doctoral candidates with university dissertations," write the authors, in performance assessments "students often defend their projects and papers before panels of judges, who rigorously evaluate them against high standards; students typically revise their work until they meet the standards." The suggestion in this report (42 page PDF) is that performance assessments can (and should) replace more traditional (and test-based) assessments of high school graduates. On the one hand, I agree. Performance assessments would be a better measure. But they would also add more assessment to a system already overloaded with assessment. They would consume too much time and require more resources than the schools could provide. Where we are actually headed with this is automated performance assessment.
This is a post describing a teacher's experiences "opening a brand new micro-school and to work on technology tools that were intended to personalize my students’ learning." It was AltSchool, the Silicon Valley startup where Emerich worked for three years, leaving last June. The company changed course last year from running schools to selling software. Emerich suggests a possible reason for the change in course: "It was isolating with every child working on something different; it was impersonal with kids learning basic math skills from Khan Academy; it was disembodied and disconnected." This is a good article, describing the experience at length and from a first-person perspective.
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2018 Stephen Downes Contact: firstname.lastname@example.org. This work is licensed under a Creative Commons License.