by Stephen Downes
Jul 22, 2016
This document (64 page PDF) is more of a framework than a final statement on the topic of recognizing individual achievement, but as such it's a great start and will likely become a document of reference in the field. The structure constitutes the areas most people can agree on (for example: the four stages of validation are identification, documentation, assessment and certification) while the questions it leaves open are precisely those that need to be solved at a national or even a domain-specific level (for example: how is the credibility of the authority/awarding body assured?) The section on the centrality of the individual goes a bit further than the rest, and correctly so: "Validation aims at empowering the individual and can serve as a tool for providing second chance opportunities to disadvantaged individuals... The individual should be able to take control of the process and decide at what stage to end it."
The idea of the social contract was introduced by Thomas Hobbes in the 1600s as a means of justifying the continued rule of the monarchy. Without the stern rule of the monarch, he wrote, we would return to the state of nature where the lives of men were "solitary, poore, nasty, brutish and short." The myth of the social contract persists to this day, and is used for the same purpose. This is important, because when authors of articles like this one reference the unequal access to educational technology, and education, in terms of the social contract, it has to be noted that the prevailing social contract in western democracies is that there will be two-tier, indeed multi-tier, access to everything. And there is no appeal against the social contract - as Locke said, you have two choices: rebellion, or emigration.
As you can see, I wasn't willing to see Heather Ross's blog self-destruct. More to the point, I wanted to share her thoughts on digital citizenship, thoughts which go well beyond digital literacy. She cites Mike Ribble’s list of the Nine Themes of Digital Citizenship, a list which includes digital access, digital commerce, digital communication, and more. She ends with a "video about the 'filter bubble' that explains why you see a lot of what you as an individual see online." I don't really experience the filter bubble - there are days when I wish I did. But this isn't one of the posts I'd filter.
George Siemens gets this right. "This is where adaptive learning fails today: the future of work is about process attributes whereas the focus of adaptive learning is on product skills and low-level memorizable knowledge. I’ll take it a step further: today’s adaptive software robs learners of the development of the key attributes needed for continual learning – metacognitive, goal setting, and self-regulation – because it makes those decisions on behalf of the learner." As I (and no doubt many other people) have been saying, learning is about becoming a certain sort of person, not acquiring a certain body of content. So learning management is not a content selection and delivery problem.
Stephen Heppell is Professor of New Media Environments at Bournemouth University and has a long and good reputation in the field. "Lots of people spend time talking about 21st century skills," he says. "I don’t think any of that has changed very much. In the last century we thought about 20th century skills. I think pace is the thing that has changed, the speed of change is so great... I think the role of the teacher is to be passionate about learning. If you look around the world, teachers have become more and more driven to just deliver the curriculum, mark the books, organise the children, to do governance, and some of that passion has been lost."
This article runs through some of the standard pronunciations to the effect that the MOOC is not disruptive, throws out some stats attesting to their popularity, and then shifts into a discussion of what can be done to make MOOCs work, for example, by employing them in the flipped classroom model. Most of the article is structured around a conversation with Stanford University president John Hennessy, which I think explains the focus on traditional education models. The middle part of the article focuses on the Stanford model for universities. "If you look at the threat to most universities, it’s that their cost model currently grows faster than their revenue model," Hennessy says. "So now the question is, can you find a way to introduce technology and help reduce your cost growth?" Which brings us back to MOOCs, and Rick Levin, chief executive of Coursera. "Yale professors, instead of teaching a 15-person seminar three or four times a year, can teach 6,000 people in one sitting," he says.
(Note: to disable the site's limit on articles, search for and delete cookies with the string 'timesh' in your browser.) (The broken image accompanying the article is deliberate; I'm not sure why.)
The first two thirds of this post constitute a pretty good discussion of the Common Core emphasis on close reading (that is, reading where sentence construction and word selection are studied closely in order to understand the author's intent). A good reader reads closely naturally, and instances of ambiguity or errors of reasoning glare like red scars across the text. But a sole focus on close reading dismisses as irrelevant what the readers themselves bring to the work, rendering it a performance and not a dialogue. "Why should students be denied this same opportunity to 'break away' from the text as they make comparisons to personally relevant and timely issues related to a broader and more lively discussion of who and what determines an unjust law?" asks Jonathan Chase. This, he suggests, is a result of the focus of Common Core on outcomes, as defined by standardized testing, rather than on process, where "students’ thoughts and feelings matter a great deal."
Meetings on work integrated learning (WIL) are "beginning to resemble discussions of how many angels can dance on the head of a pin," according to this article from HEQCO. "We need to refocus the WIL and EE conversation from counting to the far more fundamental question of why we are promoting these experiences in the first place," write the authors. This is perhaps in response to this article from the Business/Higher Education Roundtable (covered in these pages here) where they argue "We need a common set of definitions and metrics to assess our performance, to ensure that we’re on the right track, and to learn what makes the best work-integrated learning programs truly valuable." The HEQCO argues, "the dominant question should not be the number of students having these experiences but rather whether these experiences are actually resulting in the development of the desired skills." Via Academica Group.
I'm not sure what to make of this except to agree that "much remains uncertain". The suggestion is that "some test questions are likely harder to answer on tablets than on laptop and desktop computers." I expect that if they included pen and pencil answers in the survey they'd find more of the same sort of result (by the end of my career as a student the only time I was using a pen was on an examination). We are told "the key to avoiding potential problems is to ensure that students have plenty of prior experience with whatever device they will ultimately use to take state tests." Thinking more outside the box, I would be more inclined to reconsider whether tests are an accurate means of assessment at all.
The Khanty-Mansiysk Declaration Media and Information Literacy for Building Culture of Open Government (3 page PDF) has been released in English and Russian. It is the outcome of a recent conference on the topic, held in June, and asserts the importance of related competencies such as "reliable information access and retrieval; information assessment and utilization; information and knowledge creation and preservation; and information sharing and exchange using various channels, formats and platforms." Obviously these are institutional competencies as well as individual ones. Media and Information Literacy was found to be important in contributing to open government, which includes "the transparency and accountability of state governance", "increasing opportunities for citizens' direct participation", and "effective and efficient monitoring of public authorities by civil society". All of this sounds reasonable - if ambitious - to me.
“People should be able to express diverse opinions and beliefs on Twitter. But no one deserves to be subjected to targeted abuse online." Quite right. This is not a question of free speech. Let's call this what it is: hate speech. It's designed to hurt. There's no place for this. It is violence disguised as words, and it causes real harm. It's long past time social networks began to take action on this sort of thing. More.
Ah, this post takes me back to the days of correcting student writing. Commentary requires clarity of thought, which is revealed only in clarity of expression. This piece displays neither, and serves as a good example of the standard to which pundits and academics alike ought to be held. For example, the sentence "In Paul Tough’s new book, he writes..." is badly constructed. Instead, write "In his new book, Paul Tough writes..." (thus making it clear who was writing). Also for example, the word "engendering" is misused. It means 'to cause' or 'give birth to'. But teachers don't "cause" grit to appear in students. They 'promote' it or 'support the development' of it. Also for example, the argument "But what has been left unsaid..." is a non-sequitur. If Tough is relevant at all, it's for what he said, not what he didn't say. Or for example, the phrase "instilling these skills in students" is misused the way "engender" was. Another example, "we could naturally embed..." suggests a very puzzling understanding of the role of the teacher. Or for example, "by moving to a competency-based learning system..." is again a bad phrasing, where the author means "by changing to..." or "by employing instead...". That's the first two paragraphs.
In this launch episode, Katie shares some preview clips from upcoming episodes of "Research in Action" and talks with Oregon State University’s Extended Campus Executive Director Lisa Templeton about how the "Research in Action" podcast came to be.
In this episode, Wendy shares some of her tips and suggestions for writing productivity based on her best-selling book Writing Your Journal Article in Twelve Weeks: A Guide to Academic Publishing Success.
In this episode, Katie talks with Dr. John Creswell about the current state of mixed methods and how he began writing about research methods.
In this episode, Katie talks with Lena Etuk about how social demographic data can be of use to researchers across disciplines and what it means to foster a culture of data-driven decision making.
In her first solo episode, Katie talks about some of the organizational strategies for juggling multiple research projects that she has developed over her time as a researcher.
In this episode, Jim describes his work with allegations of academic misconduct and shares some best practices for Responsible Conduct of Research training.
In this May 2016 preview, Katie shares clips from upcoming episodes and discusses the preparation for RIA's first call-in episode.
In this episode, Kirsten and Katie talk about their work together as research collaborators, discuss what to do when collaborations go wrong, and share some best practices for setting up strong collaborations from the start.
In this episode, Josh explains what it means to be a psychometrician and shares examples of how he uses psychometrics in his research on risk-taking tendencies and decision-making competence.
In this episode, Nina shares some of her strategies for learning new research skills at mid-career as well as how she keeps up with work while traveling.
In this episode, Kevin shares strategies for juggling research with teaching and service and some of the things he has learned in his time as a textbook author.
In this June 2016 preview, Katie shares clips from upcoming episodes and offers a final call for contributions to RIA's first call-in episode.
In this episode, Dannelle shares some of her ideas on the role of journaling for researchers and shares tips and ideas for an effective journaling practice.
In this episode, Katie talks with Steve about data management best practices and strategies for writing effective data management plans.
In this solo episode, Katie shares tips and strategies for creating a five-year research and professional development plan.
In this episode, Katie talks with Brad about his work at COIL and some of the benefits and challenges of creating institutional research agendas.
Take a listen to our July 2016 preview clips!
In this episode, Chrysanthemum shares about her experiences as a research and data analyst and offers some examples of data projects related to student success in higher education.
In this episode, Geoff talks about his experience with theoretical research and how best to share that research with the public.
In this episode, Tanya discusses her work at the National Research Center for Distance Education and Technological Advancements (DETA).
It's important that we note that "we are entering a world where an intelligence assistant recognizes our 'intent.' This could spawn a massive consumer behavior shift, as AI-influenced bots would mean far fewer Google searches by humans." I had hoped that by this time our 'personal learning assistant' could have made the list. Alas. Here's a high resolution version of the image.
Artificial Intelligence as a Service (AIAAS) is here. As this article notes, "At the less-expensive end is a knowledge-based approach that organizes data and language into highly malleable and helpful blocks of information." For example, there's "a virtual assistant known as ABIe (pronounced “Abby”) to answer questions from its 12,000 agents" (at Allstate). It was a bit like hiring Apple’s Siri at a sliver of the cost. Mike Barton, the division’s president, put it this way: "We think of ABIe as our precursor to cognitive computing on a shoestring."
I studied under Verena Huber-Dyson when I was in Calgary and was opened to a world where we question assumptions, consider alternative (but complete and consistent) forms of formalization, and explore a range of reasons why we ought to question our core 'truths' about mathematics and logic. "This century has seen the development of a powerful tool, that of formalization, in commerce and daily life as well as in the sciences and mathematics. But we must not forget that it is only a tool. An indiscriminate demand for fool proof rules and dogmatic adherence to universal policies must lead to impasses," she writes in this article from 1998. "Think of mathematics as a jungle in which we are trying to find our way. We scramble up trees for lookouts, we jump from one branch to another guided by a good sense of what to expect until we are ready to span tight ropes (proofs) between outposts (axioms) chosen judiciously. And when we stop to ask what guides us so remarkably well, the most convincing answer is that the whole jungle is of our own collective making - in the sense of being a selection out of a primeval soup of possibilities. Monkeys are making of their habitat something quite different from what a pedestrian experiences as a jungle."
"A science of human intelligence is indeed possible," writes Pierre Levy in a post last year, "but on the condition that we solve the problem of the mathematical modelling of language. I am speaking here of a complete scientific modelling of language, one that would not be limited to the purely logical and syntactic aspects or to statistical correlations of corpora of texts, but would be capable of expressing semantic relationships formed between units of meaning, and doing so in an algebraic, generative mode." I think we can agree that Facebook isn't this. Where the question gets hard is when we ask whether this is what we need. Is a scientific modelling of language, or of thought, possible? Is it desirable? Would we find this language physically instantiated in the human brain?
Words like 'intuition' or 'consciousness' are "suitcase words", says Marvin Minsky in this interview from 1998, "that all of us use to encapsulate our jumbled ideas about our minds. We use those words as suitcases in which to contain all sorts of mysteries that we can't yet explain." And in turn, he says, we start to think of these as entities in their own right, as things with no structures we can analyze. But consciousness, he says, "contains perhaps 40 or 50 different mechanisms that are involved in a huge network of intricate interactions... human brain contains several hundred different sub-organs, each of which does somewhat different things." Or, for example, "A 'meaning' is not a simple thing. It is a complex collection of structures and processes, embedded in a huge network of other such structures and processes." Or memory: "we use... hundreds of different brain centers that use different schemes to represent things in different ways. Learning is no simple thing."
I am increasingly left wondering how long social networks - Twitter, Facebook, Google Plus, LinkedIn - can survive. They can disappear absurdly quickly - remember Friendster? MySpace? And I think that dissatisfaction with the existing sites is strong enough that users will quickly drop them if something better comes along. There are several issues. One is the lack of privacy and security. This is what Paul Prinsloo addresses in this article. But there's more. Another is the sorting algorithms, which struggle with the basic contradiction between what we want to see and what the social network makes money showing us. Another is the steadily dropping quality of discourse on these sites. The advice to "never read the comments" should now be applied to the daily news.
Me: Have you been told not to do bad things online?
Me: How about good things?
These problems are nothing we haven't seen before, but this article makes a good case for each, plus some good discussion on proposed remedies (quoted):
Science, they say, is "ripe for disruption". But what would that even look like?
This is a summary of a study from Facebook, and it's important to keep in mind that Facebook is lobbying for a limited Facebook-only version of the internet in poorer countries. This is why it makes sense to say, for example, that "75% of the unconnected had never heard of the word 'internet.'" It's as though they won't know what they're missing if they get only Facebook. That said, it is unacceptable that 4 billion people don't have access to the internet. And it's not because the internet isn't relevant ('reason 3') for these people, nor is it because they are not ready ('reason 4') for the internet. It has everything to do with a global model of resource distribution where the necessities of life and the means of producing them - not only internet, but food, energy, housing, and the rest - are provided only to those who can pay for them. Facebook's wealth, and the system that produced it, is the reason 4 billion people are offline.
Kevin Kelly has a long history of being wrong about the future and his streak will continue with this article. The world he depicts here is not some sort of Star Trek Federation economy or socialist ideal - it's an end-state for a capitalist dream, where all ownership has been consolidated in corporations and individual people have nothing of their own. It's a world where, if you don't pay, you don't have anything, which means that (as today) social control and individual labour will be secured by corporations through the threat of cutting access to food, housing, entertainment, and more. Security, continuity, affinity - these are important to people, and physical objects are tangible instances of them.
"It's at the intersection of machine learning and graph technology where the next evolution lies and where new disruptive companies are emerging," according to this article. These are neural network technologies, and they work by analyzing connections, not contents. But there's a difference between 'machine learning' and 'graph technologies'. "machine learning takes large quantities of data to make predictions about future events. While graph technology is more concerned with the relationship between different data points... Some ML methods use ‘graphs’ to represent the learnings while others don't.”"
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.