
OLDaily

Ed Tech Boxes
Tom Woodward, Bionic Teaching, 2022/07/19


This article captures what I feel is the essence of the recent 'edtech angst' people have been writing about. Tom Woodward quotes Martin Weller, who says, "a lot of new ed tech people are driven by values, such as social justice, rather than an interest in the tech itself," and responds, first, that edtech very much is about tech, at least in part (it's in the name!), and second, that "Edtech people with no interest in technology are like chefs who aren't interested in food. I guess you can do it at some level but you'll never be any good. Maybe you want a different job?" That's not to say there aren't problems. "I feel like if you're not mad about a huge swath of what's going on then you're not paying attention." But edtech is about the struggle. "I want people with serious concerns about edtech but I want people who see potential... Give me people who help you navigate complexity but don't hide it. Give me people who can see when it's technology causing a problem and when technology is (merely) providing evidence of larger societal issues." Yes.

Web: [Direct Link] [This Post]


Like Zombies, After 10 Years, the 60,000 Times Myth Will Not Die
Alan Levine, CogDogBlog, 2022/07/19


Alan Levine makes the point that the oft-repeated claim that "people process visual input 60,000 times faster than they do text" is completely unsubstantiated. I have two thoughts. First, the publication of this sort of unverified claim is a sign that the publication (in this case Inc., but also Business Week and a host of similar titles) is not trustworthy. There's a lot that could be said on this theme and about media literacy generally. Second, the actual literature about textual and visual input is a lot more interesting than this superficial claim, and these publications do us a disservice by misrepresenting it. I'm no expert, but I start, for example, with the question of whether what we do is even 'information processing'. For example: a thermometer measures temperature by converting the expansion and contraction of materials (most famously, mercury) into a digit (e.g., the thermometer says '40 degrees C' in London, U.K.). Now, did this thermometer 'process information'? And what if human cognition is like that?

Web: [Direct Link] [This Post]


CO2 meter review
Naomi Wu, Twitter, 2022/07/19


This thread on CO2 meters is among the more interesting things I've read this year. I'm not so interested in the comparison between meters (though do note that cheap meters aren't worth the money). What's more interesting are the results observed in the testing process. Where levels of CO2 are high, as they are in poorly ventilated rooms with lots of people, we get sleepy. Oliver Quinlan writes on Mastodon, "I've definitely felt that classroom and conference sleepiness too. Also driving sleepiness." It might also help explain, I suspect, why we feel so much better and more productive working from home. Good edtech in this case might simply be an open window (though not in the U.K. just now) or better ventilation.

Web: [Direct Link] [This Post]


Feminism Is For Everybody, Especially Educators!
We Are Open, 2022/07/19


I've signed up for this: "another free email-based course to help you become introduced to the philosophy of feminist pedagogy. This course is written for leaders and educators who want to integrate more feminist practice into their learning environments." The course is based on A Guide to Feminist Pedagogy by the Vanderbilt Center for Teaching (if you're reading the guide, look to the lower right for the 'Next' link to click through the contents).

Web: [Direct Link] [This Post]


Forget “Open-Source” Algorithms — Focus on Experiments Instead
Thomas Dimson, Future, 2022/07/19


I think this is great advice. "The problem with our current system is that the people running experiments are the only ones who can study them," writes Thomas Dimson. "By providing the community with more transparency about the experiments, the teams in charge of them can establish best practices for making decisions and reveal effects from experiments beyond what the team is studying." This breaks down into two types of openness:

  1. Open-source methodology: What is the intent of ranking changes?
  2. Open-source experimentation: What are the consequences of ranking changes?

Viewing the algorithm on its own tells us almost nothing about the software. But knowing how it's being tested and what testers are looking for tells us a lot about what it's being designed to do. And that, really, is what's important. Image: Towards Data Science.
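
To make the idea concrete, here is a minimal sketch (my own illustration, not from Dimson's article) of what an "open-source experimentation" disclosure might look like: instead of publishing ranking code, a platform publishes what each experiment changed, what it measured, and what it found. All field names and the example values are hypothetical.

    # Hypothetical sketch of a public experiment-disclosure record.
    # Assumes the platform is willing to describe each ranking test in plain terms.
    from dataclasses import dataclass
    from typing import List


    @dataclass
    class ExperimentDisclosure:
        """Public record of one ranking experiment."""
        name: str                    # human-readable label for the experiment
        hypothesis: str              # the intent of the ranking change (methodology)
        ranking_change: str          # plain-language description of what was altered
        primary_metrics: List[str]   # what the team was optimizing for
        guardrail_metrics: List[str] # side effects the team monitored
        population: str              # who was included in the test
        outcome: str = "pending"     # summary of results once the test concludes


    example = ExperimentDisclosure(
        name="downrank-reshares-v2",
        hypothesis="Reducing the weight of reshared posts lowers exposure to misinformation",
        ranking_change="Reshare score weight reduced from 1.0 to 0.5 in feed ranking",
        primary_metrics=["misinfo_view_rate"],
        guardrail_metrics=["time_spent", "creator_reach"],
        population="2% random sample of active accounts",
    )

    print(example)

Even a record this small covers both kinds of openness the article names: the hypothesis and ranking_change fields describe the methodology (the intent), while the metrics and outcome describe the experimentation (the consequences).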

Web: [Direct Link] [This Post]


This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2022 Stephen Downes. Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.