Why I Built a Homelab (and Why You Might Want To)
2025/10/10
I have the feeling Ian O'Byrne may conclude by the end of his series that this approach isn't the best way to manage your online presence, but that said, running a 'homelab' from scratch is no doubt going to teach him a ton about the tools we use. "At its simplest, a homelab is just a personal computing environment you control. It might be an old desktop running Linux in a corner, a tiny Raspberry Pi serving your files, or a small server rack quietly humming in the basement. It's a space where you can experiment, break things, fix them, and learn how digital systems actually work." As he sets up his self-hosting environment, he'll learn about the real issues developers face. "Ultimately, this isn't just about technology. It's about understanding the systems that shape us, and imagining how we might shape them in return."
Web: [Direct Link] [This Post] [Share]
AI tutors coming to California Community Colleges
Shawna Chen,
Axios,
2025/10/10
According to this article, "California Community Colleges (CCC) is partnering with AI company Nectir to launch an AI learning assistant for its 2.1 million students." The point-form article reads like it was AI-generated, though there's no indication of this on the web page. Still, according to the article, "One of Nectir's first pilots at Los Angeles Pacific University found that after a full term of using the platform, students saw a 20% jump in GPA, 13% increase in final scores and a 36% boost in their intrinsic motivation to learn."
Web: [Direct Link] [This Post] [Share]
Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models
Qizheng Zhang, et al.,
arXiv,
2025/10/10
Your new acronym for today is Agentic Context Engineering (ACE), "a framework that treats contexts as evolving playbooks that accumulate, refine, and organize strategies through a modular process of generation, reflection, and curation." As Robert Rogowski summarizes today, "It overcomes two major failures in adaptive LLMs: brevity bias (over-compression of prompts) and context collapse (loss of detail during rewriting)." Indeed, from my observation, a lot of the 'AI hallucinations' we read about can be avoided with better prompts. On the other hand, it is arguable that prompt engineering is just a way of smuggling human knowledge into AI systems. Either way, though, it does show that the context of an inquiry has a significant impact on the outcome. 23-page PDF.
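To make the loop concrete, here's a minimal sketch of what generation, reflection, and curation might look like in code. The function names and the playbook structure are my own invention for illustration, not the paper's implementation, and 'llm' stands in for any text-in, text-out model call:

    # Minimal sketch of an ACE-style loop (hypothetical names, not the paper's code).
    # The playbook is an evolving list of strategy entries fed back into the model
    # as context on every call, rather than a compressed summary prompt.

    def format_entries(playbook):
        # Render the accumulated strategies as a bulleted context block.
        return "\n".join(f"- {entry}" for entry in playbook)

    def parse_lessons(text):
        # Pull discrete lessons out of the reflection, one per line.
        return [line.lstrip("- ").strip() for line in text.splitlines() if line.strip()]

    def ace_step(task, playbook, llm):
        # Generation: attempt the task with the full playbook as context.
        answer = llm(f"Playbook:\n{format_entries(playbook)}\n\nTask: {task}")

        # Reflection: ask for specific, reusable lessons rather than a short
        # summary, countering 'brevity bias' (over-compression of the prompt).
        lessons = llm(f"Task: {task}\nAnswer: {answer}\n"
                      "List the specific strategies that helped or failed, one per line.")

        # Curation: append lessons as discrete entries instead of rewriting the
        # whole playbook, guarding against 'context collapse' (loss of detail
        # during wholesale rewrites).
        for lesson in parse_lessons(lessons):
            if lesson not in playbook:
                playbook.append(lesson)
        return answer, playbook

The design point the paper stresses, as I read it, is in that last step: the context grows by accumulation and curation rather than being repeatedly summarized, which is what preserves the detail that over-compressed prompts lose.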
Web: [Direct Link] [This Post] [Share]
What Large Language Models Teach Us About 'Human Knowledge'
Tony Fish,
Medium,
2025/10/10
I like the way this article topples the 'wisdom' pyramid. Consider AI: it "doesn't climb a virtual digital pyramid from data to information to knowledge to wisdom... Instead, it processes patterns, generates responses, and creates what appears to be wisdom through statistical relationships between tokens." This raises the question: why do we suppose humans are any different? "Perhaps the hierarchy we've constructed is less about the nature of information and more about how we've chosen to categorise our own cognitive processes." Perhaps "the process of generating appropriate responses to complex situations doesn't require climbing a hierarchy at all... Perhaps the value isn't in the accumulation but in the generation; not in reaching the summit, but in developing the capacity to respond thoughtfully to whatever terrain we encounter."
Web: [Direct Link] [This Post] [Share]
To Understand AI, Watch How It Evolves
Ben Brubaker,
Quanta Magazine,
2025/10/10
This is quite an interesting article on how we understand AI, though it feels like it ends too soon. I recommend reading it from the perspective of human learning. What I mean is, imagine the interviewee, Naomi Saphra, is talking about humans, not AI. Most of what she says still makes sense. For example, "Just as biologists must understand an organism's evolutionary history to fully understand the organism, she argues, interpretability researchers should pay more attention to what happens during training." And also, "The model already wants to learn the easy thing. Your job is to keep it from learning the easy thing right away, so that it doesn't just start memorizing exceptions. That might make it hard to generalize to new inputs in the future."
Web: [Direct Link] [This Post] [Share]
There are many ways to read OLDaily; pick whatever works best for you.
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2025 Stephen Downes. Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.