Stephen Downes

Knowledge, Learning, Community

As Ben Dickson writes, ChatGPT and other LLMs are limited to their training data. That's why they make factual errors; they simply don't have the facts in the first place! The solution to this is 'embeddings' (and we'll see a lot more about this in the future). The idea is that you supplement ChatGPT with your own resource library: each document is converted into an embedding, and when a request comes in, the system retrieves the closest-matching document (or documents) from your library and passes them to the model to form a response. I haven't tried it yet, but this article provides complete instructions, so a trial is in my near future.
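In outline, the retrieval step is simple: embed your documents, embed the incoming question, rank the documents by similarity, and hand the best matches to the chat model as context. The following is a minimal sketch of that idea in Python, not the article's exact recipe; it assumes the OpenAI Python SDK (v1 client) with an API key in the environment, and the model names and tiny in-memory "library" are placeholders.

# Minimal retrieval-augmented sketch: embed a small "library", find the
# documents closest to a question, and supply them to the chat model as context.
# Assumes the OpenAI Python SDK (v1) and OPENAI_API_KEY set in the environment;
# the model names below are placeholder assumptions.
import math
from openai import OpenAI

client = OpenAI()

library = [
    "OLDaily is Stephen Downes's newsletter on online learning.",
    "Embeddings map text to vectors so similar passages sit close together.",
    "Retrieval-augmented generation supplies the model with documents at query time.",
]

def embed(texts):
    # Return one embedding vector per input string.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in resp.data]

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def answer(question, top_k=2):
    doc_vectors = embed(library)      # in practice, precompute and store these
    q_vector = embed([question])[0]
    ranked = sorted(zip(library, doc_vectors),
                    key=lambda pair: cosine(q_vector, pair[1]), reverse=True)
    context = "\n".join(doc for doc, _ in ranked[:top_k])
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content

print(answer("What is retrieval-augmented generation?"))

A real library would be chunked, embedded once, and stored in a vector database rather than re-embedded on every request, but the flow is the same.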



Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2025
Last Updated: Aug 28, 2025 9:09 p.m.

Creative Commons License.