Stephen Downes

Knowledge, Learning, Community

As Ben Dickson writes, ChatGPT and other LLMs are limited to their training data. That's why they make factual errors; they simply don't have the facts in the first place! The solution to this is 'embeddings' (and we'll see a lot more about this in the future). The idea is that you supplement ChatGPT with your own resource library; when a request comes in, the system retrieves the most relevant document (or documents) from your library and uses them to form a response. I haven't tried it yet, but this article provides complete instructions, meaning that a trial is in my near future.
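The retrieval step described above can be sketched in a few lines. A real system would use a learned embedding model (the article Dickson describes uses one via an API); here a toy bag-of-words vector stands in for the model so the sketch runs with no external service, but the mechanics of embed, compare, retrieve, and prompt are the same:

```python
# A minimal sketch of embedding-based retrieval ("RAG"), assuming a toy
# bag-of-words embedding in place of a real embedding model.
import math
from collections import Counter

def embed(text):
    # Toy embedding: L2-normalized word counts (stand-in for a model's vector).
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {w: c / norm for w, c in counts.items()}

def cosine(a, b):
    # Cosine similarity between two sparse vectors stored as dicts.
    return sum(v * b.get(w, 0.0) for w, v in a.items())

# Your own "resource library": documents embedded once, up front.
library = [
    "Embeddings map text to vectors so similar passages score close together.",
    "Large language models are trained on a fixed snapshot of data.",
    "Retrieval augmented generation supplies documents to the model at query time.",
]
doc_vectors = [embed(d) for d in library]

def retrieve(query, k=1):
    # Rank library documents by similarity to the query embedding.
    qv = embed(query)
    ranked = sorted(range(len(library)),
                    key=lambda i: cosine(qv, doc_vectors[i]), reverse=True)
    return [library[i] for i in ranked[:k]]

# The retrieved document is prepended to the prompt sent to the LLM.
context = retrieve("how does retrieval augmented generation work?")[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: ..."
```

The design point is that the library, not the model, holds the facts: the model only has to read the retrieved passage, which is why this approach reduces the factual errors the post describes.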



Stephen Downes, Casselman, Canada

Copyright 2024
Last Updated: Feb 26, 2024 06:44 a.m.

Creative Commons License.