I am always interested in affordances. That's why I'm with Simon Willison on this: "Every time I evaluate a new technology throughout my entire career I've had one question that I've wanted to answer: what can I build with this that I couldn't have built before?" Discovering things like Vosk, for example, an open source library with models that can run speech recognition on your desktop. This is, to my mind, the right way to approach something like Large Language Models (LLMs). Sure, there's a ton of things they can't do. The same is true of everything! But how do we measure what they can do? We need a common dimension, and Willison describes a new industry standard called 'vibes', as measured in the LMSYS Chatbot Arena. Willison also discusses 'openly licensed' models (or 'open weights'), as well as some neat tricks, like Retrieval Augmented Generation (RAG), that use 'wrapper code' to help LLMs (and in this way he makes sense of dangers like 'prompt injection'). "The key rule here is to never mix untrusted text—text from emails or that you've scraped from the web—with access to tools and access to private information." Also: the ChatGPT Code Interpreter. This is a brilliant talk, with stunning discoveries on almost every other slide.
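To make the 'wrapper code' idea concrete, here is a toy sketch of the RAG pattern: retrieve the most relevant snippet for a question, then assemble it into the prompt the wrapper would send to the model. The document store, the word-overlap scorer, and the prompt template are all hypothetical stand-ins, not anything from Willison's talk.

```python
# Toy Retrieval Augmented Generation (RAG) sketch.
# The wrapper retrieves relevant text, then builds an augmented
# prompt; a real system would send this prompt to an LLM.

DOCUMENTS = [
    "Vosk is an open source speech recognition toolkit that runs offline.",
    "The LMSYS Chatbot Arena ranks models by pairwise human preference votes.",
    "Prompt injection occurs when untrusted text overrides a model's instructions.",
]

def score(question: str, doc: str) -> int:
    """Crude relevance score: count words shared between question and doc."""
    return len(set(question.lower().split()) & set(doc.lower().split()))

def retrieve(question: str) -> str:
    """Pick the best-matching document from the store."""
    return max(DOCUMENTS, key=lambda d: score(question, d))

def build_prompt(question: str) -> str:
    """Assemble the augmented prompt the wrapper code would send to the LLM."""
    context = retrieve(question)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What is the Chatbot Arena?"))
```

Real systems replace the word-overlap scorer with embedding similarity over a vector index, but the shape is the same: the model only ever sees the prompt the wrapper built.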