The idea here is to blend augmented reality with large language models to produce a system that can make context-aware recommendations. The AR system "collects relevant data from their immediate surroundings, such as the presence of a fire extinguisher or an emergency exit and passes this on to the generative and multimodal language model... or so-called 'Multi-Modal Large Language Models'." The hope is that "future users of these new XR-systems will be able to interact seamlessly with their environment by using language models while having access to constantly updated global and domain-specific knowledge sources."
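The pipeline being described — AR layer detects nearby objects, detections get folded into the language model's context — can be sketched in a few lines. This is purely illustrative; the function name and prompt format are my own, not from the paper, and a real system would pass the result to a multimodal model rather than print it.

```python
# Hypothetical sketch of the AR-to-LLM handoff described above:
# scene detections become context in the prompt sent to the model.

def build_context_prompt(detected_objects, user_query):
    """Combine AR scene detections with the user's question."""
    context = ", ".join(detected_objects) or "nothing notable"
    return (
        f"Visible in the user's surroundings: {context}.\n"
        f"User asks: {user_query}\n"
        "Answer using the environmental context where relevant."
    )

prompt = build_context_prompt(
    ["fire extinguisher", "emergency exit"],
    "Where should I go in case of fire?",
)
print(prompt)
```

The point of the design is simply that the model never sees raw sensor data; the AR system distills the scene into symbols ("fire extinguisher", "emergency exit") that slot into an ordinary text prompt.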