From David Myers, Commugny, Switzerland
My experience with large language models (LLMs) is that questions about technical systems, such as Windows 11, produce fairly good answers, because the information comes from professionally produced documentation. Everything else is a mixed bag. The reason is evident and not easily fixable: if LLMs are trained on unfiltered data from the web, they must necessarily be unreliable, per the well-known dictum that rubbish in produces rubbish out.
