Miguel Afonso Caetano

#Google #AI #GenerativeAI #Hallucinations #RIG #RAG #FactChecking #Chatbots #LLMs:

"As long as chatbots have been around, they have made things up. Such “hallucinations” are an inherent part of how AI models work. However, they’re a big problem for companies betting big on AI, like Google, because they make the responses it generates unreliable.

Google is releasing a tool today to address the issue. Called DataGemma, it uses two methods to help large language models fact-check their responses against reliable data and cite their sources more transparently to users.

The first of the two methods is called Retrieval-Interleaved Generation (RIG), which acts as a sort of fact-checker. If a user prompts the model with a question—like “Has the use of renewable energy sources increased in the world?”—the model will come up with a “first draft” answer. Then RIG identifies what portions of the draft answer could be checked against Google’s Data Commons, a massive repository of data and statistics from reliable sources like the United Nations or the Centers for Disease Control and Prevention. Next, it runs those checks and replaces any incorrect original guesses with correct facts. It also cites its sources to the user.

The second method, which is commonly used in other large language models, is called Retrieval-Augmented Generation (RAG)."

https://www.technologyreview.com/2024/09/12/1103926/googles-new-tool-lets-large-language-models-fact-check-their-responses/
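A minimal sketch in Python of the RIG flow the article describes (draft an answer, flag checkable statistics, verify them against a trusted store, replace wrong figures, cite sources). Everything here is hypothetical: `draft_answer`, `extract_claims`, and the in-memory `DATA_COMMONS` dict stand in for the model, the claim detector, and Google's Data Commons; the article does not describe DataGemma's actual API.

```python
# Hypothetical sketch of Retrieval-Interleaved Generation (RIG) as described
# in the quoted article. All names, figures, and the "trusted store" below
# are illustrative stand-ins, not Google's DataGemma or Data Commons APIs.

from dataclasses import dataclass

# Stand-in for Data Commons: a tiny table of "trusted" statistics.
DATA_COMMONS = {
    "global renewable electricity share 2023": ("30%", "https://datacommons.org (illustrative)"),
}

@dataclass
class Claim:
    text: str   # the span of the draft that makes a checkable claim
    query: str  # how we would look it up in the trusted store
    value: str  # the figure the model guessed

def draft_answer(question: str) -> str:
    """Stand-in for the LLM's first-draft answer (with a deliberately wrong figure)."""
    return "Yes. Renewables supplied about 25% of global electricity in 2023."

def extract_claims(draft: str) -> list[Claim]:
    """Stand-in for the RIG step that flags checkable statistics in the draft."""
    return [Claim(text="about 25%",
                  query="global renewable electricity share 2023",
                  value="25%")]

def rig(question: str) -> str:
    draft = draft_answer(question)
    citations = []
    for claim in extract_claims(draft):
        hit = DATA_COMMONS.get(claim.query)
        if hit is None:
            continue  # nothing trusted to check against; leave the draft text as-is
        true_value, source = hit
        if claim.value != true_value:
            # Replace the model's guess with the retrieved figure.
            draft = draft.replace(claim.text, f"about {true_value}")
        citations.append(source)
    if citations:
        draft += "\n\nSources: " + "; ".join(citations)
    return draft

if __name__ == "__main__":
    print(rig("Has the use of renewable energy sources increased in the world?"))
```

RAG, the second method the article names, would instead retrieve relevant records before generation and condition the answer on them; the excerpt stops before describing that step, so it is not sketched here.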