Jack William Bell

I've been thinking about human interaction with #LLMs, especially with regard to the recent Replit thing. And I agree with this:

> https://social.vlhl.dev/objects/6df762e2-91a0-41c7-9821-fdbe3302af43

It is incredibly #stupid to keep using a tool that would do this kind of thing. But I'm also thinking about how weird the language used is, in the context of *what LLMs are*.

An LLM can DO something, but it cannot KNOW anything; for knowledge is more than a collection of weighted data.

Moreover?

[contd]

#AI #HumanCondition