
#llama


What is a Transformer? A visual guide.

The #Transformer is a #neural #network architecture that has fundamentally changed the field of Artificial Intelligence. First introduced in the seminal 2017 paper "Attention Is All You Need", it has since become the go-to architecture for deep learning models, powering text-generative models like #OpenAI's #GPT, #Meta's #Llama, and #Google's #Gemini. Beyond text, the Transformer is also applied to #audio generation, #image recognition, #protein structure prediction, and even #game playing, demonstrating its versatility across numerous domains.

poloclub.github.io/transformer

poloclub.github.io · Transformer Explainer: LLM Transformer Model Visually Explained
An interactive visualization tool showing you how transformer models work in large language models (LLM) like GPT.
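The core operation the paper introduces is scaled dot-product attention: each token's output is a weighted sum of all tokens' value vectors, with weights given by softmax(Q K^T / sqrt(d_k)). Below is a minimal NumPy sketch of that single operation; in a real Transformer, Q, K, and V come from learned linear projections and run across multiple heads, which this toy example omits.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, the core Transformer operation."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted sum of values

# Toy example: 3 tokens with 4-dimensional embeddings.
# In a real model Q, K, V are learned projections of the input;
# here the raw embeddings stand in for all three to keep the sketch small.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4): one context-aware vector per token
```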

It took a few hours for a noob like me, but I’m now running multiple self-hosted LLMs from my #Linux server including #DeepSeek, #Llama, and #Mistral.

Accessible remotely too.

I’m using Ngrok to access it remotely, but the free plan doesn’t have a fixed URL, which is a pain. I’m open to better alternatives if anyone has suggestions. #AI
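The post doesn’t name the serving stack, but a common way to self-host DeepSeek, Llama, and Mistral models is Ollama, which exposes an HTTP API on port 11434. Here is a minimal Python sketch under that assumption; the base URL and model name are placeholders, not details from the post:

```python
import requests

# Assumption: the models are served by Ollama on its default port.
BASE_URL = "http://localhost:11434"  # swap in the ngrok URL for remote access
MODEL = "llama3"                     # or "mistral", "deepseek-r1", etc.

resp = requests.post(
    f"{BASE_URL}/api/generate",
    json={"model": MODEL, "prompt": "Why is the sky blue?", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```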

Meta Platforms Inc. won a key U.S. court case allowing the use of books for AI model training under fair use, a decision with limited scope that intensifies the debate over copyright in artificial intelligence development.
#YonhapInfomax #MetaPlatforms #CopyrightLawsuit #AITraining #Llama #FairUse #Economics #FinancialMarkets #Banking #Securities #Bonds #StockMarket
en.infomaxai.com/news/articleV

Yonhap Infomax · Meta Wins Copyright Lawsuit Over Use of Books for AI Training

"LLM users also struggled to accurately quote their own work. […] Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels."

arxiv.org/abs/2506.08872

arXiv.org · Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task
This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

"LLM users also struggled to accurately quote their own work. […] Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels."

arxiv.org/abs/2506.08872

arXiv.orgYour Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing TaskThis study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.