#modelcollapse

2 posts · 2 participants · 0 posts today

New #review today: "Or you could just listen to #AncientPsychicTripleHyperOctopus and find yourself in a sound-world of weird electronics, percussion, and trumpet that floats along without rhyme or reason, but manifests as a fascinating journey. The perpetrators of this experiment are #AlexBonney (trumpet, bass recorder, Strohviol), #WillGlaser (drums, percussion), and #IsambardKhroustaliov (aka #SamBritton, electronics)." #ExposeOnline #ExperimentalMusic #ModelCollapse expose.org/index.php/articles/

👾 Model collapse: when AI systems trained on synthetic content begin degrading like photocopies of photocopies. Recent experiments show recursive training leads to semantic drift and loss of signal—errors accumulate slowly, precision decays steadily.

⚠️ The real risk isn't technical fragility but institutional misalignment. When knowledge production prioritizes scale over truth, collapse becomes a symptom of concentrated power.
#AIResearch #ModelCollapse

🔗 open.substack.com/pub/massimof

open.substack.com · How AI Is Transforming Our Search for Information
Exploring the evolving landscape of information discovery in the age of artificial intelligence.
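
A toy numerical sketch of the photocopy effect described in the post above (my illustration, assuming only numpy; none of it comes from the linked article): fit a one-dimensional Gaussian to a small sample drawn from the previous generation's fit, over and over. The fitted spread drifts toward zero, which is the "tails disappear" failure in miniature.

# Toy sketch (not from the linked post): "train" a 1-D Gaussian on
# finite samples of its own output. Estimation error compounds across
# generations and the fitted spread drifts toward zero, so the
# distribution's tails vanish.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0        # generation 0: the "human" data distribution
n = 20                      # small finite corpus per generation

for gen in range(1, 101):
    corpus = rng.normal(mu, sigma, size=n)   # synthetic data from the current model
    mu, sigma = corpus.mean(), corpus.std()  # refit on that synthetic corpus
    if gen % 20 == 0:
        print(f"gen {gen:3d}: mu={mu:+.3f}  sigma={sigma:.3f}")

The small per-generation corpus (n = 20) is chosen to make the decay visible within a hundred generations; larger corpora collapse more slowly, but the downward drift in sigma is the same mechanism.
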
Replied in thread
arXiv.org · The Curse of Recursion: Training on Generated Data Makes Models Forget
Stable Diffusion revolutionised image creation from descriptive text. GPT-2, GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of language tasks. ChatGPT introduced such language models to the general public. It is now clear that large language models (LLMs) are here to stay, and will bring about drastic change in the whole ecosystem of online text and images. In this paper we consider what the future might hold. What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs. We build theoretical intuition behind the phenomenon and portray its ubiquity amongst all learned generative models. We demonstrate that it has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of content generated by LLMs in data crawled from the Internet.

#AI #GenerativeAI #LLMs #ModelCollapse #SyntheticData: "Stable diffusion revolutionized image creation from descriptive text. GPT-2 (ref. 1), GPT-3(.5) (ref. 2) and GPT-4 (ref. 3) demonstrated high performance across a variety of language tasks. ChatGPT introduced such language models to the public. It is now clear that generative artificial intelligence (AI) such as large language models (LLMs) is here to stay and will substantially change the ecosystem of online text and images. Here we consider what may happen to GPT-{n} once LLMs contribute much of the text found online. We find that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear. We refer to this effect as ‘model collapse’ and show that it can occur in LLMs as well as in variational autoencoders (VAEs) and Gaussian mixture models (GMMs). We build theoretical intuition behind the phenomenon and portray its ubiquity among all learned generative models. We demonstrate that it must be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of LLM-generated content in data crawled from the Internet."

nature.com/articles/s41586-024

Nature · AI models collapse when trained on recursively generated data
Analysis shows that indiscriminately training generative artificial intelligence on real and generated content, usually done by scraping data from the Internet, can lead to a collapse in the ability of the models to generate diverse high-quality output.

#GenerativeAI is saturating the #internet with algorithm-generated content, leading to #ModelCollapse over time.

Model collapse occurs when #AI trains on its own outputs, resulting in declining quality and diversity of data.

#GenerativeAI's "model collapse" will cause it to poison itself, here's what that means
xda-developers.com/generative-

XDA Developers · Generative AI's "model collapse" will cause it to poison itself, here's what that means
Model collapse is an impending issue when it comes to AI, and here's what it means.
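
The same mechanism is easy to see on a discrete vocabulary. A hedged sketch (mine, not taken from the XDA piece): re-estimate a long-tailed token distribution from finite samples of itself. Once a rare token draws zero counts, its probability is zero in every later generation, so diversity only ever shrinks.

# Hypothetical sketch of tail loss on a Zipf-like vocabulary; the
# vocabulary size and corpus size are arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(1)
vocab = 1_000
probs = 1.0 / np.arange(1, vocab + 1)       # long-tailed "human" token frequencies
probs /= probs.sum()

for gen in range(1, 11):
    counts = rng.multinomial(5_000, probs)  # finite synthetic corpus
    probs = counts / counts.sum()           # retrain: maximum-likelihood re-estimate
    print(f"gen {gen:2d}: surviving tokens = {int((probs > 0).sum())}")
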
Replied in thread

@dragfyre

What makes that inevitable? I'm not disputing the effect you describe. Model collapse makes sense.

But developers are rational actors. I think the last year has already seen considerable movement towards more curated data sets. (After finding CSAM in LAION, for example.)

Foundational models like GPT-4 may have needed to consume everything to get where they are, but that's not necessarily true of models broadly.

SHOT:

“Against the threat of model collapse, what is a hapless machine-learning engineer to do? The answer could be the equivalent of prewar steel in a Geiger counter: data known to be free (or perhaps as free as possible) from generative AI’s touch.”

scientificamerican.com/article

CHASER:

#PreWarSteel is the equivalent of clean, human-generated data.

superversive.co/blog/synthetic

Scientific American · AI-Generated Data Can Poison Future AI Models
By Rahul Rao
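
To make the "prewar steel" analogy concrete in the toy setting from earlier in this thread (my sketch; the 50/50 blend is an assumption, not anything the articles specify): anchor every generation's training mix with a fixed pool of known-human data, and the recursive fit no longer collapses.

# Sketch of the hedge: blend a fixed "low-background" human corpus into
# every generation's training data. The human half bounds the drift that
# the pure-synthetic loop suffers from.
import numpy as np

rng = np.random.default_rng(2)
prewar = rng.normal(0.0, 1.0, size=10_000)  # fixed pool of pre-AI human data
mu, sigma = 0.0, 1.0
n = 20                                      # corpus size per generation

for gen in range(1, 101):
    synthetic = rng.normal(mu, sigma, size=n // 2)
    human = rng.choice(prewar, size=n // 2, replace=False)
    mix = np.concatenate([synthetic, human])
    mu, sigma = mix.mean(), mix.std()

print(f"after 100 generations: mu={mu:+.3f}  sigma={sigma:.3f}")
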