toad.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
Mastodon server operated by David Troy, a tech pioneer and investigative journalist addressing threats to democracy. Thoughtful participation and discussion welcome.

#AGI

14 posts · 14 participants · 0 posts today

"OpenAI’s dueling cultures—the ambition to safely develop AGI, and the desire to grow a massive user base through new product launches—would explode toward the end of 2023. Gravely concerned about the direction Altman was taking the company, Sutskever would approach his fellow board of directors, along with his colleague Mira Murati, then OpenAI’s chief technology officer; the board would subsequently conclude the need to push the CEO out. What happened next—with Altman’s ouster and then reinstatement—rocked the tech industry. Yet since then, OpenAI and Sam Altman have become more central to world affairs. Last week, the company unveiled an “OpenAI for Countries” initiative that would allow OpenAI to play a key role in developing AI infrastructure outside of the United States. And Altman has become an ally to the Trump administration, appearing, for example, at an event with Saudi officials this week and onstage with the president in January to announce a $500 billion AI-computing-infrastructure project.

Altman’s brief ouster—and his ability to return and consolidate power—is now crucial history to understand the company’s position at this pivotal moment for the future of AI development.

Details have been missing from previous reporting on this incident, including information that sheds light on Sutskever and Murati’s thinking and the response from the rank and file. Here, they are presented for the first time, according to accounts from more than a dozen people who were either directly involved or close to the people directly involved, as well as their contemporaneous notes, plus screenshots of Slack messages, emails, audio recordings, and other corroborating evidence.

The altruistic OpenAI is gone, if it ever existed. What future is the company building now?"

theatlantic.com/technology/arc

The Atlantic · What Really Happened When OpenAI Turned on Sam Altman · By Karen Hao
🔥 Just watched a powerful segment from this #SamAltman interview → [🔗](https://youtu.be/ctcMA6chfDY?feature=shared&t=891)

Altman drops a striking insight: young people aren’t just *using* #AI tools — they’re *living* with them. AI isn’t just a productivity hack anymore; it’s evolving into a **life operating system**.

> 🧠 The shift:
> - Boomers = Google++
> - Gen Z & Millennials = **context-aware co-pilot for life**
>
> Think:
> - A coach who *remembers everything*
> - A sounding board for daily decisions
> - A strategist for career, relationships, and self-improvement

📱 The smartphone was the digital limb.
🤖 AI is becoming the **digital mind**.

#ChatGPT #AGI #DigitalAssistant #FutureOfWork #AIUsage #Nostr #TechCulture

@PixelJones so I'm an AI-researching grad student with a huge interest in "AI alignment" and this is a massive red flag to me

#AGI could very possibly be built within 10 years. And at that point, if there aren't any regulations already in place, it's too late


we'd essentially be *hoping* the billionaires in charge are making their AGI model as fair and impartial as possible... which is not likely at all lol

so yeah, this would be bad :/

I breezed past a post on LinkedIn last week that named some categories of #AI research that were more likely to produce #AGI than #LLM tech

I want to look into some of those categories, but I can't find the post

I remember that the writer was contrasting these with the massive predictive statistics foundation of LLMs

Anyone have any insight into what fields and technologies this post might have been talking about?

#AI is really developing rapidly. In 1970, the forecast was that we would achieve #AGI (Artificial General Intelligence) in 3-8 years. Now, 55 years later, it's down to only 3-5. In another 55, maybe it will be only 3. If progress is linear (doubtful), arrival of AGI in 2135?
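For what it's worth, the extrapolation can be sketched in a few lines. This takes the midpoints of the two quoted forecast windows as data points and solves for when the horizon would hit zero; everything beyond the 1970 and 2025 figures is assumption, and this particular reading lands even later than the 2135 guess above.

```python
# Fit a line to the midpoint of each AGI-forecast window and solve for
# when the forecast horizon shrinks to zero. Data points from the post;
# using midpoints (rather than bounds) is an illustrative choice.
years = [1970, 2025]
horizons = [(3 + 8) / 2, (3 + 5) / 2]  # midpoints of "3-8" and "3-5" years

# Slope and intercept of horizon(year) = m * year + b
m = (horizons[1] - horizons[0]) / (years[1] - years[0])
b = horizons[0] - m * years[0]

# Year at which the linear trend predicts a zero-year horizon
zero_year = -b / m
print(round(zero_year))  # → 2172
```

Picking the upper bounds (8 → 5) instead of midpoints pulls the date earlier, which is one way to land nearer 2135; either way the joke stands.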

These forecasts for AGI resemble the #cicada broods coming out periodically after years in the dark.

See following quotes.

@rdviii

If true, #hallucinations cast serious doubt on whether the end goal of #AGI can be achieved with today’s #LLM architectures and training methods.

While ongoing research explores #RAG and hybrid models and inference techniques, no implementation to date has fully eliminated flawed reasoning.

What consumer would trust mission-critical decisions if an AGI is known to confidently state falsehoods?

newscientist.com/article/24795

New Scientist · AI hallucinations are getting worse – and they're here to stay · By Jeremy Hsu

Within three years we could see the birth of a synthetic mind capable of thinking, deciding, creating. What happens when AI stops "answering" and starts "proposing"? Who leads, who follows, who disappears? This article doesn't offer comfortable answers. But if the idea of looking beyond excites you, it's worth reading now.

#ai #agi #superintelligenza #Futuro #previsioni

futuroprossimo.it/2025/05/supe