#aibubble

"[I]t appears that SoftBank may not be able to — or want to — proceed with any of these initiatives other than funding OpenAI's current round, and evidence suggests that even if it intends to, SoftBank may not be able to afford investing in OpenAI further.

I believe that SoftBank and OpenAI's relationship is an elaborate ruse, one created to give SoftBank the appearance of innovation, and OpenAI the appearance of a long-term partnership with a major financial institution that, from my research, is incapable of meeting the commitments it has made.

In simpler terms, OpenAI and SoftBank are bullshitting everyone.

I can find no tangible proof that SoftBank ever intended to seriously invest money in Stargate, and have evidence from its earnings calls that suggests SoftBank has no idea — or real strategy — behind its supposed $3-billion-a-year deployment of OpenAI software.

In fact, other than the $7.5 billion that SoftBank invested earlier in the year, I don't see a single dollar actually earmarked for anything to do with OpenAI at all.

SoftBank is allegedly going to send upwards of $20 billion to OpenAI by December 31 2025, and doesn't appear to have started any of the processes necessary to do so, or shown any signs it will. This is not a good situation for anybody involved."

wheresyoured.at/softbank-opena

Ed Zitron's Where's Your Ed At · Is SoftBank Still Backing OpenAI?

@SceNtriC @WildaSoftware @maciek33 I don't like #ETFs like the MSCI All Country World Index. It consists of 60% #USA, 30% #bigTech, and 21% from just 4 companies: #amazon, #apple, #meta, #microsoft

This means that if you buy the MSCI ACWI, you make #JeffBezos richer. He doesn't earn a salary like a normal person; instead he makes money by selling his #Amazon shares. Every single investment of 1€ increases the value of Amazon by 4€.

One of the big problems with most ETFs is the weighting by market capitalization. This favours big stocks and monopolies. Additionally, due to the passive nature of ETFs, this increases volatility and makes bubbles bigger and more likely to happen #AIbubble
Another thing to mention: the ETF provider exercises the voting rights at shareholders' meetings. This gives companies like #blackrock huge power.
See also homemadefinance.de/etf-nachtei

[1/2]

Homemade Finance · 11 ETF-Nachteile die du kennen musst (Wichtig!) ["11 ETF drawbacks you need to know (Important!)"]
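The concentration complaint in the post above is mechanical: a cap-weighted index allocates in proportion to market value, so a handful of mega-caps can dominate it. A minimal sketch with made-up market caps (illustrative placeholders, not real MSCI ACWI data):

```python
# Sketch: how market-cap weighting concentrates an index in its largest
# constituents. All names and figures are hypothetical.
caps = {
    "MegaTech A": 3000, "MegaTech B": 2800, "MegaTech C": 2500,
    "MegaTech D": 1700, "Mid-cap": 300, "Small-cap": 50,
}
total = sum(caps.values())
weights = {name: cap / total for name, cap in caps.items()}

# Weight carried by the four largest holdings alone
top4 = sum(sorted(weights.values(), reverse=True)[:4])
print(f"Top 4 of {len(caps)} holdings carry {top4:.0%} of the index")
```

Every euro of inflow is distributed by these weights, which is why passive flows keep reinforcing whatever is already biggest.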

"- OpenAI and Anthropic both lose billions of dollars a year after revenue, and their stories do not mirror any other startup in history, not Uber, not Amazon Web Services, nothing. I address the Uber point in this article.

- SoftBank is putting itself in dire straits simply to fund OpenAI once. This deal threatens its credit rating, with SoftBank having to take on what will be multiple loans to fund the remaining $30 billion of OpenAI's $40 billion round, which has yet to close and OpenAI is, in fact, still raising.

- This is before you consider the other $19 billion that SoftBank has agreed to contribute to the Stargate data center project, money that it does not currently have available.

- OpenAI has promised $19 billion to the Stargate data center project, money it does not have and cannot get without SoftBank's funds.

- Again, neither SoftBank nor OpenAI has the money for Stargate right now.

- OpenAI must convert to a for-profit by the end of 2025, or it loses $20 billion of the remaining $30 billion of funding. If it does not convert by October 2026, its current funding converts to debt. It is demanding remarkable, unreasonable concessions from Microsoft, which is refusing to budge and is willing to walk away from the negotiations necessary to convert.

- OpenAI does not have a path to profitability, and its future, like Anthropic's, is dependent on a continual flow of capital from venture capitalists and big tech, who must also continue to expand infrastructure.

Anthropic is in a similar, but slightly better position — it is set to lose $3 billion this year on $4 billion of revenue. It also has no path to profitability, recently jacked up prices on Cursor, its largest customer, and had to put restraints on Claude Code after allowing users to burn 100% to 10,000% of their revenue. These are the actions of a desperate company."

wheresyoured.at/the-haters-gui

Ed Zitron's Where's Your Ed At · The Hater's Guide To The AI Bubble

On reflection, I think the big mistake is the conflation of #AI with #LLM and #MachineLearning.
There are genuinely exciting advances in ML with applications all over the place: in science (not least in my own research group, looking at high-resolution regional climate downscaling), health diagnostics, defence, etc. But these are not the AIs that journalists are talking about, nor are they really related to LLMs.
They're still good uses of GPUs and will probably produce economic benefits, but probably not the multi-trillion-dollar ones the pundits seem to be expecting

fediscience.org/@Ruth_Mottram/
Ruth_Mottram - My main problem with @edzitron.com 's piece on the #AIbubble is that I agree with so much of it.
I'm now wondering if I've missed something about #LLMs? The numbers and implications for stock markets are terrifyingly huge!

wheresyoured.at/the-haters-gui


"Anthropic is very likely losing money on every single Claude Code customer, and based on my analysis, appears to be losing hundreds or even thousands of dollars per customer.

There is a gaping wound in the side of Anthropic, and it threatens financial doom for the company.

Some caveats before we continue:

- CCusage is not direct information from Anthropic, and thus there may be things we don’t know about how it charges customers, or any means of efficiency it may have.
- Despite the amount of evidence I’ve found, we do not have a representative sample of exact pricing. This evidence comes from people who use Claude Code, are measuring their usage, and elected to post their CCusage dashboards online — which likely represents a small sample of the total user base.
- Nevertheless, the amount of cases I’ve found online of egregious, unrelentingly unprofitable burn are deeply concerning, and it’s hard to imagine that these examples are outliers.
- We do not know if the current, unrestricted version of Claude Code will last.

The reason I’m leading with these caveats is because the numbers I’ve found about the sheer amount of money Claude Code’s users are burning are absolutely shocking.

In the event that they are representative of the greater picture of Anthropic’s customer base, this company is wilfully burning 200% to 3000% of the revenue from each Pro or Max customer that interacts with Claude Code, and at each price point I have found repeated evidence that customers are allowed to burn their entire monthly payment in compute within, at best, eight days, with some cases involving customers on a $200-a-month subscription burning as much as $10,000 worth of compute."

wheresyoured.at/anthropic-is-b

Ed Zitron's Where's Your Ed At · Anthropic Is Bleeding Out
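The burn percentages in the quote are simple ratio arithmetic: compute consumed divided by what the subscriber pays. A minimal sketch, using the $200-a-month subscriber burning $10,000 of compute mentioned in the post (the helper function is mine, not Zitron's methodology):

```python
# Sketch of the burn-ratio arithmetic behind the quoted figures.
def burn_pct(monthly_fee: float, compute_cost: float) -> float:
    """Compute consumed, expressed as a percentage of the monthly fee."""
    return compute_cost / monthly_fee * 100

# A $200/month subscriber burning $10,000 of compute, as in the post:
print(f"{burn_pct(200, 10_000):.0f}%")  # 5000% of the subscription price
```

Anything over 100% means the customer costs more to serve than they pay, before any other expenses.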

"The intoxicating buzz around artificial intelligence stocks over the last few years looks concerningly like the dot-com bubble, top investor Richard Bernstein warns.

The CIO at $15 billion Richard Bernstein Advisors wrote in a June 30 post that the AI trade is starting to look rich, and that it may be time for investors to turn their attention toward a more "boring" corner of the market: dividend stocks.

"Investors seem universally focused on 'AI' which seems eerily similar to the '.com' stocks of the Technology Bubble and the 'tronics' craze of the 1960s. Meanwhile, we see lots of attractive, admittedly boring, dividend-paying themes," Bernstein wrote.

Since ChatGPT hit the market in November 2022, the S&P 500 and Nasdaq 100 have risen 54% and 90%, respectively. Valuations, by some measures, have surged back toward record highs, rivaling levels seen during the dot-com bubble and the 1929 peak.

While Bernstein said he's not calling a top, trades eventually go the other way, and the best time to invest in something is when it's out of favor — not when a major rally has already occurred."

businessinsider.com/stock-mark

Business Insider · AI stocks look 'eerily similar' to the dot-com craze, CIO warns · By William Edwards

"In May, researchers at Carnegie Mellon University released a paper showing that even the best-performing AI agent, Google's Gemini 2.5 Pro, failed to complete real-world office tasks 70 percent of the time. Factoring in partially completed tasks — which included work like responding to colleagues, web browsing, and coding — only brought Gemini's failure rate down to 61.7 percent.

And the vast majority of its competing agents did substantially worse.

OpenAI's GPT-4o, for example, had a failure rate of 91.4 percent, while Meta's Llama-3.1-405b had a failure rate of 92.6 percent. Amazon's Nova-Pro-v1 failed a ludicrous 98.3 percent of its office tasks.

Meanwhile, a recent report by Gartner, a tech consultant firm, predicts that over 40 percent of AI agent projects initiated by businesses will be cancelled by 2027 thanks to out-of-control costs, vague business value, and unpredictable security risks.

"Most agentic AI projects right now are early stage experiments or proof of concepts that are mostly driven by hype and are often misapplied," said Anushree Verma, a senior director analyst at Gartner.

The report notes an epidemic of "agent washing," where existing products are rebranded as AI agents to cash in on the current tech hype. Examples include Apple's "Intelligence" feature on the iPhone 16, which it currently faces a class action lawsuit over, and investment firm Delphia's fake "AI financial analyst," for which it faced a $225,000 fine.

Out of thousands of AI agents said to be deployed in businesses throughout the globe, Gartner estimated that "only about 130" are real."

futurism.com/ai-agents-failing

Futurism · The Percentage of Tasks AI Agents Are Currently Failing At May Spell Trouble for the Industry · By Joe Wilkins
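The gap between Gemini's 70 percent and 61.7 percent figures comes from scoring partially completed tasks fractionally instead of pass/fail. A sketch of that arithmetic; the ~11.9% average partial credit on failed tasks is my assumption, chosen to reproduce the quoted numbers, not a value from the paper:

```python
# Sketch: how partial completions lower a headline failure rate.
def failure_rate(scores):
    """scores: per-task completion fractions in [0, 1].
    Failure rate = 1 - mean completion."""
    return 1 - sum(scores) / len(scores)

# 100 tasks: 30 fully done, 70 failed outright -> 70% failure
binary = [1.0] * 30 + [0.0] * 70
# same tasks, but the failed ones earn ~11.9% partial credit on average
partial = [1.0] * 30 + [0.119] * 70
print(f"{failure_rate(binary):.1%}, {failure_rate(partial):.1%}")
```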

"I think this highlights a few interesting trends.

Firstly, the era of VC-subsidized tokens may be coming to an end, especially for products like Cursor which are way past demonstrating product-market fit.

Secondly, that $200/month plan for 20x the usage of the $20/month plan is an emerging pattern: Anthropic offers the exact same deal for Claude Code, with the same 10x price for 20x usage multiplier.

Professional software engineers may be able to justify one $200/month subscription, but I expect most will be unable to justify two. The pricing here becomes a significant form of lock-in - once you've picked your $200/month coding assistant you are less likely to evaluate the alternatives."

simonwillison.net/2025/Jul/5/c

Simon Willison’s Weblog · Cursor: Clarifying Our Pricing
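The 10x-price-for-20x-usage pattern Willison describes works out to half the effective unit price at the higher tier. A quick sketch, treating the $20 plan's allowance as one abstract usage unit (an assumption for illustration):

```python
# Sketch of the per-unit pricing pattern: 10x the price buys 20x the usage.
pro_price, pro_units = 20, 1        # $20/month baseline allowance
ultra_price, ultra_units = 200, 20  # $200/month, "20x more usage"

pro_unit_cost = pro_price / pro_units        # dollars per usage unit
ultra_unit_cost = ultra_price / ultra_units  # dollars per usage unit
print(ultra_unit_cost / pro_unit_cost)  # 0.5 -> half the marginal price
```

The discount rewards committing to one vendor's $200 tier, which is exactly the lock-in dynamic the post describes.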

The full #Anthropic post on their experience with #Claude running a vending machine is chock full of amusingly crazy behaviors.

However, this paragraph near the end exemplifies how we are stuck in an #AIBubble, because once again someone says "we experimented with having AI do a task, and it completely failed, but we will continue to believe that AI will be great at this task because surely someone will figure out how to get AI to do this task."

anthropic.com/research/project

"In a study performed with AI Detector Pro, Dandage’s team measured what percentage of the internet is AI-generated and found that an astonishing share of new content online, as much as a quarter, is AI-generated. So if the content available is increasingly produced by AI and is sucked back into AI models for further outputs without checks on accuracy, it becomes an infinite source of bad data continually being reborn onto the web.

And Binny Gill, the CEO of Kognitos and an expert on enterprise LLMs, believes the lapses in factual responses are more of a human issue than an AI one. “If we build machines inspired by the entire internet, we will get the average human behavior for the most part with sparks of genius once in a while. And by doing that, it is doing exactly what the data set trained it to do. There should be no surprise.”

Gill went on to add that humans built computers to perform logic that average humans find difficult or too time-consuming to do, but that “logic gates” are still needed. “Captain Kirk, no matter how smart, will not become Spock. It isn’t smartness, it is the brain architecture. We all want computers to be like Spock,” Gill said. He believes in order to fix this program, neuro-symbolic AI architecture (a field that combines the strengths of neural networks and symbolic AI-logic-based systems) is needed.

“So, it isn’t any kind of ‘cognitive decline’; that assumes it was smart to begin with,” Gill said. “This is the disillusionment after the hype. There is still a long way to go, but nothing will replace a plain old calculator or computer. Dumbness is so underrated.”

And that “dumbness” might become more and more of an issue if dependence on AI models grows without any sort of human reasoning or intelligence to discern false claims from real ones."

huffpost.com/entry/ai-hallucin

"The negative response online is indicative of a larger trend: Right now, though a growing number of Americans use ChatGPT, many people are sick of AI’s encroachment into their lives and are ready to fight back.

When reached for comment, Duolingo spokesperson Sam Dalsimer stressed that “AI isn’t replacing our staff” and said all AI-generated content on the platform would be created “under the direction and guidance of our learning experts.” The company's plan is still to reduce its use of non-staff contractors for tasks that can be automated using generative AI.

Duolingo’s embrace of workplace automation is part of a broad shift within the tech industry. Leaders at Klarna, a buy now, pay later service, and Salesforce, a software company, have also made sweeping statements about AI reducing the need for new hires in roles like customer service and engineering. These decisions were being made at the same time as developers sold “agents,” which are designed to automate software tasks, as a way to reduce the amount of workers needed to complete certain tasks.

Still, the potential threat of bosses attempting to replace human workers with AI agents is just one of many compounding reasons people are critical of generative AI. Add that to the error-ridden outputs, the environmental damage, the potential mental health impacts for users, and the concerns about copyright violations when AI tools are trained on existing works."

wired.com/story/generative-ai-

WIRED · The AI Backlash Keeps Growing Stronger · By Reece Rogers