
sometimes, I think of ponies

Have you ever noticed that every projection about “AGI” and “superintelligence” has an “and then a miracle occurs” step?

I have.

I shouldn’t say every projection – there are many out there, and I haven’t seen them all. But every one I’ve personally seen has this step. Somewhere, sometime, fairly soon, generative AI will create something that triggers a quantum leap in capability. What will it be? NOTHING MERE HUMANS CAN UNDERSTAND! Oh, sometimes they’ll make up something – a new kind of transistor, a new encoding language (like sure, that’ll do it), whatever. Sometimes they just don’t say. Whatever it is, it happens, and then we’re off to the hyperintelligent AGI post-singularity times.

But the thing is … the thing is … for Generative AI to create a Magic Something that Changes Everything – to have this miracle – you have to already have hyperintelligent AGI. Since you don’t… well…

…that’s why it’s a miracle. Whether they realise it or not.

I’m not sure which is worse – that they do realise it, and know they’re bullshitting billions of dollars away from productive society to build up impossible wealth before the climate change they’re helping make worse fucks everything so they can live like feudal kings from their bunkers, or whether they don’t, and are spirit dancing, wanking off technofappic dreams of creating a God who will save the world with its AI magic, a short-term longtermism, burning away the rest of the carbon budget in a Hail Mary that absolutely will not connect.

Both possibilities are equally batshit insane, I know that much. To paraphrase a friend who knows far more about the maths of this than I, all the generative AI “compute” in the universe isn’t going to find fast solutions to PSPACE-hard problems. It’s just not.
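To put rough numbers on that point (a back-of-the-envelope sketch of my own, assuming, as is widely believed but unproven, that nothing fundamentally faster than exponential time exists for PSPACE-hard problems):

```latex
% Sketch: if the best known algorithm for a PSPACE-hard problem takes
% T(n) = 2^{c n} steps for some constant c > 0, then a k-fold increase in
% total compute only moves the largest solvable instance from n to n':
\[
  2^{c n'} = k \cdot 2^{c n}
  \quad\Longrightarrow\quad
  n' = n + \frac{\log_2 k}{c}
\]
% Even a trillion-fold increase in compute (k = 10^{12} \approx 2^{40}) buys
% only an additive ~40/c on the instance size. More hardware never turns
% exponential into fast.
```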

And so, sometimes, sometimes, sometimes, I think of…

…I think of putting a short reading/watching list out there, a list that I hesitate to put together in public, because the “what the actual fuck” energies are so strong – so strong – that I can’t see how anyone could take it seriously. And yet…

…so much of the AI fantasia happening right now is summed up by three entirely accessible works.

Every AI-fantasia idea, particularly the ideas most on the batshit side…

…they’re all right here. And it’s all fiction. All of it. Some of it is science-shaped; none of it is science.

But Alice, you know, we’re all mad here. So… why not.

Let’s go.

1: Colossus: The Forbin Project (1970)

This is the “bad end” you see so much in “projections” about AI progression. A new one of these timelines just dropped – they have a whole website you can play with. I’m not linking to it because why would I, holy shit, I don’t need to spread their crazy. But there’s a point in the timeline/story that they have you read – I think it’s in 2027 – when you can make a critical choice. It’s literally a one-selection choose-your-own-path adventure!

The “good” choice takes you to galactic civilisation managed by friendly hyperintelligent AGI.

The “bad” choice is literally the plot of The Forbin Project with an even grimmer ending. No, really. The beats are very much the same. It’s just The Forbin Project with more death.

Well. And a bioweapon. Nukes are so messy, and affect so much more than mere flesh.

2: Blindsight, by Peter Watts (2006)

This rather interesting – if bleak – novel presents a model of cognition which lays out an intriguing thought experiment, even if it … did not sit well with what I freely admit is my severely limited understanding of cognition.

(It doesn’t help that it directly contradicts known facts about the cognition of self-awareness in various animals, and did so even when it was published. That doesn’t make it a worse thought experiment, however. Or a worse novel.)

It got shortlisted – deservedly – for a bunch of awards. But that’s not why it’s here. It’s here because its model of cognition is functionally the one used by those who think generative AI and LLMs can be hyperintelligent – or even functionally intelligent at all.

And it’s wrong. As a model, it’s just wrong.

Finally, we get to the “what.” entry:

3: Friendship is Optimal, by Iceman (2012)

Friendship is Optimal is obviously the most obscure of these works, but also, I think maybe the most important. It made a big splash in MLP fandom, before landing like an absolute hand grenade in the nascent generative AI community when it broke containment. Maybe not in all of that latter community – but certainly in the parts of which I was aware. So much so, in fact, that it made waves even beyond that – which is when I heard of it, and how I read it.

And yes… it’s My Little Pony fanfic.

Sorta.

It’s that, but really it’s more an explicit AI takeoff story, one which is absolutely about creating a benevolent hyperintelligent Goddess AI construct who can, will, and does remake the world, destroying the old one behind her.

Sound familiar?

These three works include every idea behind every crazy line of thought I’ve seen out of the Silicon Valley AI crowd. These three works right here. A novel or a movie (take your choice, the movie’s quite good, I understand the novel is as well), a second novel, and a frankly remarkable piece of fanfic.

For Musk’s crowd in particular? It’s all about the model presented in Friendship is Optimal, except, you know, totally white supremacist. They’re even kinda following the Hofvarpnir Studios playbook from the story, but with less “licensed property game” and a lot more “Billionaire corporate fascism means you don’t have to pay employees anymore, you can just take all the money yourself.”

…which is not the kind of sentence I ever thought I’d write, but here we are.

You can see why I’m hesitant to publish this reading list, but I also hope you can see why I want to.

If you read Friendship is Optimal, and then go look at longtermism… I think you definitely will.

So what’re we left with, then?

Some parts of this technology are actually useful. Some of it. Much less than supports the valuations, but there’s real use here. If you have 100,000 untagged, undescribed images and AI analysis gives 90% of them reasonable descriptions, that’s a substantial value add. Some of the production tools are good – some of them are very good, or will be, once it stops being obvious that “oh look, you’ve used AI tools on this.” Some of the medical imaging and diagnostic tools show real promise – though it’s always important to keep in mind that antique technologies like “Expert Systems” seemed just as promising, in the lab.

Regardless, there’s real value to be found in those sorts of applications. These tasks are where it can do good. There are many more than I’ve listed, of course.

But AGI? Hyperintelligence? The underlying core of this boom, the one that says you won’t have to employ anyone anymore, just rake in the money and live like kings?

That entire project is either:

A knowing mass fraud, inflating a bubble the likes of which nobody’s seen in a century – one that, instead of breaking a monetary system, might well finish off any hopes for a stable climate in an Enron-like insertion of AI-generated noise followed by AI-generated summarisation of that noise, which no one reads and serves no purpose and adds no value but costs oh, oh so very much electricity and oh, oh, oh so very much money;

A power play unlike anything since the fall of the western Roman empire, where the Church functionally substituted itself in parallel to, and in place of, the Roman government to the point that the latter finally collapsed, all in service of setting up a God’s Kingdom on Earth to bring back Jesus – only in this case, it’s setting up the techbro billionaires as a new nobility, manipulating the hoi polloi from above with propaganda and disinformation sifted through their “AI” interlocutors;

Or an absolute psychotic break by said billionaires and fellow travellers so utterly unwilling and utterly unable to deal with the realities of climate change that they’ll do anything – anything – to pretend they don’t have to, including burning down the world in the service of somehow provoking a miracle that transcends maths and physics in the hope that some day, some way, before it’s too late, their God AI will emerge and make sure everything ends up better… in the long term.

Maybe, even, it’s a mix of all three.

And here I thought my reading list was the scary part.

Silly me.

the enemy of my enemy

This appeared first on my Mastodon account last night; it’s proven popular, so here it is – trivially expanded because I had to trim it more than I liked to fit in 750 characters – for here, too.

Some weeks ago, protesters at UW occupied an engineering building on campus, demanding that UW cut ties with Boeing over Israel’s war in Gaza.

“That’s fine,” I thought, and I started relaying news… until I saw their ebullient praise for Hamas and the October 7th attacks. Then I stopped.

Some people will roll their eyes at that reaction, noting – correctly! – that the Israeli government has done so much worse since. But that doesn’t make Hamas into good guys here. They are not.

For example, here’s translated Palestinian reporting on Hamas death squads killing Gazans trying to get food from non-Hamas aid stations, condemning them as “collaborators.”

It is an inconvenient truth that Hamas is a nightmare organisation – but it’s still a truth.

Don’t let Netanyahu’s crimes erase that.

Middle East Media Research Institute · Palestinian Authority Daily: Hamas Is Murdering Civilians Who Come To The Gaza Aid Distribution Centers (editorial in Al-Hayat Al-Jadida, June 19, 2025)

She is a former director and longtime member of the Taller de Historia Oral Andina (Workshop on Andean #Oral History). The Taller de Historia Oral Andina has conducted an ongoing critique of Western #epistemologies through #writings and #activism for nearly two decades. She is also an #activist who works directly with #IndigenousMovements in Bolivia, such as the #Katarista movement and the #CocaGrowersMovement.

2/2

en.wikipedia.org/wiki/Silvia_R

en.wikipedia.org · Silvia Rivera Cusicanqui - Wikipedia

As in most villages, the bar is a hub of life here. It has a family atmosphere: the boss at the counter, the boss's wife in the kitchen, and their son who comes to help on his days off.
The notary tells jokes to the electrician, who has stopped by for his break... The hunters gather there before and after the hunt... The roadmender has his own regular table.
It's a second home for everyone...

Under the winter sheets, the hot water bottle of dreams warms the soul. Behind the eyelids, eyes bloom with encounters, friendship, sharing... Sometimes you join me to play the cat and mouse game... and each, in turn, in a forest of childish laughter and cries, we chase each other… without ever managing to catch one another…

WHELP THAT WAS FAST

Guilty! Guilty, guilty, guilty.

Guilty.

All 34 counts. Every single one. Guilty as sin. It didn’t even take the jury that long. It was just long enough – they requested enough evidence and clarity to make sure they were crossing their Ts and dotting their Is and couldn’t be accused of ignoring the facts… and they were done.

And here’s the thing. Because there is a thing.

This was hard to do. This was very hard to do… historically. It’s not a thing done in the US, regardless of how much it should be done. But…

…the biggest thing making it hard is the convention – the unwritten rule – of not holding a former president to the law. That’s the biggest thing. That’s why Ford was able to get away with pardoning Nixon – an incredibly damaging act – just for one example.

But if all you do, your entire political career…

…if all you do…

…is attack, spit upon, break down every convention, every common ground, every civility and every unwritten rule and as many written laws as you can manage, smash them against the ground and shit on them, all for your own advantage and for your own power…

…eventually…

…those unwritten rules are not such a big deal anymore to anyone. You’ve made sure of that. You’ve broken every one you can, tearing down all the rules, written and unwritten, smashing all the conventions and agreements and social contracts that make everything work – the ones that can never really be written into law, because that’s not how they function, but that doesn’t make them any less important.

So when this particular contract, this particular convention, this particular gentleman’s agreement gets thrown out, as it has been today?

It’s your own. damn. fault.

He made this possible. Hell, for this jury – he made it easy. And it’s all on him.

Petard, how dost thou hoist me?

He did this to himself, and it is delicious.

[link] #politics #USPol #writing #uspol #uspolitics #writings

what drives me mad about LLMs in particular

Here’s what’s driving me crazy, okay? Here’s what’s driving me crazy about all this LLM shit in particular.

I made a baby version of this myself. Years ago, in UNDERGRAD, on a MAINFRAME, in FORTRAN IV, writing an English parsing and construction AI in a language that didn’t even have STRINGS.

I had to make up the language statistical data and did so over a hilariously minimal domain, because I was making it all up and even that much was a lot of work. But I knew what it would take to get the real data I’d need, and I couldn’t get that. I was thinking about what it would take to scan every library on campus, and thinking that would maybe be enough to start. Maybe.

So I just made up probabilities over a tiny subset of language – I think it knew like 150 words and how they all related or could relate to each other.
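(For anyone who’s never poked at one of these: the general shape of that toy, hand-invented transition probabilities over a tiny vocabulary sampled to string words together, looks roughly like the hypothetical Python sketch below. The vocabulary and the numbers are invented for illustration; this is not the original FORTRAN IV program.)

```python
import random

# Hypothetical illustration only: hand-authored ("made up") transition
# probabilities over a tiny vocabulary, in the spirit of the toy described
# above. Not the original FORTRAN IV program.
TRANSITIONS = {
    "<start>":      {"the": 0.5, "a": 0.3, "machines": 0.2},
    "the":          {"machine": 0.6, "library": 0.4},
    "a":            {"machine": 0.5, "conversation": 0.5},
    "machines":     {"talk": 1.0},
    "machine":      {"talks": 0.7, "thinks": 0.3},
    "library":      {"waits": 0.8, "talks": 0.2},
    "conversation": {"<end>": 1.0},
    "talk":         {"slowly": 0.4, "<end>": 0.6},
    "talks":        {"slowly": 0.4, "<end>": 0.6},
    "thinks":       {"slowly": 0.5, "<end>": 0.5},
    "waits":        {"<end>": 1.0},
    "slowly":       {"<end>": 1.0},
}

def generate(max_words: int = 10) -> str:
    """Walk the made-up transition table, sampling one next word at a time."""
    word, output = "<start>", []
    for _ in range(max_words):
        choices = TRANSITIONS.get(word, {"<end>": 1.0})
        word = random.choices(list(choices), weights=list(choices.values()))[0]
        if word == "<end>":
            break
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    for _ in range(3):
        print(generate())  # e.g. "the machine talks slowly"
```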

It was, obviously, a silly little toy.

And even then, when I showed it off, and had my limited little conversations with my very stupid bots, the chair of the department freaked out so bad that he walked out on me, because oh my god the machine was conversing with me.

I mean it. He left the room.

Later – presumably when he recovered his composure – he said it was truly extraordinary and gave me the best possible mark, but he clearly didn’t really want to talk to me about it any further, not that I really tried much – it was my last quarter and I was on my way out.

And that was just when it was my little toy.

But even then

E. VEN. THEN.

Even then, as dumb as it was, as limited as it was, it was shriekingly obvious that there had to be fundamental connections to actual understanding for it to have any actual intelligence at all. That you COULD NOT DO IT without that.

Not if you wanted it to actually fucking work.

Not if you wanted its output to have anything to do with reality.

Not if you wanted it to actually fucking think.

(I did try. I had some baby approaches to that, too. They were hopelessly inadequate except for – maybe – establishing a framework in terms of how to figure it out.)

It is a hard problem

and it is an obvious problem.

But now you’ve got these jackholes, these goddamn Blindsight cultists, these AGI spirit dancers and these EA “longtermers” becoming increasingly aware that they are not, in fact, here for the long term, and so in their panic have not only decided that knowledge is an entirely acontextual mechanical process, but that thinking isn’t actually real and that the actual physical universe doesn’t matter, because if you just throw enough stolen words on the fire somehow somewhere MAGIC WILL HAPPEN yielding STEP THREE: PROFIT and they’ll get to dry-hump their confirmation-bias god-computer all the way to Line Goes Up Forever Heaven, and they’re absolutely going to keep pushing this insane calliope of jumped-up spreadsheets until they get there…

…no matter how many people are consumed in their grasping desperation.

They’re pushing this stuff into health care.

Into HEALTH CARE.

Insurance companies are using it to deny claims. Doctors are being told to use it for their own notes and diagnoses…

…and it thinks Godzilla plays baseball, that a fictional character invented a real life piano key tie, and that you can improve your pizza sauce by adding glue.

And they know that.

And they do not fucking care.

That’s the thing.

[link] #tech #writing #writings

solarbird.net · what drives me mad about LLMs in particular – Solarbird{y|z|yz}, Collected