#metacognition


LLMs become more dangerous as they rapidly get easier to use

This is a concise summary by Ethan Mollick of what I increasingly see as a key factor driving the evolution of consumer-facing LLMs:

Using AI well used to be a pretty challenging process which involved crafting a prompt using techniques like chain-of-thought along with learning tips and tricks to get the most out of your AI. In a recent series of experiments, however, we have discovered that these techniques don’t really help anymore. Powerful AI models are just getting better at doing what you ask them to or even figuring out what you want and going beyond what you ask (and no, threatening them or being nice to them does not seem to help on average).

What limited truth there was to the inflated discourse of ‘prompt engineering’ has largely evaporated at this point, leaving us in a strange position. The conversational approach I’ve always advocated, literally treating the LLM as an interlocutor analogous to a human collaborator, remains the best way of getting the most out of these systems. But neither this nor ‘prompt engineering’ is necessary to get LLMs to perform usefully in real world contexts.
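To make that contrast concrete, here is a minimal sketch of the same request phrased in the older "prompt engineering" style and in the plain conversational style described above. The ask() helper and the example question are invented placeholders for illustration, not any particular vendor's API:

```python
# Sketch only: ask() is a hypothetical stand-in for whatever LLM client you use.
def ask(prompt: str) -> str:
    """Stand-in for a call to your own model or client."""
    raise NotImplementedError("wire this up to an actual LLM")

question = "Our newsletter open rate dropped from 42% to 31% last quarter. Why might that be?"

# Circa-2023 'prompt engineering' style: assigned role, explicit steps, chain-of-thought scaffolding.
engineered = (
    "You are an expert marketing analyst.\n"
    "Think step by step. First list possible causes, then rank them by likelihood,\n"
    "then recommend one diagnostic action for each of the top three.\n\n"
    f"Question: {question}"
)

# Conversational style: state the problem plainly, as you would to a colleague.
conversational = f"{question} I'd like a few plausible causes and how I could check each one."

# With current frontier models the two tend to yield comparably useful answers;
# the scaffolding mostly shapes the format rather than the substance.
for label, prompt in [("engineered", engineered), ("conversational", conversational)]:
    print(f"--- {label} ---\n{prompt}\n")
```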

This seems like an obviously positive thing for users. However, my concern is that the burden of articulation with LLMs, explaining what you want in a clear and substantive way, was a necessary condition for responsible use of these systems. That burden was never sustainable in a commercially viable product. It presupposed a consistent level of reflexivity, a degree of cultural capital and a fluency with writing which were always incompatible with the kind of mass-market software that has a user base of one billion plus.

However, it did ensure that users were thinking carefully about their use, because you literally couldn’t produce valuable outcomes with an LLM unless you were doing this. Now it’s very easy to get LLMs to infer what you want, and they succeed much of the time, at least to some degree. What’s lost in the process is the reflexivity which came prior to use (i.e. the moment when you stopped to think “what am I actually asking Claude to do here?”) and the learning which came through reflecting on the output and the interaction it generates, as your sense of what you were trying to do is refined and expanded through dialogue.

This is how Claude Opus 4 glossed my analysis here. It conveyed my own words back to me but it did it in a way which surfaced an element (“the pause”) which was only latent in my original phrasing (“the moment when”), leading me to reflect more on the temporal dimension to “the burden of articulation”:

The “burden of articulation” you describe served as a kind of productive friction – it forced users to crystallize their thinking before engaging with the tool. There’s something valuable in that pause, that moment of having to translate a vague intention into clear language. It’s reminiscent of how writing itself can clarify thinking, or how explaining a problem to someone else often helps you understand it better yourself.

In this sense friction with LLMs was a positive thing because it necessitated meta-cognition. The optimisation of the human-model interaction erodes a feature which I would argue was immensely important, even if its value only manifests outside of the interaction itself. It doesn’t, I think, level the playing field, because those with the necessary capital and fluency can still use LLMs in a deeper and more reflective way, with better outcomes emerging from the process.

But it does create worrying implications for organisations which build this practice into their roles. Earlier today I heard Cory Doctorow use the brilliant analogy of asbestos to describe LLMs being incorporated into digital infrastructure in ways which we will likely later have to remove at immense cost. What’s the equivalent analogy for the social practice of those operating within the organisations?

https://soundcloud.com/qanonanonymous/cory-doctorow-destroys-enshitification-e338

One Useful Thing · Mass Intelligence, by Ethan Mollick

Watch out for the "illusion of comprehension". Conditions like massed practice or rereading can give a false sense of understanding due to familiarity, but this doesn't guarantee real learning or retention for future exams. #Metacognition

The Mind as Semi-Solid Smoke

This post continues the series on Socratic Thinking, turning the space-and-place lens inward to examine the mind itself. The human mind can be thought of as an imperfect place with the ability to create its own insta-places to navigate ambiguity.

On the Trail (1889) by Winslow Homer. Original from The National Gallery of Art. Digitally enhanced by rawpixel.

Exploration in any real or conceptual space needs navigational markers with sufficient meaning. Humans are biologically predisposed to seek out and use navigational markers. This tendency is rooted in our neural architecture, emerges early in life, and is shared with other animals, reflecting its deep evolutionary origins (1, 2). Even the simplest life forms, performing chemotaxis, use the signal-field of food to navigate.

When you’re microscopic, the territory is the map; at human scale, we externalise those cues as landmarks—then mirror the process inside our heads. Just as cells follow chemical gradients, our thoughts follow self-made landmarks, yet these landmarks are vaporous.
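As a toy illustration of that point, here is a short sketch of an agent that navigates purely by sampling a local signal field and moving up the gradient, the way a chemotaxing cell follows food. The field, the numbers and the agent are all invented for the example:

```python
import math

FOOD = (8.0, 3.0)  # location of the peak of the signal field

def concentration(x: float, y: float) -> float:
    """Signal strength falls off with distance from the food source."""
    d2 = (x - FOOD[0]) ** 2 + (y - FOOD[1]) ** 2
    return math.exp(-d2 / 20.0)

def step(x: float, y: float, eps: float = 0.1, speed: float = 0.5):
    """Sample the field just ahead in x and y, then move toward higher signal."""
    gx = concentration(x + eps, y) - concentration(x - eps, y)
    gy = concentration(x, y + eps) - concentration(x, y - eps)
    norm = math.hypot(gx, gy) or 1.0
    return x + speed * gx / norm, y + speed * gy / norm

x, y = 0.0, 0.0
for _ in range(25):
    x, y = step(x, y)
print(f"final position: ({x:.2f}, {y:.2f})  (food at {FOOD})")
```

The agent never sees a map; the territory itself, sampled locally, is the map.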

From the outside our mind is a single place: it is our identity. Probe closer and our identity is nebulous, dissolving the way a city dissolves into smaller and smaller places the closer you look. We use our identity to create the first stable place in the world and then use other places to navigate life. However, these places come from unreliable sources: our internal and external environments. How do we know the places are even real, and do we have the knowledge to trust their reality? Well, we don’t. We can’t judge our mental landmarks false. Callard calls this normative self-blindness: the built-in refusal to saw off the branch we stand on.

Normative self-blindness is a trick to gloss over details and keep moving. Insta-places are conjured from our experience and are treated as solid no matter how poorly they are tied down by actual knowledge. We can accept that a place formed in the past was loosely made, an error, or that a place in the future is not yet well defined, unknown. However, in the moment, the places exist and we use them to see.

Understanding and accepting that our minds work this way is a key tenet of Socratic Thinking. It makes adopting the posture of inquiry much easier. Socratic inquiry begins by admitting that everyone’s guiding landmarks may be made of semi-solid smoke.

1. Chan, Edgar, Oliver Baumann, Mark A. Bellgrove, and Jason B. Mattingley. “From Objects to Landmarks: The Function of Visual Location Information in Spatial Navigation.” Frontiers in Psychology 3 (2012). https://doi.org/10.3389/fpsyg.2012.00304.

2. Freas, Cody A., and Ken Cheng. “The Basis of Navigation Across Species.” Annual Review of Psychology 73, no. 1 (January 4, 2022): 217–41. https://doi.org/10.1146/annurev-psych-020821-111311.

The educator panic over AI is real, and rational.
I've been there myself. The difference is I moved past denial to a more pragmatic question: since AI regulation seems unlikely (with both camps refusing to engage), how do we actually work with these systems?

The "AI will kill critical thinking" crowd has a point, but they're missing context.
Critical reasoning wasn't exactly thriving before AI arrived: just look around. The real question isn't whether AI threatens thinking skills, but whether we can leverage it the same way we leverage other cognitive tools.

We don't hunt our own food or walk everywhere anymore.
We use supermarkets and cars. Most of us Google instead of visiting libraries. Each tool trade-off changed how we think and what skills matter. AI is the next step in this progression, if we're smart about it.

The key is learning to think with AI rather than being replaced by it.
That means understanding both its capabilities and our irreplaceable human advantages.

1/3

AI isn't going anywhere. Time to get strategic:
Instead of mourning lost critical thinking skills, let's build on them through cognitive delegation—using AI as a thinking partner, not a replacement.

This isn't some Silicon Valley fantasy:
Three decades of cognitive research already mapped out how this works:

Cognitive Load Theory:
Our brains can only juggle so much at once. Let AI handle the grunt work while you focus on making meaningful connections.

Distributed Cognition:
Naval crews don't navigate with individual genius—they spread thinking across people, instruments, and procedures. AI becomes another crew member in your cognitive system.

Zone of Proximal Development:
We learn best with expert guidance bridging what we can't quite do alone. AI can serve as that "more knowledgeable other" (though it's still early days).
The table below shows what this looks like in practice:

2/3

Critical reasoning vs Cognitive Delegation

Old School Focus:

Building internal cognitive capabilities and managing cognitive load independently.

Cognitive Delegation Focus:

Orchestrating distributed cognitive systems while maintaining quality control over AI-augmented processes.
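As a simplified sketch of what that delegation-plus-quality-control loop could look like, the snippet below delegates routine synthesis to a model while keeping a human review gate. llm_summarize() is a hypothetical stand-in, not a real API:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    source: str
    summary: str
    approved: bool = False

def llm_summarize(text: str) -> str:
    """Stand-in for a call to your model of choice (the delegated 'grunt work')."""
    raise NotImplementedError("connect an actual LLM here")

def human_review(draft: Draft) -> Draft:
    """The non-delegable step: check the summary against the source and your intent."""
    print(f"SOURCE:\n{draft.source}\n\nSUMMARY:\n{draft.summary}\n")
    draft.approved = input("Accept this summary? [y/N] ").strip().lower() == "y"
    return draft

def delegate(sources: list[str]) -> list[Draft]:
    """Orchestrate the distributed system: the model drafts, the human judges."""
    return [human_review(Draft(s, llm_summarize(s))) for s in sources]
```

The delegation only works because the quality-control step stays with the person; drop human_review() and you are back to replacement rather than augmentation.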

We can still go for a jog or hunt our own deer, but to reach the stars we apes do what apes do best: use tools to build on our cognitive abilities. AI is a tool.

3/3

Replied in thread

@clacke

Re. Not anthropomorphizing LLMs

I'm a sucker for this. I'll apologise to an inanimate object if I walk into it.

Practical tips I find useful for sticking to this are:
1. Use the verb "I prompted" rather than I told or I asked.
2. State that the program "output" rather than it replied.
3. I avoid terms like "hallucination" altogether, because they are anthropomorphizations (the reality is that the computer program is doing exactly what it is instructed to do by the user); but if I were compelled to anthropomorphize, I would use "confabulation" rather than "hallucination".

#LLM #AI #GAN

Is humankind really so exceptional? Reflections on animal consciousness

In our 🧠 Is humankind really so exceptional? Science reveals that consciousness and metacognition exist in many animals. From rats to dolphins to monkeys, the evidence keeps accumulating. Our "exceptionality" is crumbling, inviting us to rethink our relationship with the animal kingdom. #ConscienceAnimale #Métacognition #ÉthiqueAnimale 🐘🐬🐒 Towards a continuum of…

homohortus31.wordpress.com/202
