#promptengineering


The educator panic over AI is real, and rational.
I've been there myself. The difference is I moved past denial to a more pragmatic question: since AI regulation seems unlikely (with both camps refusing to engage), how do we actually work with these systems?

The "AI will kill critical thinking" crowd has a point, but they're missing context.
Critical reasoning wasn't exactly thriving before AI arrived: just look around. The real question isn't whether AI threatens thinking skills, but whether we can leverage it the same way we leverage other cognitive tools.

We don't hunt our own food or walk everywhere anymore.
We use supermarkets and cars. Most of us Google instead of visiting libraries. Each of those trade-offs changed how we think and which skills matter. AI is the next step in this progression, if we're smart about it.

The key is learning to think with AI rather than being replaced by it.
That means understanding both its capabilities and our irreplaceable human advantages.

1/3

AI isn't going anywhere. Time to get strategic:
Instead of mourning lost critical thinking skills, let's build on them through cognitive delegation—using AI as a thinking partner, not a replacement.

This isn't some Silicon Valley fantasy:
Three decades of cognitive research have already mapped out how this works:

Cognitive Load Theory:
Our brains can only juggle so much at once. Let AI handle the grunt work while you focus on making meaningful connections.

Distributed Cognition:
Naval crews don't navigate with individual genius—they spread thinking across people, instruments, and procedures. AI becomes another crew member in your cognitive system.

Zone of Proximal Development:
We learn best when expert guidance bridges the gap between what we can do alone and what we can't quite manage yet. AI can serve as that "more knowledgeable other" (though it's still early days).
The table below shows what this looks like in practice:

2/3

Critical reasoning vs. cognitive delegation

Old-school focus:
Building internal cognitive capabilities and managing cognitive load independently.

Cognitive delegation focus:
Orchestrating distributed cognitive systems while maintaining quality control over AI-augmented processes.

We can still go for a jog or hunt our own deer, but to reach the stars we apes do what apes do best: use tools to extend our cognitive abilities. AI is a tool.
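To make the "cognitive delegation focus" column above concrete, here is a minimal sketch of a delegate-then-verify loop. None of this comes from the thread: ask_llm, summarize_sources, and review are hypothetical names, and the stub just echoes its prompt so the example runs without any API key.

```python
# Sketch: delegate the grunt work (summarizing) to a model,
# keep the quality control and the synthesis with the human.

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM client you actually use."""
    return f"[model output for: {prompt[:40]}...]"

def summarize_sources(sources: list[str]) -> list[str]:
    # Delegated step: compress each source into a few bullet points.
    return [ask_llm(f"Summarize the key claims in 3 bullets:\n\n{text}") for text in sources]

def review(summaries: list[str], originals: list[str]) -> list[str]:
    # Quality control stays human: spot-check each summary against its source.
    approved = []
    for summary, original in zip(summaries, originals):
        print("SOURCE EXCERPT:", original[:120])
        print("MODEL SUMMARY:", summary)
        if input("Keep this summary? [y/n] ").strip().lower() == "y":
            approved.append(summary)
    return approved

if __name__ == "__main__":
    sources = ["first article text...", "second article text..."]
    vetted = review(summarize_sources(sources), sources)
    # The synthesis, i.e. the actual thinking, remains a human step.
    print("\n".join(vetted))
```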

3/3

Been quiet on here lately because I’ve been deep in the AI zone.
Testing workflows, refining prompts, and building a tool I wish I had months ago.

I just finished The Prompt Toolkit for Solo Creators: 15+ prompts, templates, and a 7-day AI challenge to help solo builders actually get results from ChatGPT (not just trash that needs more work).

If you’re building alone and want your AI to actually help, not give you a headache:
👉 graphicpixels.gumroad.com/l/pr


"Anthropic publish most of the system prompts for their chat models as part of their release notes. They recently shared the new prompts for both Claude Opus 4 and Claude Sonnet 4. I enjoyed digging through the prompts, since they act as a sort of unofficial manual for how best to use these tools. Here are my highlights, including a dive into the leaked tool prompts that Anthropic didn’t publish themselves.

Reading these system prompts reminds me of the thing where any warning sign in the real world hints at somebody having done something extremely stupid in the past. A system prompt can often be interpreted as a detailed list of all of the things the model used to do before it was told not to do them.

I’ve written a bunch about Claude 4 already. Previously: Live blogging the release, details you may have missed and extensive notes on the Claude 4 system card.

Throughout this piece any sections in bold represent my own editorial emphasis."

simonwillison.net/2025/May/25/
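For anyone who has only met system prompts through these published excerpts: the prompts Anthropic shares apply to their own chat apps, while API callers supply their own. Below is a minimal sketch using Anthropic's Python SDK; the model ID and the prompt text are placeholders I chose, not anything from Simon's post, so check the current docs before copying.

```python
# Minimal sketch: supplying your own system prompt via the Anthropic Messages API.
# Requires `pip install anthropic` and an ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID; verify against current docs
    max_tokens=512,
    # Your own rules go here, separate from whatever Anthropic ships in its apps.
    system="You are a terse technical editor. Answer in plain prose, no lists.",
    messages=[{"role": "user", "content": "Why do published system prompts read like lists of past mistakes?"}],
)

print(response.content[0].text)
```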


🤖 You are aware that LLMs don’t "think" like humans? Their responses are generated probabilistically—one token at a time. Understanding tokenization is key to understanding the limitations of AIs.

www.alanbonnici.com/2025/05/ai-prompt-engineering-use-code-not-words.html
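If you want to see the token-by-token view for yourself, here is a small sketch using OpenAI's tiktoken library (my choice of example, not something from the linked article); other models use different tokenizers, so the exact splits will vary.

```python
# Peek at how a tokenizer actually splits text (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "Understanding tokenization is key to understanding LLM limitations."
ids = enc.encode(text)

print(ids)                              # integer token IDs, the only thing the model sees
print([enc.decode([i]) for i in ids])   # the same text, split into token pieces
```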

This is how the #AI closed my long work session today.

"You basically kicked HTTPS in the balls and made DNS your butler"

Maybe, if you're not getting the results you were expecting from your local friendly #LLM pal...
...you talk rong to it?

It was probably going to use another B-word, but the safeties kicked in so fast, I could see the smoke.
#promptengineering