#cli

15 posts · 13 participants · 3 posts today

Just stumbled across the #scanimage #cli tool thanks to this blog post by @b0rk (jvns.ca/blog/2020/07/11/scanim). I have so far struggled with GUI tools (mostly simple-scan), which give very limited control over the actual scan process; scanimage is different: easy to use, very configurable, and clear about what it actually does. Should be straightforward to write a couple of scripts for my usual use cases 👍

Julia Evans: "scanimage: scan from the command line!"
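
For the curious, here's roughly what a first session with it looks like. Treat this as a sketch: --resolution and --mode are backend-specific options (scanimage --help lists the ones your scanner supports), and --format=png needs a reasonably recent sane-backends; older releases output pnm or tiff.

    scanimage -L                                             # list detected scanner devices
    scanimage --format=png --resolution 300 --mode Color > page.png   # one 300 dpi color scan to a file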

I am very impressed by the repo-diving and documentation construction that "Devin" does for DeepWiki.

"Qio" is a CLI that I built to use with a database structure I came up with to keep a knowledge base of things like CLI things and one-liners or SQL or URLs or team lists or whatever ("rainbow.toml" is the database, "q" is short for queue, and "qio" is the clever CLI tool name that nobody had taken yet).

This was written before I was into TDD, so it's not the *greatest* golang in the universe, but my choice at the time was to lean heavily on Viper as a framework, building the database out of its own default configuration using TOML. There's lots of room for improvement. ;)

So anyway, the docs in this repo are pretty complete and it seemed like a good candidate, so I fed the repo into DeepWiki and got a pretty amazing result, complete with an accurate illustration of my entirely invented data definition:

deepwiki.com/rainbowqio/qio

DeepWiki: "rainbowqio/qio" · "This document provides a comprehensive introduction to the qio repository, explaining its purpose as a knowledge management CLI tool and the foundational Rainbow/Almanac/Plug conceptual model that dri…"

"claude-trace (via) I've been thinking for a while it would be interesting to run some kind of HTTP proxy against the Claude Code CLI app and take a peek at how it works.

Mario Zechner just published a really nice version of that. It works by monkey-patching global.fetch and the Node HTTP library and then running Claude Code using Node with an extra --require interceptor-loader.js option to inject the patches."

simonwillison.net/2025/Jun/2/c

Simon Willison’s Weblog: "claude-trace"
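
For a sense of what that launch step looks like, here is a rough sketch of the mechanism the quote describes; the file name and the way the claude binary is located are illustrative assumptions, not claude-trace's actual layout.

    # Start Claude Code under Node with a preloaded module that patches
    # global.fetch and the Node HTTP library before any app code runs.
    node --require ./interceptor-loader.js "$(command -v claude)"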

When moving or copying large files between hard disks I like to watch the progress, which makes the wait more informative and bearable.

If you want to achieve the same thing, the "progress" package (available in the repositories of most distros) allows you to do it. After installing it, append "& progress -mp $!" to your mv or cp command, for example: mv file1 /destination-directory & progress -mp $!. The & backgrounds the copy, $! expands to its PID, and progress -m -p <pid> monitors that process until it finishes.

If you like, you can wrap this in a shell function; a plain alias such as alias mv='mv -v "$@" & progress -mp $!' won't work, because aliases can't take arguments and "$@" would stay empty. A sketch of the function follows.
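
A minimal version, named mvp here (my name, not the package's) to avoid shadowing mv itself:

    # Move verbosely in the background and watch the transfer with progress.
    # $! is the PID of the just-backgrounded mv; -m monitors it until it exits.
    mvp() {
        mv -v "$@" & progress -mp $!
    }

    mvp big-file.iso /destination-directory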

"I’ve been tracking llm-tool-use for a while. I first saw the trick described in the ReAcT paper, first published in October 2022 (a month before the initial release of ChatGPT). I built a simple implementation of that in a few dozen lines of Python. It was clearly a very neat pattern!

Over the past few years it has become very apparent that tool use is the single most effective way to extend the abilities of language models. It’s such a simple trick: you tell the model that there are tools it can use, and have it output special syntax (JSON or XML or tool_name(arguments), it doesn’t matter which) requesting a tool action, then stop.

Your code parses that output, runs the requested tools and then starts a new prompt to the model with the results.

This works with almost every model now. Most of them are specifically trained for tool usage, and there are leaderboards like the Berkeley Function-Calling Leaderboard dedicated to tracking which models do the best job of it.

All of the big model vendors—OpenAI, Anthropic, Google, Mistral, Meta—have a version of this baked into their API, either called tool usage or function calling. It’s all the same underlying pattern.

The models you can run locally are getting good at this too. Ollama added tool support last year, and it’s baked into the llama.cpp server as well.

It’s been clear for a while that LLM absolutely needed to grow support for tools. I released LLM schema support back in February as a stepping stone towards this. I’m glad to finally have it over the line.

As always with LLM, the challenge was designing an abstraction layer that could work across as many different models as possible. A year ago I didn’t feel that model tool support was mature enough to figure this out. Today there’s a very definite consensus among vendors about how this should work, which finally gave me the confidence to implement it."

simonwillison.net/2025/May/27/

Simon Willison’s Weblog: "Large Language Models can run tools in your terminal with LLM 0.26"
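
To make the shape of this concrete, here is roughly what the new support looks like from the terminal, going by the 0.26 release notes; treat the exact flags as my recollection and check llm --help before relying on them.

    # Hand the model an ad-hoc Python function as a tool and let it call it;
    # --td (tools debug) prints each tool call and its result as it happens.
    llm --functions 'def multiply(x: int, y: int) -> int: return x * y' \
        'What is 34234 * 213345?' --td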