#informationtheory


#PhysicsJournalClub
"Model-free estimation of the Cramér–Rao bound for deep learning microscopy in complex media"
by I. Starshynov et al.

Nat. Photon. (2025)
doi.org/10.1038/s41566-025-016

As everybody who has ever tried to orient themselves in thick fog knows, scattering scrambles information. The question "how much information is still there?" is not particularly interesting, because the answer is "essentially all of it": elastic scattering can't destroy information. A much more interesting question is "how much information can we retrieve?" To even attempt an answer we need to be a bit more specific, so the authors placed a small reflective surface behind a scattering layer and asked how much information about its transverse position could be retrieved. This is a well-posed question, and the answer takes the form of a "Cramér–Rao bound" (en.wikipedia.org/wiki/Cram%C3%).
After estimating this bound, the authors investigate how well a trained neural network performs at the task, and show that a purpose-built convolutional neural network can almost reach the theoretical limit.
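For reference, here is the standard single-parameter form of the bound, stated in generic notation rather than the paper's: any unbiased estimator of the position has a variance no smaller than the inverse of the Fisher information of the measured data.

```latex
% Cramér–Rao bound for an unbiased estimator \hat{\theta} of a parameter \theta
% (here the target's transverse position), inferred from data x ~ p(x \mid \theta):
\operatorname{Var}(\hat{\theta}) \;\ge\; \frac{1}{I(\theta)},
\qquad
I(\theta) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}\,\ln p(x \mid \theta)\right)^{\!2}\right]
% I(\theta) is the Fisher information of the measurement: the larger it is,
% the more precisely \theta can in principle be estimated.
```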

[Conflict of interest: Ilya Starshynov (the first author) did his PhD in my group.]

Join us next Tuesday, 27 May 2025 at 17:00 CET!

🌀 Beyond The Edge Seminar #5
🎙️ Dr Thomas F. Varley (University of Vermont)
🗣️ “Dark information in complex systems”

Discover the synergistic patterns our usual network models overlook, with examples from neuroscience.

🔗 beyondtheedge.network/event/se


🤔 Is our universe trapped inside a black hole?
@Spacecom

「 each and every black hole in our universe could be the doorway to another "baby universe." These universes would be unobservable to us because they are also behind an event horizon, a one-way light-trapping point of no return from which light cannot escape, meaning information can never travel from the interior of a black hole to an external observer 」

space.com/space-exploration/ja

Space · "Is our universe trapped inside a black hole? This James Webb Space Telescope discovery might blow your mind", by Robert Lea

When reporting a credibility interval, maybe you, like me, are sometimes undecided between a 95%, a 90%, and an 89% interval (the last is common in the Bayesian literature). Well, it turns out that the 89% interval has the following special property, for what it's worth:

Knowing whether the true value lies inside or outside the 89% interval corresponds to almost exactly *0.5 shannons* of uncertainty (more precisely, 0.4999 Sh). That is, the uncertainty is half that of a 50% credibility interval, measured on the log scale of Shannon information.

The 90% interval corresponds to 0.469 Sh. The 95% one, to 0.286 Sh.

So if one reports 50% and 89% credibility intervals, one is reporting 1 Sh and 0.5 Sh of uncertainty.
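As a quick sanity check, these values are just the binary entropy of the yes/no event "the true value lies inside the p-credibility interval":

```python
from math import log2

def binary_entropy_sh(p: float) -> float:
    """Uncertainty, in shannons (bits), of a yes/no event with probability p."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

# Uncertainty about whether the true value falls inside a p-credibility interval.
for p in (0.50, 0.89, 0.90, 0.95):
    print(f"{p:.0%} interval: {binary_entropy_sh(p):.4f} Sh")

# Output:
# 50% interval: 1.0000 Sh
# 89% interval: 0.4999 Sh
# 90% interval: 0.4690 Sh
# 95% interval: 0.2864 Sh
```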

The remarks above don't pretend to be more than a curiosity :)

Today, 6 Sep 2024, I'll be joining a panel at Ars Electronica (State of the Archive: Current Approaches to Digital Archives and Archives for Digital [Art]), organised by Vanessa Hannesschläger. I'll announce the Distributed Archive project, which allows the Neural and V2_ websites and archive to be searched simultaneously, an outcome of the Alex Adriaansens Residency I was awarded in 2023, and I'll join the discussion on “current approaches to digitizing, preserving and (subsequently) presenting cultural heritage” with Katharina Pektor, Sandra Sonnleitner, and Chiara Zuanni. The panel takes place at POSTCITY, First Floor, Lecture Stage, between 15:00 and 16:00.

ars.electronica.art/hope/en/ev

What is the relationship between information, causation, and entropy?

The other day, I was reading a post from Corey S. Powell on how we are all ripples of information. I found it interesting because it resonated with my own understanding of information (i.e. it flattered my biases). We both seem to see information as something active rather than passive. In my case, I see it as fundamentally related to causation itself, more specifically as a snapshot of causal processing. Powell notes that Seth Lloyd has an excellent book on this topic, so I looked it up.

Lloyd’s 2006 book is called Programming the Universe, which by itself gives you an idea of his views. He sees the entire universe as a giant computer, specifically a quantum computer, and much of the book is about making a case for it. It’s similar to the “it from qubit” stance David Chalmers explores in his book Reality+. (I did a series of posts on Chalmers’ book a while back.)

One of the problems with saying the universe is a computer is that it invites an endless metaphysical debate, along with narrow conceptions of “computer” that lead people to ask things like what kind of hardware the universe might be running on. I’ve come to think a better strategy is to talk about the nature of computation itself. Then we can compare and contrast that nature with the universe’s overall nature, at least to the extent we understand it.

Along those lines, Chalmers argues that computers are causation machines. I think it helps to clarify that we’re talking about logical processing, which is broader than just calculation. I see logical processing as distilled causation: specifically, a high degree of causal differentiation (information) at the lowest energy levels currently achievable, in other words a high information-to-energy ratio.

The energy point is important, because high causal differentiation tends to be expensive in terms of energy. (Data centers are becoming a major source of energy consumption in the developed world, and although the brain is far more efficient, it’s still the most expensive organ in the body, at least for humans.)

Which is why computational systems always have input/output interfaces that reduce the energy levels of incoming effects from the environment to the levels of their internal processing, and amplify the energy of outgoing effects. (Think keyboards and screens for traditional PCs, or sense organs and muscles for nervous systems.)

Of course, there’s no bright line, no sharp threshold in the information/energy ratio where a system is suddenly doing computation. As a recent Quanta piece pointed out, computation is everywhere. But for most things, like stars, the magnitude of their energy level plays a much larger role in their causal effects on the environment than does their differentiation.

However, people like Lloyd or Chalmers would likely point out that the energy magnitude is itself a number, a piece of information, one that has computational effects on other systems. In a simulation of that system, the simulation wouldn’t have the same causal effects on other physical systems as the original, but it would within the environment of the simulation. (Simulated wetness isn’t wet, except for entities in the simulation.)

Anyway, the thing that really caught my eye with Lloyd was his description of entropy. I’ve covered before my struggles with the customary description of entropy as the amount of disorder in a system. Disorder according to whom? As usually described, it makes how much entropy a particular system has observer dependent, which seems problematic for a fundamental physics concept. My reconciliation is to think of entropy as disorder for transformation, or in engineering terms, for work.

Another struggle has been the relationship between entropy and information. I’ve long wanted to say that entropy and information are closely related, if not the same thing. That seems like the lesson from Claude Shannon’s theory of information, which uses an equation similar to Ludwig Boltzmann’s for entropy. Entropy is a measure of the complexity in a system, and higher values result in a system’s energy gradients being fragmented, making much of the energy in the system unavailable for transformation (work), at least without adding additional energy into the system.
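For concreteness, here are the two formulas being compared, in their standard textbook forms:

```latex
% Shannon entropy of a probability distribution {p_i}, measured in bits:
H \;=\; -\sum_i p_i \log_2 p_i
% Gibbs entropy of a physical system over microstates with probabilities {p_i}:
S \;=\; -k_B \sum_i p_i \ln p_i
% For W equally likely microstates (p_i = 1/W) this reduces to Boltzmann's
% S = k_B \ln W. The two expressions differ only by the constant factor
% k_B \ln 2 and the choice of logarithm base.
```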

However, people like Sean Carroll often argue that a high entropy state is one of low information, although Carroll does frequently note that there are several conceptions of “information” out there. His response makes sense for what is often called “semantic information”, that is, information whose meaning is known and useful to some kind of agent. The equivalence seems to hold more for “physical information”, the broader concept of information as generally used in physics (the kind whose possible loss in black holes causes so much hand wringing).

Lloyd seems to be on the same page. He sees entropy as information, although he stipulates that it’s hidden information, or unavailable information (similar to how energy is present but unavailable). But this again seems to result in entropy being observer dependent. If the information is available to you but not me, does that mean the system has higher entropy for me than it does for you? If so, then computers are high entropy systems since none of us have access to most of the current information in the device you’re using right now.

My reconciliation here is to include the observer as part of the accounting. So if a system is in a highly complex state, one you understand but I don’t, then the entropy of the you + system combination is lower than the entropy of the me + system combination. In other words, your knowledge, the correlations between you and the system, makes the combined you + system more ordered for transformation than the me + system combo. At least that’s my current conclusion.
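One way to sketch this accounting in standard information-theoretic terms is with conditional entropy: the correlations between observer and system show up as mutual information, which is exactly the reduction in the system’s entropy given the observer’s state.

```latex
% Entropy of system X conditioned on observer state Y:
H(X \mid Y) \;=\; H(X, Y) - H(Y) \;=\; H(X) - I(X;Y) \;\le\; H(X)
% The mutual information I(X;Y), i.e. the correlations between observer and
% system, is the amount by which knowing Y lowers the uncertainty about X.
```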

But that means that for any particular system considered in isolation, the level of entropy is basically the amount of complexity, of physical information, it contains. That implies that the ratio I was talking about above, of information to energy, is also a ratio of entropy to energy. And another way to refer to these computational systems, in addition to information processing systems, is as entropy processing systems, or entropy transformers.

This might seem powerfully counterintuitive because we’re taught to think of entropy as bad. But computational systems seem to be about harnessing their entropy, their complexity, and making use of it. And we have to remember that these aren’t closed systems. As noted above, they’re systems that require a lot of inbound energy. It’s that supply of energy that enables transformation of their highly entropic states. (It’s worth noting that these systems also produce a lot of additional entropy that requires energy to be removed, such as waste heat or metabolic waste.)

So computers are causation machines and entropy transformers. Which kind of sounds like the universe, but maybe in a very concentrated form. Viewing it this way keeps us more aware of the causal relations not yet captured by current conventional computers. And the energy requirements remind us that computation may be everywhere, but the useful versions only seem to come about from extensive evolution or engineering. As Chalmers notes in his book, highly computational systems don’t come cheap.

What do you think? Are there differences between physical information and entropy that I’m overlooking? And how would you characterize the nature of computation? Does a star, rock, or hurricane compute in any meaningful sense? What about a unicellular organism?


https://selfawarepatterns.com/2024/07/28/entropy-transformers/