toad.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
Mastodon server operated by David Troy, a tech pioneer and investigative journalist addressing threats to democracy. Thoughtful participation and discussion welcome.


#computerscience

23 posts · 20 participants · 0 posts today

How much computing power would you have needed circa 2022 to compute 10^28 sums of seven integers each? The sums were in the range of 1 million to 100 million, so 4 bytes each would be enough. Storage alone would be a tough problem unless you created some kind of summary table.

Was this the kind of thing you needed to rent time on a cluster for?
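For scale, here's a back-of-envelope sketch. The figures are assumptions on my part: ~6 integer additions per seven-term sum, and ~10^18 ops/s as a stand-in for Frontier, the fastest supercomputer of 2022.

```python
# Back-of-envelope feasibility check (assumed figures, not a benchmark).
sums = 10**28
ops = sums * 6                          # ~6 additions per seven-term sum
years = ops / 1e18 / (3600 * 24 * 365)  # wall time at ~1 exaflop (2022 peak)
print(f"compute: about {years:,.0f} years")  # roughly two millennia

bytes_needed = sums * 4                 # 4 bytes per stored sum
zettabytes = bytes_needed / 1e21
print(f"storage: about {zettabytes:,.0f} zettabytes")  # tens of millions of ZB
```

Even granting perfect efficiency, the arithmetic alone would have tied up an exascale machine for roughly 1,900 years, and 4 × 10^28 bytes dwarfs the world's entire installed storage (a few zettabytes in 2022) by seven orders of magnitude. So no rented cluster would have sufficed; some kind of summary or aggregation scheme would be unavoidable.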

For students in Years 10 to 13 at UK schools who are interested in #math and #physics, there is COMPOS, essentially an online STEM club organised by the University of Oxford, targeted at students whose school only offers sports clubs. It is in the process of being extended to also cover #biology, #chemistry, and #computerScience. Registration is open until 28 September. compos.web.ox.ac.uk/

compos.web.ox.ac.uk · Home: Find out about the University of Oxford's physics outreach programme

> Researchers claim to have used a #quantumComputer to factor a 2,048-bit #RSA integer.

> But the RSA number evaluated was the product of two prime factors that were too close together.

> As with a parlor magician's card deck that's been stacked for a card trick

> #Quantum #factorization is performed using sleight-of-hand numbers that have been selected to make them very easy to factorize using a #physics experiment

theregister.com/2025/07/17/qua

The Register · Quantum code breaking? You'd get further with an 8-bit computer, an abacus, and a dog · By Thomas Claburn
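The "prime factors too close together" trick the article describes is exactly the weakness that Fermat's classical factorization method exploits: if n = pq with p ≈ q, then n is a difference of two nearby squares and falls out after a handful of iterations on any laptop, no quantum hardware required. A minimal sketch (the demo primes are my own, chosen about 30 apart near 10^6):

```python
import math

def fermat_factor(n: int) -> tuple[int, int]:
    """Fermat's method: write n = a^2 - b^2 = (a - b)(a + b).
    Very fast when n's two factors are close together."""
    a = math.isqrt(n)
    if a * a < n:
        a += 1                    # start at ceil(sqrt(n))
    b2 = a * a - n
    while math.isqrt(b2) ** 2 != b2:  # loop until a^2 - n is a perfect square
        a += 1
        b2 = a * a - n
    b = math.isqrt(b2)
    return a - b, a + b

p, q = 1000003, 1000033           # demo primes only 30 apart
n = p * q                         # a stacked-deck "RSA-like" modulus
print(fermat_factor(n))           # recovers (1000003, 1000033) in one step
```

A genuine RSA modulus is generated with p and q far apart precisely so that this loop would need astronomically many iterations; picking close primes turns the "hard" factoring instance into the stacked deck the article is complaining about.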

🖥️ **How Overconfidence in Initial Choices and Underconfidence Under Criticism Modulate Change of Mind in Large Language Models**

🔗 doi.org/10.48550/arXiv.2507.03.

arXiv.org · How Overconfidence in Initial Choices and Underconfidence Under Criticism Modulate Change of Mind in Large Language Models

Large language models (LLMs) exhibit strikingly conflicting behaviors: they can appear steadfastly overconfident in their initial answers whilst at the same time being prone to excessive doubt when challenged. To investigate this apparent paradox, we developed a novel experimental paradigm, exploiting the unique ability to obtain confidence estimates from LLMs without creating memory of their initial judgments -- something impossible in human participants. We show that LLMs -- Gemma 3, GPT4o and o1-preview -- exhibit a pronounced choice-supportive bias that reinforces and boosts their estimate of confidence in their answer, resulting in a marked resistance to change their mind. We further demonstrate that LLMs markedly overweight inconsistent compared to consistent advice, in a fashion that deviates qualitatively from normative Bayesian updating. Finally, we demonstrate that these two mechanisms -- a drive to maintain consistency with prior commitments and hypersensitivity to contradictory feedback -- parsimoniously capture LLM behavior in a different domain. Together, these findings furnish a mechanistic account of LLM confidence that explains both their stubbornness and excessive sensitivity to criticism.