
#multithreading

HackerNoon:
Master multi-threading fundamentals! Learn process vs threads, hardware vs software threads, hyperthreading & concurrent programming for scalable apps. https://hackernoon.com/learning-about-threads-an-essential-guide-for-developers #multithreading
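
As a minimal sketch of the process-vs-thread distinction the article advertises (this example is not from the article): a forked process gets its own copy of memory, while a thread shares the parent's address space.

```c
/* Minimal sketch (not from the linked article) of process vs. thread:
 * a forked child writes to a copy of memory, a thread writes to shared memory.
 * Build with: cc -pthread process_vs_thread.c -o process_vs_thread */
#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static int shared = 0;

static void *thread_fn(void *arg) {
    (void)arg;
    shared = 42;               /* same address space: main() sees this */
    return NULL;
}

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {            /* child process: its own (copy-on-write) memory */
        shared = 7;            /* invisible to the parent */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("after fork + exit: shared = %d\n", shared);   /* still 0 */

    pthread_t t;
    pthread_create(&t, NULL, thread_fn, NULL);
    pthread_join(t, NULL);
    printf("after thread join: shared = %d\n", shared);   /* 42 */
    return 0;
}
```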

Felix Palmen:
Fixed cancelling a thread job in #poser's #threadpool. Using a semaphore to do this seems reliable 😎

Oh my. #Multithreading, #synchronization, async #Unix #signals, this is pure "fun" ... 🙈

https://github.com/Zirias/poser/commit/aa4e02b728a549f0e3c4687750b90749d48fcfdc

#C #coding
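
For readers wondering what semaphore-based cancellation can look like, here is a minimal generic sketch in C with POSIX semaphores. It is not poser's implementation (see the linked commit for that); it shows one common pattern, where the job polls a semaphore that the canceller posts and then exits cooperatively.

```c
/* Generic sketch of cooperative job cancellation via a semaphore;
 * not poser's code -- see the linked commit for the real thing.
 * Build with: cc -pthread cancel_demo.c -o cancel_demo */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

static sem_t cancel_sem;         /* posted once to request cancellation */

static void *job(void *arg) {
    (void)arg;
    for (int step = 0; ; step++) {
        /* sem_trywait succeeds only after the canceller posted the semaphore */
        if (sem_trywait(&cancel_sem) == 0) {
            printf("job: cancellation requested, cleaning up after step %d\n", step);
            return NULL;
        }
        usleep(100 * 1000);      /* simulate one unit of work */
        printf("job: finished step %d\n", step);
    }
}

int main(void) {
    sem_init(&cancel_sem, 0, 0);

    pthread_t t;
    pthread_create(&t, NULL, job, NULL);

    sleep(1);                    /* let the job run for a while ... */
    sem_post(&cancel_sem);       /* ... then ask it to stop */

    pthread_join(t, NULL);
    sem_destroy(&cancel_sem);
    return 0;
}
```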

Dr. Moritz Lehmann:
#FluidX3D #CFD v3.2 is out! I've implemented the much requested #GPU summation for object force/torque; it's ~20x faster than #CPU #multithreading. 🖖😋
Horizontal sum in #OpenCL was a nice exercise - first local memory reduction and then hardware-supported atomic floating-point add in VRAM, in a single-stage kernel. Hammering atomics isn't too bad as each of the ~10-340 workgroups dispatched at a time does only a single atomic add.
Also improved volumetric #raytracing!
https://github.com/ProjectPhysX/FluidX3D/releases/tag/v3.2
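
A sketch of what such a single-stage sum can look like, written in OpenCL C (a C dialect). This is illustrative only, not FluidX3D's kernel: the kernel and buffer names are made up, and the final add uses a portable atomic_cmpxchg emulation rather than the hardware floating-point atomics described in the post.

```c
// Illustrative OpenCL C sketch, not FluidX3D's kernel. The final atomic float
// add is emulated with atomic_cmpxchg on the float's bit pattern (OpenCL 1.1+).
void atomic_add_f(volatile __global float *addr, const float val) {
    union { float f; uint u; } old_v, new_v;
    do {
        old_v.f = *addr;
        new_v.f = old_v.f + val;
    } while (atomic_cmpxchg((volatile __global uint *)addr, old_v.u, new_v.u) != old_v.u);
}

__kernel __attribute__((reqd_work_group_size(256, 1, 1)))
void sum_forces(__global const float *force, __global float *total, const uint n) {
    const uint gid = get_global_id(0);
    const uint lid = get_local_id(0);
    __local float scratch[256];

    // each work-item loads one value (or 0 past the end of the array)
    scratch[lid] = (gid < n) ? force[gid] : 0.0f;
    barrier(CLK_LOCAL_MEM_FENCE);

    // stage 1: tree reduction in local (on-chip) memory
    for (uint s = 128u; s > 0u; s >>= 1u) {
        if (lid < s) scratch[lid] += scratch[lid + s];
        barrier(CLK_LOCAL_MEM_FENCE);
    }

    // stage 2: one atomic add per workgroup into the global accumulator in VRAM
    if (lid == 0u) atomic_add_f(total, scratch[0]);
}
```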

Giuseppe Bilotta:
Remember when I mentioned we had ported our #fire propagation #cellularAutomaton from #Python to #Julia, gaining performance and the ability to parallelize more easily and efficiently?

A couple of days ago we had to run another big batch of simulations, and while things progressed well at the beginning, we saw the parallel threads apparently hanging one by one until the whole process sat there doing who knows what.

Our initial suspicion was that we had come across some weird #JuliaLang issue with #multithreading, which seemed to be confirmed by some posts we found on the Julia forums. We tried the workarounds suggested there, to no avail. We tried a different number of threads, and this led to the hang occurring after a different percent completion. We tried restarting the simulations, skipping the ones already done. It always got stuck at the same place (for the same number of threads).

So, what was the problem?

1/n

EnigmaRotor:
@stefano @ricardo #multithreading

Royce Williams:
Multithreaded CLI developers: let your users configure the number of threads.

Entire classes of use cases are hiding inside that will make *your* life easier as a dev -- and threads=1 is usually not hard to add.

One example: if your multithreaded tool works significantly faster on a single file when I force your tool to just use a single thread and parallelize it with parallel --pipepart --block instead, then either:

1. you might decide to develop sharding the I/O of the physical file yourself, or
2. you might consciously decide to *not* develop it, and leave that complexity to parallel (which is fine!)

But if your tool has no threads=N option, I have no workaround.

Configurable thread count lets me optimize in the meantime (or instead).

#CLI #multithreading
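
A thread-count option really is cheap to add. Below is a hedged sketch in C, not modeled on any particular tool (the flag name, default, and worker are illustrative): --threads/-t defaults to the number of online CPUs but can be forced down to 1.

```c
/* Generic sketch of a --threads/-t option; not any specific tool's CLI.
 * Build with: cc -pthread threads_flag.c -o threads_flag */
#include <getopt.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void *worker(void *arg) {
    printf("worker %ld started\n", (long)arg);
    /* ... real work would go here ... */
    return NULL;
}

int main(int argc, char **argv) {
    long nthreads = sysconf(_SC_NPROCESSORS_ONLN);   /* sensible default */

    static const struct option opts[] = {
        { "threads", required_argument, NULL, 't' },
        { 0, 0, 0, 0 }
    };
    int c;
    while ((c = getopt_long(argc, argv, "t:", opts, NULL)) != -1) {
        if (c == 't') nthreads = strtol(optarg, NULL, 10);
        else return 1;
    }
    if (nthreads < 1) nthreads = 1;    /* threads=1 is a first-class use case */

    pthread_t *t = malloc(sizeof(pthread_t) * (size_t)nthreads);
    for (long i = 0; i < nthreads; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (long i = 0; i < nthreads; i++)
        pthread_join(t[i], NULL);
    free(t);
    return 0;
}
```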

Emilia Jarochowska 🇺🇦🌱:
Have you ever programmed a human computer? Having 30 people walking around the room to exchange information between RAM addresses and CPU registers, and human CPUs executing operations on the clock, is a very special experience.*

This week I learned more than in a ~year of self-study, thanks to the 16th Advanced Scientific Programming in #Python school https://aspp.school
We covered version control, packaging, testing, debugging, computer architecture, some #numpy and #pandas fu, programming patterns (aka what goes into a class and what doesn't), big-O to understand the scaling of various operations and how to find the fastest one for the given data type and size, and an intro to #multithreading and #multiprocessing 🍭

A personal highlight for me was pair programming. I never thought writing code with a buddy would be so much fun, but I learned a lot from my buddies and now I don't want to go back to writing code alone 😅

Very indebted to the teachers and organizers: https://aspp.school/wiki/faculty. If you ever meet one of those people, please buy them a drink for what they have done for the state of code karma in the universe.

*Our human computer didn't manage to execute the simplest sorting algorithm and the CPUs started to sweat; we experienced what happens when code is ambiguous and imprecise 😱🫨

Diego Fernando:
No Gods, no Kings, no Global Interpreter Locks.

https://geekpython.in/gil-become-optional-in-python

#python #python3_13 #multithreading

Benjamin Carr, Ph.D. 👨🏻‍💻🧬:
#LZ4 v1.10 Introduces #MultiThreading Support For Major #Compression Speedups

LZ4 1.10 has been dubbed the "multi-cores edition", with this version adding multi-threading support to help speed up compression, now that modern NVMe storage is so much faster that there's a real need to make compressing data even faster.
https://www.phoronix.com/news/LZ4-1.10-Multi-Threading

Aaditya Kediyal:
🚀 Excited to Share My Latest Blog on Threads and Concurrency in Linux! 🚀

🔗 https://dev.to/iaadidev/how-threads-and-concurrency-work-in-linux-systems-233c

#Linux #Concurrency #Threads #MultiThreading #Programming #SoftwareDevelopment

IT News:
Make Your Code Slower With Multithreading - With the performance of modern CPU cores plateauing recently, the main performance... - https://hackaday.com/2024/06/07/make-your-code-slower-with-multithreading/

#multithreading #softwarehacks #performance #profiling #spinlocks #syscall #futex #mutex #perf
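
The hashtags point at the usual suspect: lock contention. As a hedged illustration (not taken from the linked article), the C sketch below makes several threads fight over one tiny critical section; behind a user-space spinlock the waiters burn CPU, while a pthread mutex (futex-backed on Linux) lets them sleep, which is often the faster choice under heavy contention. Timing the two runs, e.g. with perf stat, makes the difference visible.

```c
/* Illustration of lock contention (not from the linked article): many threads
 * incrementing one counter behind either a spinlock or a mutex.
 * Build with: cc -O2 -pthread contention.c -o contention */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define THREADS 8
#define ITERS   1000000

static atomic_flag spin = ATOMIC_FLAG_INIT;             /* user-space spinlock */
static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static long counter;
static int use_spinlock;

static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        if (use_spinlock) {
            while (atomic_flag_test_and_set_explicit(&spin, memory_order_acquire))
                ;                                        /* busy-wait: burns a core */
            counter++;
            atomic_flag_clear_explicit(&spin, memory_order_release);
        } else {
            pthread_mutex_lock(&mtx);                    /* contended waiters sleep */
            counter++;
            pthread_mutex_unlock(&mtx);
        }
    }
    return NULL;
}

static void run(int spin_mode) {
    use_spinlock = spin_mode;
    counter = 0;
    pthread_t t[THREADS];
    for (int i = 0; i < THREADS; i++) pthread_create(&t[i], NULL, bump, NULL);
    for (int i = 0; i < THREADS; i++) pthread_join(t[i], NULL);
    printf("%s: counter = %ld\n", spin_mode ? "spinlock" : "mutex", counter);
}

int main(void) {
    run(1);   /* time each run separately, e.g. with `perf stat`, to compare */
    run(0);
    return 0;
}
```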

synlogic:
Malconcurrency: A Proposed New Term
https://synystron.substack.com/p/malconcurrency-a-proposed-new-term

New post on my blog, related to software engineering, programming, and multi-threading.

#concurrency #raceconditions #dataraces #threading #multithreading #concurrentprogramming

CrystalLanguage:
We're happy to announce that Julien Portalier (ysbaddaden) is rejoining the Crystal Core Team. He's working on Crystal at Manas.Tech with a focus on improving multi-threading.
https://crystal-lang.org/2024/02/28/the-return-of-the-golden-man/
#CrystalLang #Multithreading

Leslie Lamport, of LaTeX fame, is a very accomplished mathematician and computer scientist, with a Turing Award for his “fundamental contributions to the theory and practice of distributed and concurrent systems”. He just published a draft of his new book:

"A science of concurrent programs"

lamport.azurewebsites.net/tla/

True to his pedagogic approach to everything he does, "The book assumes only that you know the math one learns before entering a university." Even the appendices are fantastic. I can only wish to remain this lucid at 82.

Making a fresh #Introduction since I've been on Mastodon a year and tons of new folks have joined since.

I've been a #programmer for many decades

fave lang now is #Golang, but I'm also strong in and willing to work in #C, #Java, or #Python, ideally on #Linux or another #UNIX-like system

strengths/interests:

#performance & #scalability

#parallelism #concurrency & #multithreading

#distributed systems

solving legacy #Heisenbugs

#troubleshooting

R&D. #innovation

tech #leadership, esp for small teams or startups

At what point does setting more threads for OpenBLAS actually help?

For example, I have an SVD operation in #RStats on largish matrices (6000 rows by 6000 columns; doing an inverse), where the default BLAS on Ubuntu takes ~20 min.

OpenBLAS with 1 or 4 threads takes ~2 min (a 10x speedup!). With 4 threads I can see the additional cores being used, but the overall time is the same as with 1 thread.

Is there some magic size where using more threads for SVD will actually help?
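
One empirical way to find the crossover, sketched below in C rather than R, assuming OpenBLAS and LAPACKE are installed and that OpenBLAS exports openblas_set_num_threads (setting OPENBLAS_NUM_THREADS in the environment before starting R is the equivalent knob there): time LAPACK's divide-and-conquer SVD (dgesdd) over a grid of matrix sizes and thread counts. The sizes and thread counts here are arbitrary placeholders.

```c
/* Hedged benchmark sketch: where do extra OpenBLAS threads start paying off
 * for SVD? Assumes linking against OpenBLAS + LAPACKE, e.g.:
 *   cc -O2 svd_bench.c -o svd_bench -llapacke -lopenblas */
#include <lapacke.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* exported by OpenBLAS; alternatively set OPENBLAS_NUM_THREADS in the environment */
extern void openblas_set_num_threads(int num_threads);

static double now(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void) {
    const int sizes[]   = { 500, 1000, 2000, 4000, 6000 };
    const int threads[] = { 1, 2, 4, 8 };

    for (size_t si = 0; si < sizeof sizes / sizeof *sizes; si++) {
        const int n = sizes[si];
        double *a0 = malloc((size_t)n * n * sizeof *a0);  /* reference matrix */
        double *a  = malloc((size_t)n * n * sizeof *a);   /* working copy */
        double *s  = malloc((size_t)n * sizeof *s);       /* singular values */
        for (size_t i = 0; i < (size_t)n * n; i++) a0[i] = rand() / (double)RAND_MAX;

        for (size_t ti = 0; ti < sizeof threads / sizeof *threads; ti++) {
            openblas_set_num_threads(threads[ti]);
            for (size_t i = 0; i < (size_t)n * n; i++) a[i] = a0[i];  /* dgesdd overwrites a */

            double t0 = now();
            /* singular values only ('N'); U and VT are not referenced */
            LAPACKE_dgesdd(LAPACK_COL_MAJOR, 'N', n, n, a, n, s, NULL, 1, NULL, 1);
            printf("n=%5d  threads=%d  %.2f s\n", n, threads[ti], now() - t0);
        }
        free(a0); free(a); free(s);
    }
    return 0;
}
```

Whether more threads help typically depends on the matrix being large enough for the per-thread work to outweigh synchronization and memory-bandwidth limits, so looking at where the timings diverge across thread counts is more informative than any single size.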