#hpc

FCLC: For users of #OpenFOAM, your most used solver is: #hpc #CFD

Tom Hayward: Attention, Fortran lawyers!

A colleague posed an interesting one to me (assuming b is a 2D real array containing values):

OK:
real, dimension(:), allocatable :: a
a = b(1, :)

Error:
real, dimension(:), allocatable :: a
a = sum(b(2:,:), dim=1)
! (can be fixed by allocating a)

Is there a rule about automatic allocation requiring no temporaries? Or is it a compiler bug (in either case)?

#f90 #fortran #HPC

Janne Moren: This is big: #Spack has reached 1.0.

Spack is (effectively) a distro-independent package manager specifically for #HPC clusters. We've been evaluating it for years, and we will most likely use it for our next machine.

Really happy to see it reach stable status!

https://mast.hpc.social/@spack/114885155276285929

Dr. Juande Santander-Vela: @hannorein Also tag #Lustre #LustreFS #IO #IOOptimization #HPC to get more eyes on your request.

Hanno Rein: I've been thinking about using Lustre on my cluster to improve IO performance, but the documentation is really not that great. For example, I cannot find out whether I can install it on Debian or whether it requires some sort of RHEL. Does anyone know of a good, up-to-date tutorial or howto? #HPC

ADMIN magazine: Google Cloud Managed Lustre, a managed, high-performance parallel file system service, announced by Google and DDN
https://www.admin-magazine.com/News/Google-Cloud-Managed-Lustre-Now-Generally-Available?utm_source=mam
#HPC #EXAscaler #GoogleCloud #data #simulation #research #DDN

Blosc Development Team: 🗣️ Announcing Python-Blosc2 3.6.1

Unlock new levels of data manipulation with Blosc2! 🚀

We've introduced a major improvement: powerful fancy indexing and orthogonal indexing for Blosc2 arrays.

We've tamed the complexity of fancy indexing to make it intuitive, efficient, and consistent with NumPy's behavior. 💪

Read all about it on our blog! 📝 https://www.blosc.org/posts/blosc2-fancy-indexing/

Compress Better, Compute Bigger!

#Blosc2 #Python #DataScience #BigData #NumPy #Performance #HPC

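Since the announcement says Blosc2's new behavior is consistent with NumPy's, a plain-NumPy sketch (not Blosc2 itself; the array and index values are made up) can illustrate the two indexing styles being distinguished:

```python
import numpy as np

# Fancy vs. orthogonal indexing, shown with plain NumPy since the
# release says Blosc2 arrays now follow NumPy's semantics.
a = np.arange(16).reshape(4, 4)

# Fancy indexing: the two integer arrays are broadcast together and
# pick out element pairs -> a[0, 1] and a[2, 3].
pairs = a[[0, 2], [1, 3]]

# Orthogonal indexing: np.ix_ builds an open mesh, so the same index
# arrays select the full cross product of rows {0, 2} x cols {1, 3}.
block = a[np.ix_([0, 2], [1, 3])]

print(pairs.tolist())  # -> [1, 11]
print(block.tolist())  # -> [[1, 3], [9, 11]]
```

The difference matters because the same pair of index arrays yields a 1-D result in one style and a 2-D sub-block in the other.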
Andrew Jones (hpcnotes): #HPC Leadership scores so far

Q1: 97% of people got it correct
Q2: 76%
Q3: 73%
Q4: 84%
Q5: 89%
Q6: 88%
Q7: 65%
Q8: 78%
Q9: 90%
Q10: 70%

The most-selected wrong answer (chosen by 33% of people) was for Q7.

Nearly 20% of people got 10/10.

Still, nobody who said they don't need the course got 10/10.

Glenn K. Lockwood: I am a sucker for photos of cool #HPC infrastructure, and here is a dense GB200 NVL72 cluster going up somewhere in Canada (I think). Impressive to see this many racks in a row; the DC must have facility water, which is still uncommon in hyperscale.

Source: https://www.linkedin.com/posts/5cai_heres-a-peek-behind-the-curtain-at-the-early-activity-7350949703842189313-X3p_?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAU98N0BKzpkHNnW4i2sDGnIDTwgK7pQHXc

Ludovic Courtès: #JeRecrute (well, not me, but CNRS/Inria): work on #Guix in #HPC in a ✨ dream team ✨ in the public sector.
https://recrutement.inria.fr/public/classic/fr/offres/2025-09146

#LogicielLibre #ESR

HPC.social Jobs: New #HPC Job! ❤️
Title: HPC Software Scientist
Location: Stockholm, Sweden
Employer: KTH Royal Institute of Technology in Stockholm
Remote: Remote friendly
https://www.kth.se/lediga-jobb/844061?l=en

Nicolas MOUART-DAVID: This is already more intelligent: GFlops/Watt.

"JEDI once again claimed the No. 1 spot – JUPITER Exascale Development Instrument, a system from EuroHPC/FZJ in Germany. JEDI repeated its energy efficiency rating from the last list at 72.73 GFlops/Watt while producing an HPL score of 4.5 PFlop/s. JEDI is a BullSequana XH3000 machine with a Grace Hopper Superchip, an NVIDIA GH200 Superchip, Quad-Rail NVIDIA InfiniBand NDR200, and 19,584 total cores."

#HPC #Efficiency #environment

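As a back-of-envelope check (my arithmetic, not from the quoted article), the two quoted figures together imply JEDI's power draw during the HPL run:

```python
# Implied power draw from the quoted JEDI numbers: 4.5 PFlop/s HPL
# score divided by 72.73 GFlops/Watt energy efficiency.
hpl_gflops = 4.5e6        # 4.5 PFlop/s expressed in GFlop/s
gflops_per_watt = 72.73   # efficiency figure from the quote

power_watts = hpl_gflops / gflops_per_watt
print(f"Implied HPL power draw: {power_watts / 1e3:.1f} kW")  # ~61.9 kW
```

So a near-exaflop-class development system at this efficiency runs its HPL benchmark in roughly the power envelope of a few dozen homes.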
Andrew Jones (hpcnotes): Some #HPC fun for the week ahead: a quiz on HPC leadership topics.

https://forms.office.com/r/vCcEX690h2

(and no mention of #ai or #quantum)

FCLC: First open weights trillion parameter model: https://moonshotai.github.io/Kimi-K2/

#AI #HPC

Ludovic Courtès: Want to join a ✨ dream team ✨ to work with #Guix in #HPC? Let's talk!
https://recrutement.inria.fr/public/classic/fr/offres/2025-09146

Alan Sill: Shorter @glennklockwood: Academics care about knowledge creation and sharing, and when building computing systems they tend to build them to maximize those values. Commercial enterprises care more about profits and build systems to maximize those. Are these approaches incompatible? No, but the window of overlap in #HPC and #AI between academia and hyperscale commerce was brief and may have closed. Also, you have to decide how to spend your time in this world, and he is moving on. https://mast.hpc.social/@glennklockwood/114832950520089114

Glenn K. Lockwood: In the few days I have between jobs, I wanted to share an unvarnished perspective on what I've learned after spending three years working on supercomputing in the cloud. It's hastily written and lightly edited, but I hope others find it interesting: https://blog.glennklockwood.com/2025/07/lessons-learned-from-three-years-in.html

#HPC

Elio Campitelli: Now that parallel processing in purrr is officially out, I wanted to test mirai on an #HPC system. So far it seems to work relatively painlessly: just set up your PBS config, and mirai::daemons(n) will spin up n jobs.

Now, is it optimal to have the number of workers equal the number of jobs? If each job has 48 CPUs, each job could easily host several workers itself. Does anyone know if it's possible to run something like "100 workers distributed between 3 remote jobs"?
#RStats

https://www.tidyverse.org/blog/2025/07/purrr-1-1-0-parallel/

ADMIN magazine: In this week's ADMIN Update newsletter, Jeff Layton looks at how the Linux logger tool can be helpful for HPC admins
https://www.admin-magazine.com/Articles/Logger-in-HPC-System-Administration?utm_source=mam
#HPC #admin #logger #Linux #grep #scripts

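The article covers the `logger` command-line tool; as a rough analog (a sketch, not the article's method), the same pattern of stamping job milestones into syslog for later grepping can be done from Python's stdlib `syslog` module. The job and stage names below are invented for illustration:

```python
import syslog

def format_event(job: str, stage: str) -> str:
    """Build a greppable one-line record, e.g. 'job-42: stage done'."""
    return f"{job}: {stage}"

def log_stage(job: str, stage: str) -> None:
    """Send a job milestone to syslog under a fixed, searchable ident."""
    syslog.openlog(ident="hpcjob", facility=syslog.LOG_LOCAL0)
    syslog.syslog(syslog.LOG_INFO, format_event(job, stage))
    syslog.closelog()

# Hypothetical job name and stage, stamped into the system log.
log_stage("wrf-run-42", "preprocessing done")
```

With a fixed ident like "hpcjob", admins can later pull every milestone out of the logs with a single grep, which is the workflow the article describes for the CLI tool.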
dvdgc13: Embedding digital-skills training in the university curriculum is what will help spread the use of #HPC in the future.
But so does how the machines are designed, so that they become more accessible to new users (Snellius vs ARCHER2).
We didn't learn to use the command line in one day.
#stepup2025
