toad.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
Mastodon server operated by David Troy, a tech pioneer and investigative journalist addressing threats to democracy. Thoughtful participation and discussion welcome.

#zluda

Benjamin Carr, Ph.D. 👨🏻‍💻🧬
New #ZLUDA 5 Preview Released For #CUDA On Non-NVIDIA #GPU
For now, the ability to run unmodified CUDA apps on non-#NVIDIA GPUs is focused on #AMD #Radeon RX 5000 series and newer, i.e. AMD Radeon GPUs supported by #ROCm. Besides the CUDA code samples, Geekbench has been one of the early targets for testing.
https://www.phoronix.com/news/ZLUDA-5-preview.43
🧿🪬🍄🌈🎮💻🚲🥓🎃💀🏴🛻🇺🇸
Uninstalling the #MicrosoftStore version of #Python and disabling #WindowsDefender so the #torch DLLs install properly, so I can run comfyui-zluda, a fork of #comfyui that uses #zluda, a shim that lets #CUDA applications use the #AMD #HIP SDK, so I can download #stableDiffusion and run it on my #Radeon #GPU, because #Amuse won't let me generate porn.
#ai #llm #generativeAI
Benjamin Carr, Ph.D. 👨🏻‍💻🧬
#ZLUDA Making Progress In 2025 On Bringing #CUDA To Non-NVIDIA #GPU
ZLUDA is an #opensource effort that started half a decade ago as a drop-in CUDA implementation for #Intel GPUs, was then funded for several years by #AMD as a CUDA implementation for #Radeon GPUs atop #ROCm, was open-sourced and then taken down, and has been pushing along a new path since last year. The current take on ZLUDA is a multi-vendor CUDA implementation for non-NVIDIA GPUs targeting #AI workloads and more.
https://www.phoronix.com/news/ZLUDA-Q2-2025-Update
Benjamin Carr, Ph.D. 👨🏻‍💻🧬
#ZLUDA v4 Released For Initial #CUDA Support On Non-NVIDIA #GPU
Back in October, ZLUDA decided to pursue a new life as an #opensource multi-GPU CUDA implementation with an emphasis on AI workloads. ZLUDA v4 was released as the first step toward that new codebase. But before getting too excited: ZLUDA v4 support is quite limited, and the only program currently known to work is Geekbench with its CUDA benchmarks.
https://www.phoronix.com/news/ZLUDA-v4-Released
Eva Winterschön
@dexter Here's the AMD-with-CUDA support/library/API/shim/translation thing that came up the other day:

> ZLUDA is a drop-in replacement for CUDA on non-NVIDIA GPUs. ZLUDA allows running unmodified CUDA applications on non-NVIDIA GPUs with near-native performance.

- https://vosen.github.io/ZLUDA/blog/zludas-third-life/
- https://github.com/vosen/ZLUDA

#forDiscussion #gpu #ai #cuda #amd #nvidia #zluda #opencl #machineLearning
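"Drop-in replacement" here means no recompilation: per the ZLUDA project's documentation, on Linux the shim is activated by putting its libcuda.so ahead of the system one in the loader's search path. A minimal sketch of that mechanism, assuming a hypothetical install under ~/zluda (the path and application name are placeholders, not taken from the posts above):

```shell
# Assumed (hypothetical) layout: ZLUDA's libcuda.so shim extracted to ~/zluda.
ZLUDA_DIR="$HOME/zluda"

# The dynamic linker searches LD_LIBRARY_PATH entries left to right, so
# prepending ZLUDA's directory makes an unmodified CUDA binary resolve its
# libcuda.so calls to the shim instead of NVIDIA's driver library.
export LD_LIBRARY_PATH="$ZLUDA_DIR${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

# An unmodified CUDA application would then be launched as usual, e.g.:
#   LD_LIBRARY_PATH="$ZLUDA_DIR:$LD_LIBRARY_PATH" ./some_cuda_app
echo "$LD_LIBRARY_PATH"
```

The override is per-process, so the rest of the system keeps resolving CUDA normally. Older ZLUDA releases reportedly documented a similar trick on Windows by placing the shim's nvcuda.dll next to the target executable.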
Benjamin Carr, Ph.D. 👨🏻‍💻🧬
#ZLUDA Takes On Third Life: #OpenSource Multi-GPU #CUDA Implementation Focused On AI
ZLUDA is being rebuilt to focus on multi-vendor GPU support, with a particular emphasis on #machinelearning / #AI workloads. Previously ZLUDA focused on enabling professional creator workloads; now it will focus on CUDA-based AI/#ML software. The new ZLUDA code will target #RDNA1 and newer GPUs along with the #ROCm 6.1+ compute stack.
https://www.phoronix.com/news/ZLUDA-Third-Life
ricardo :mastodon:
#AMD lawyers claw back #CUDA compatibility layer #ZLUDA
https://www.theregister.com/2024/08/09/amd_zluda_take_down/
Benjamin Carr, Ph.D. 👨🏻‍💻🧬
#AMD asks #developer to take down #ZLUDA, dev vows to rebuild
Earlier this year, AMD quietly stopped funding ZLUDA, the #opensource #CUDA #translationlayer project that allowed programs compiled for #Nvidia CUDA #GPUs to run on Radeon GPUs with the #ROCm software stack.
"The code that was previously here has been taken down at AMD's request," the developer wrote. "The code was released with AMD's approval through an email. AMD's legal department now says it's not legally binding."
https://www.tomshardware.com/pc-components/gpus/amd-asks-developer-to-take-down-open-source-zluda-dev-vows-to-rebuild-his-project
TheRadio.cc
You can listen to the latest from the open-source community directly on our show, including #ZLUDA: #AMD quietly funded a #CUDA translation layer.
https://theradio.cc/blog/2024/03/11/ll279-quack/
#opensource #nvidia #llm #ai #ki #machinelearning #freesoftware #floss #foss #protocols
MWibral
Wow, this seems quite amazing, especially with performance sometimes being better than pure ROCm:
https://www.phoronix.com/review/radeon-cuda-zluda
#rocm #rust #zluda