#opencl

kandid:
A little bit like #fractalFlame
Made with #openFrameworks and #OpenCL
Janne Moren:
I wish #pytorch wasn't CUDA/ROCm only :(
I know I *can* use the nodes at work, but that's not the point. I want to use my own new toy, not somebody else's.
Any DL framework out there with good support for #Vulkan or #OpenCL?
Dr. Moritz Lehmann:
My #IWOCL 2025 Keynote presentation is online! 🖖🧐
Scaling up #FluidX3D #CFD beyond 100 Billion cells on a single computer - a story about the true cross-compatibility of #OpenCL
https://www.youtube.com/watch?v=Sb3ibfoOi0c&list=PLA-vfTt7YHI2HEFrpzPhhQ8PhiztKhHU8&index=1
Slides: https://www.iwocl.org/wp-content/uploads/iwocl-2025-moritz-lehmann-keynote.pdf
Dr. Moritz Lehmann:
I just uploaded the 5000th #OpenCL hardware report to @sascha's gpuinfo.org database! 🖖🥳 And guess what #GPU I reserved the spot for: #Intel Arc B580 #Battlemage 🟦
https://opencl.gpuinfo.org/displayreport.php?id=5000
I have contributed 4.2% (211) of all entries. 🖖🫡
AdventureTense:
If you're using darktable 5.0.1, the latest update on the Fedora Linux repo (Flatpak) may have just added OpenCL support for Radeon GPUs (at least it did for my RX 6600).
The Flathub version doesn't seem to add OpenCL (currently), so it may be a Fedora thing.
I have not installed the RPM version so far, so I'm not sure about that package.
#AMD #Radeon #Darktable #OpenCL
Lukas Weidinger:
I’m thinking of #compiling #darktable from source so that it’s better optimized for my processor.
Anybody have experience with its potential? #question #followerpower
I’m generally OK with how fast the Flatpak runs on my i7-1255 laptop. However, with such an iterative workflow, I feel that one has much to gain from slight improvements via #opencl and AVX.
Dr. Moritz Lehmann:
What an honor to start the #IWOCL conference with my keynote talk! Nowhere else do you get to talk to so many #OpenCL and #SYCL experts in one room! I shared some updates on my #FluidX3D #CFD solver: how I optimized it at the smallest level of a single grid cell, and how I scaled it up on the largest #Intel #Xeon6 #HPC systems, which provide more memory capacity than any #GPU server. 🖖😃
Dr. Moritz Lehmann:
Just arrived in wonderful Heidelberg, looking forward to presenting the keynote talk at #IWOCL tomorrow!! See you there! 🖖😁
https://www.iwocl.org/ #OpenCL #SYCL #FluidX3D #GPU
Giuseppe Bilotta:
I'm liking the class this year. Students are attentive and participating, and the discussion is always productive.

We were discussing the rounding up of the launch grid in #OpenCL to avoid the catastrophic performance drops that come from the inability to divide the “actual” work size by anything smaller than the maximum device local work size, and how to compute the “rounded-up” work size.

The idea is this: given the work size N and the local size L, we have to round N up to the smallest multiple of L that is not smaller than N. This effectively means computing D = ceil(N/L) and then using D*L.

There are several ways to compute D, but on the computer, working only with integers and knowing that integer division always rounds down, what is the “best way”?

D = N/L + 1 works well if N is not a multiple of L, but gives us 1 more than the intended result if N *is* a multiple of L. So we want to add the extra 1 only if N is not a multiple. This can be achieved for example with

D = N/L + !!(N % L)

which leverages the fact that !! (double logical negation) turns any non-zero value into 1, leaving zero as zero. So we round *down* (which is what the integer division does) and then add 1 if (and only if) there is a remainder to the division.

This is ugly not so much because of the !!, but because the modulus operation % is slow.

1/n
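A minimal sketch in plain C of the round-up computation described above (the function names are mine, not from the post); the second variant, which biases the numerator by L-1 before dividing, is a common way to avoid the modulus:

#include <stdio.h>

/* variant from the post: round down, then add 1 iff there is a remainder */
size_t round_up_mod(size_t N, size_t L) {
    return (N / L + (size_t)!!(N % L)) * L;
}

/* common alternative: bias the numerator so the round-down lands on ceil(N/L) */
size_t round_up_bias(size_t N, size_t L) {
    return ((N + L - 1) / L) * L;
}

int main(void) {
    printf("%zu %zu\n", round_up_mod(1000, 256), round_up_bias(1000, 256)); /* 1024 1024 */
    printf("%zu %zu\n", round_up_mod(1024, 256), round_up_bias(1024, 256)); /* 1024 1024 */
    return 0;
}

Both variants agree whether or not N is a multiple of L; the bias form simply avoids the modulus the post calls slow.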
GPUOpen:
🧐 AMD Radeon GPU Analyzer (RGA) is our performance analysis tool for #DirectX, #Vulkan, SPIR-V, #OpenGL, & #OpenCL.
✨ As well as updates for AMD RDNA 4, there are enhancements to the ISA view UI, which uses the same updated UI as RGP ✨
More detail: https://gpuopen.com/learn/rdna-cdna-architecture-disassembly-radeon-gpu-analyzer-2-12/?utm_source=mastodon&utm_medium=social&utm_campaign=rdts
(🧵 5/7)
Dr. Moritz Lehmann:
Here's my #OpenCL implementation: https://github.com/ProjectPhysX/FluidX3D/blob/master/src/kernel.cpp#L1924-L1993
Dr. Moritz Lehmann:
#FluidX3D #CFD v3.2 is out! I've implemented the much requested #GPU summation for object force/torque; it's ~20x faster than #CPU #multithreading. 🖖😋
Horizontal sum in #OpenCL was a nice exercise - first local memory reduction and then hardware-supported atomic floating-point add in VRAM, in a single-stage kernel. Hammering atomics isn't too bad as each of the ~10-340 workgroups dispatched at a time does only a single atomic add.
Also improved volumetric #raytracing!
https://github.com/ProjectPhysX/FluidX3D/releases/tag/v3.2
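A minimal OpenCL C sketch of the single-stage pattern described above (not the actual FluidX3D kernel): each workgroup reduces its values in local memory, then one work-item per workgroup adds the partial sum to an accumulator in VRAM. Hardware floating-point atomics need the cl_ext_float_atomics extension, so this sketch falls back to the well-known atomic_cmpxchg emulation; WG_SIZE is assumed to match the local size used at enqueue time.

#define WG_SIZE 64 // must equal the local work size passed to clEnqueueNDRangeKernel

// emulated atomic float add via compare-and-swap on the bit pattern;
// with cl_ext_float_atomics, a native hardware atomic add would replace this
void atomic_add_f(volatile global float* addr, const float val) {
    union { uint u; float f; } old_val, new_val;
    do {
        old_val.f = *addr;
        new_val.f = old_val.f + val;
    } while(atomic_cmpxchg((volatile global uint*)addr, old_val.u, new_val.u) != old_val.u);
}

kernel void sum_to_scalar(const global float* in, volatile global float* result, const uint n) {
    local float scratch[WG_SIZE];
    const uint gid = get_global_id(0);
    const uint lid = get_local_id(0);
    scratch[lid] = (gid < n) ? in[gid] : 0.0f; // padded work-items contribute 0
    barrier(CLK_LOCAL_MEM_FENCE);
    for(uint s = WG_SIZE / 2u; s > 0u; s >>= 1u) { // tree reduction in local memory
        if(lid < s) scratch[lid] += scratch[lid + s];
        barrier(CLK_LOCAL_MEM_FENCE);
    }
    if(lid == 0u) atomic_add_f(result, scratch[0]); // one global atomic per workgroup
}

The accumulator pointed to by result must be zeroed on the host before the kernel is enqueued.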
Dr. Moritz Lehmann:
My OpenCL-Benchmark now uses the dp4a instruction on supported hardware (#Nvidia Pascal, #Intel #Arc, #AMD RDNA, or newer) to benchmark INT8 throughput.
dp4a is not exposed in #OpenCL C, but can still be used via inline PTX assembly and compiler pattern recognition. Even Nvidia's compiler will turn the emulation implementation into dp4a, but in some cases does so with a bunch of unnecessary shifts/permutations on the inputs, so it's better to use inline PTX directly. 🖖🧐
https://github.com/ProjectPhysX/OpenCL-Benchmark/releases/tag/v1.8
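A sketch of that pattern, with assumptions spelled out: the USE_NV_INLINE_PTX switch is mine, the inline-PTX branch compiles only with Nvidia's OpenCL compiler, and the fallback writes the packed INT8 dot product out explicitly so that other compilers (or Nvidia's pattern matcher, as the post notes) can map it to dp4a.

// dot product of two packed int8x4 values a and b, accumulated onto c
int dp4a(const int a, const int b, const int c) {
#ifdef USE_NV_INLINE_PTX // hypothetical switch; define via -D only when targeting Nvidia
    int d;
    asm("dp4a.s32.s32 %0, %1, %2, %3;" : "=r"(d) : "r"(a), "r"(b), "r"(c));
    return d;
#else // portable emulation on the unpacked 8-bit lanes
    const char4 a4 = as_char4(a);
    const char4 b4 = as_char4(b);
    return c + a4.x*b4.x + a4.y*b4.y + a4.z*b4.z + a4.w*b4.w;
#endif
}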
Alauddin Maulana Hirzan 💻:
Other things I have tested with FreeBSD: OpenCL with Discrete GPU via PyOpenCL lib
#FreeBSD #OpenCL #PyOpenCL
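For reference, a minimal C sketch of the kind of check that PyOpenCL's get_platforms() makes a one-liner: enumerate the platforms the ICD loader can see and print their devices. Error handling is minimal, and the build flags are assumptions about where the OpenCL headers and library are installed.

#define CL_TARGET_OPENCL_VERSION 120
#include <stdio.h>
#include <CL/cl.h>

// build, e.g.: cc list_cl.c -I/usr/local/include -L/usr/local/lib -lOpenCL
int main(void) {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    if(clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS) return 1;
    for(cl_uint p = 0; p < num_platforms; p++) {
        char pname[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(pname), pname, NULL);
        printf("Platform %u: %s\n", p, pname);
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if(clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &num_devices) != CL_SUCCESS) continue;
        for(cl_uint d = 0; d < num_devices; d++) {
            char dname[256];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(dname), dname, NULL);
            printf("  Device %u: %s\n", d, dname);
        }
    }
    return 0;
}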
Alauddin Maulana Hirzan 💻:
Aaah, nothing can beat the feel of a beefed-up FreeBSD with a working dGPU.
1. OpenCL ✓
2. OBS RenderD129 ✓
Thanks to @vermaden for pointing out my fault.
#FreeBSD #amdgpu #opencl
Benjamin Carr, Ph.D. 👨🏻‍💻🧬:
#NVIDIA #GeForce #RTX5090 #Linux #GPU Compute Performance #Benchmarks
Taking the geometric mean across 60+ benchmarks of #CUDA / #OptiX / #OpenCL / #Vulkan compute, the GeForce RTX 5090 delivered 1.42x the performance of the GeForce #RTX4090. On performance-per-watt, the RTX 5090 tended to deliver power efficiency similar to the RTX 4080/4090 graphics cards.
The GeForce RTX 5090 Founders Edition ran cooler than many of the other Founders Edition cards tested.
https://www.phoronix.com/review/nvidia-geforce-rtx5090-linux
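As a side note on the aggregation mentioned above: a geometric mean of per-benchmark speedup ratios is the n-th root of their product. A tiny C sketch (the sample ratios are invented, not Phoronix data):

#include <stdio.h>
#include <math.h>

// geometric mean computed in log space to keep the running product from overflowing
double geo_mean(const double* x, const int n) {
    double log_sum = 0.0;
    for(int i = 0; i < n; i++) log_sum += log(x[i]);
    return exp(log_sum / n);
}

int main(void) {
    const double speedups[4] = { 1.2, 1.5, 1.4, 1.3 }; // hypothetical per-test ratios
    printf("geometric mean speedup: %.2fx\n", geo_mean(speedups, 4)); // prints 1.35x
    return 0;
}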
Dr. Moritz Lehmann:
@BenjaminHCCarr Another article on #GPU code portability where people put their heads in the sand and pretend very hard that #OpenCL doesn't exist...
OpenCL solved #GPGPU cross-compatibility 16 years ago, and today it is in better shape than ever.
HGPU group:
A comparison of HPC-based quantum computing simulators using Quantum Volume
#CUDA #OpenCL #QuantumComputing #Overview
https://hgpu.org/?p=29643
Ian Brown:
Finally found the downtime to complete this fantastic survey of managed runtimes (e.g. the JVM) and heterogeneous hardware (e.g. CPUs and GPUs or FPGAs) by @snatverk@mastodon.online, @thanos_str@mastodon.sdf.org, and @kotselidis@mastodon.online.
Required reading for those who want a look at the future of software development.
#TornadoVM #JOCL #OpenCL #CUDA
(comment on "Programming Heterogeneous Hardware via Managed Runtime Systems")
Dr. Moritz Lehmann:
#Intel Arc B580 #OpenCL specs:
- Windows: https://opencl.gpuinfo.org/displayreport.php?id=4564
- Linux: https://opencl.gpuinfo.org/displayreport.php?id=4562