A Cult AI Computer’s Boom and Bust:
I am aware that CUDA isn’t a language. But
Ask HN: How to learn CUDA to professional level
Discussion: https://news.ycombinator.com/item?id=44216123
Open source at last: Warp, Nvidia's Python framework for CUDA
Following criticism from the community, Nvidia has decided to switch to the Apache 2 license with the Warp framework.
Breaking the AI Infrastructure Matrix: Modular's Revolutionary Approach
As AI development accelerates, the complexities of traditional frameworks like CUDA are becoming increasingly burdensome. Modular aims to redefine AI compute by democratizing access and simplifying th...
https://news.lavx.hu/article/breaking-the-ai-infrastructure-matrix-modular-s-revolutionary-approach
Like many #Linux users, I use that #OS to extend old #PC #hardware lifecycle. My #Dell #Optiplex 9020 includes a Pascal based #GPU.
My issue is I cannot install a more modern GPU because of physical constraints with the chassis itself.
>Maxwell, Pascal, and Volta architectures are now feature-complete with no further enhancements planned. ... Users should plan migration ..., as future toolkits will be unable to target [these] GPUs. 1/
Kuwait has officially declared cryptocurrency mining "illegal and unauthorized"; energy consumption in the area dropped 55% in a single week.
.
#kuwait #energy #electricity #MiningNews #mining
.
So many #cuda that should go to scientific research, used instead for #denaro (money).
.
#foldingathome #boinc #Berkley #scientificresearch
Faster sorting with SIMD CUDA intrinsics (2024)
Link: https://winwang.blog/posts/bitonic-sort/
Discussion: https://news.ycombinator.com/item?id=43898717
There is a certain irony in observing top of the line chatbots that run on #GPUs fail to program the #GPU using @openmp_arb (it is even funnier seeing this play out in #CUDA but the results of the interaction are #NSFW)
A one-hour interview with AMD's Lisa Su by @karaswisher, obviously mostly about AI; dick jokes and all, but not a single question on why ROCm falls so far short of Nvidia's CUDA!
Probably the only question relevant to AMD and AI.
PyGraph: Robust Compiler Support for CUDA Graphs in PyTorch
Link: https://arxiv.org/abs/2503.19779
Discussion: https://news.ycombinator.com/item?id=43786514
CubeCL: GPU Kernels in Rust for CUDA, ROCm, and WGPU
Link: https://github.com/tracel-ai/cubecl
Discussion: https://news.ycombinator.com/item?id=43777731
FreeBSD CUDA drm-61-kmod
"Just going to test the current pkg driver, this will only take a second...", the old refrain goes. Surely, it will not punt away an hour or so of messing about in loader.conf on this EPYC system...
- Here are some notes to back-track a botched/crashing driver kernel panic situation.
- Standard stuff, nothing new over the years here with loader prompt.
- A few directives are specific to this system, though may provide a useful general reference.
- The server has an integrated GPU in addition to nvidia pcie, so a module blacklist for the "amdgpu" driver is necessary (EPYC 4564P).
Step 1: during boot-up, "exit to loader prompt"
Step 2: set/unset the values as needed at the loader prompt
unset nvidia_load
unset nvidia_modeset_load
unset hw.nvidiadrm.modeset
set module_blacklist=amdgpu,nvidia,nvidia_modeset
set machdep.hyperthreading_intr_allowed=0
set verbose_loading=YES
set boot_verbose=YES
set acpi_dsdt_load=YES
set audit_event_load=YES
set kern.consmsgbuf_size=1048576
set loader_menu_title=waffenschwester
boot
Step 3: login to standard tty shell
Step 4: edit /boot/loader.conf (and maybe .local)
Step 5: edit /etc/rc.conf (and maybe .local)
Step 6: debug the vast output from kern.consmsgbuf logs
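Once the driver is known-good again, the loader-prompt session above can be persisted in /boot/loader.conf so the next boot does not need manual intervention. A minimal sketch, assembled only from the tunables used in the steps above (verify names against loader.conf(5) on the system; the nvidia values assume the pkg driver is installed and working):

```
# /boot/loader.conf -- sketch based on the loader-prompt session above

# Load the Nvidia kernel modules at boot (re-enable only after debugging)
nvidia_load="YES"
nvidia_modeset_load="YES"
hw.nvidiadrm.modeset=1

# Keep the integrated GPU's driver out of the way (EPYC 4564P iGPU)
module_blacklist="amdgpu"

# Larger console message buffer for post-panic debugging (Step 6)
kern.consmsgbuf_size=1048576

# Verbose boot output while the setup is still being shaken out
verbose_loading="YES"
boot_verbose="YES"
```

During recovery, the `unset`/`set` lines at the loader prompt simply override these persisted values for a single boot; the file itself stays untouched until edited in Step 4.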
Rust CUDA Project
Link: https://github.com/Rust-GPU/Rust-CUDA
Discussion: https://news.ycombinator.com/item?id=43654881
Nvidia adds native Python support to CUDA
Link: https://thenewstack.io/nvidia-finally-adds-native-python-support-to-cuda/
Discussion: https://news.ycombinator.com/item?id=43581584
Ask HN: Why hasn't AMD made a viable CUDA alternative?
Discussion: https://news.ycombinator.com/item?id=43547309