1 point

At the moment I just don’t. I got koboldcpp to run through distrobox / BoxBuddy, but I can’t get it to compile with ROCm, so I can only use CPU generation, which is abysmally slow. I might go back to NovelAI when they release their new model if I can’t find a solution.

1 point

What card do you use? I have a 6700 XT, and getting anything with ROCm running for me requires passing the HSA_OVERRIDE_GFX_VERSION=10.3.0 environment variable to the related process, otherwise it just refuses to run properly. I wonder if it might be something similar for you too?
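For reference, this is roughly what it looks like on my end (the koboldcpp launch line is just a placeholder, swap in however you normally start it):

```
# Tell the ROCm runtime to treat the card as gfx1030; this is what my 6700 XT needs.
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Then launch whatever needs ROCm from the same shell, e.g.:
python koboldcpp.py model.gguf
```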

1 point

5500 here. I can’t use any recent ROCm version, because the GFX override I use is for a card that apparently has a couple more instructions, and the newer kernels instantly crash with an illegal-operation exception.
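For anyone else in the same boat, you can compare what the runtime actually detects against what you’re overriding it to (the override value itself depends on which card you’re spoofing, so treat this as a sketch):

```
# Show which ISA the ROCm runtime reports for the card;
# the RX 5500 series shows up as gfx1012.
rocminfo | grep -i "gfx"

# Overriding to a different gfx target runs kernels built for that target,
# and if they use instructions the real card lacks, you get exactly this
# kind of illegal-operation crash.
# export HSA_OVERRIDE_GFX_VERSION=<whatever target your override maps to>
```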

I found a build someone made, buried in a Docker image, and it does indeed work for the 5500 without the override, but it uses generic code for all the kernels and is about 4x slower than the ancient version.

What’s ultimately the worst thing about this isn’t that AMD doesn’t support every card for ROCm – it’s that the support is all or nothing. There’s no “we won’t spend time on this, but it passes the automated tests, so ship it” tier. It’s just “oh, the new kernels broke that old card? Tough luck, you don’t get new kernels.”

So in the meantime I’m living with the occasional (every couple of days?) freeze when using ROCm, because I can’t reasonably upgrade. And it’s not just that the driver crashes and the kernel tries to restart it; the whole card needs a reset before it will do anything but display a VGA console.
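For what it’s worth, the kernel log is where those resets show up, if anyone wants to check whether they’re hitting the same thing:

```
# Watch for amdgpu errors / GPU reset messages in the kernel log.
sudo dmesg --follow | grep -i amdgpu
```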

1 point

Yeah, I’m definitely not a fan of how AMD handles ROCm - there are so many weird cases of “Well, this card should work with ROCm, but… [insert some weird quirk you have to work around, like the one I mentioned, or what you’ve run into]”.

On the userspace/consumer side I enjoy AMD, but I fully understand why a lot of devs don’t make use of ROCm, and why Nvidia has such a tight hold on the GPU compute world with CUDA.

1 point

6650 XT. Honestly, no idea. When I run make LLAMA_HIPBLAS=1 GPU_TARGETS=gfx1032 -j$(nproc) for koboldcpp in the Fedora distrobox, it throws a bunch of

fatal error: 'hip/hip_fp16.h' file not found
   36 | #include <hip/hip_fp16.h>

errors, and koboldcpp doesn’t offer an option to use Vulkan.
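Edit: since that header comes from the HIP development packages, my next step is to double-check they’re actually installed inside the box. Something like this is what I’m going to try (package names from a quick search, no idea yet if they’re exactly right):

```
# Inside the Fedora distrobox, pull in the HIP development bits.
sudo dnf install hipcc rocm-hip-devel hipblas-devel rocblas-devel

# Fedora puts ROCm under /usr rather than /opt/rocm, so the build may also
# need to be pointed there (the exact variable depends on the koboldcpp Makefile):
make LLAMA_HIPBLAS=1 GPU_TARGETS=gfx1032 ROCM_PATH=/usr -j$(nproc)
```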

1 point

Ah, strange. I don’t suppose you specifically need a Fedora container? If not, I’ve been using this Ubuntu-based distrobox container recipe for anything that requires ROCm, and it has worked flawlessly for me.
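If the link gives you trouble, the general shape of it is a box built from an Ubuntu ROCm image, something like this (just an illustration, not the exact recipe, and the image tag is only an example):

```
# Create an Ubuntu-based box from AMD's ROCm development image.
distrobox create --name rocm-box --image docker.io/rocm/dev-ubuntu-22.04:latest

# Enter it; distrobox shares /dev with the host, so /dev/kfd and /dev/dri
# should already be visible inside.
distrobox enter rocm-box

# The card should show up in here:
rocminfo | grep -i "gfx"
```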

If that still doesn’t work (I haven’t actually tried out koboldcpp yet), and you’re willing to try something other than koboldcpp, then I’d recommend the text-generation-webui project, which supports a wide array of model types, including the GGUF type that koboldcpp uses. Then, if you really want to get deep into it, you can even pair it with SillyTavern (it’s purely a frontend for a bunch of different LLM backends; text-generation-webui is one of the supported ones)!
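Getting text-generation-webui going is pretty painless; roughly this (repo URL and script name from memory, so double-check against the project’s README):

```
# Grab text-generation-webui; its start script sets up its own Python
# environment and asks which GPU vendor you have on first run (pick AMD for ROCm).
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
./start_linux.sh

# Drop GGUF files into the models/ folder and pick them from the web UI.
```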

