LocalLLaMA
Mixel
Best method to use an AMD GPU for inference on Linux
So what is currently the best and easiest way to use an AMD GPU for inference? I own an RX 6700 XT and want to run a 13B model, maybe a SuperHOT variant, but I'm not sure if my VRAM is enough for that. Until now I've always stuck with llama.cpp since it's quite easy to set up. Does anyone have any suggestions?
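For context, the llama.cpp route described here can already use an AMD GPU through its ROCm/HIP backend. Below is a minimal sketch using the llama-cpp-python bindings, assuming they were installed with hipBLAS support; the model path is a placeholder, and the `HSA_OVERRIDE_GFX_VERSION` workaround is an assumption based on the RX 6700 XT (gfx1031) not being on ROCm's official support list.

```python
# Minimal sketch: llama.cpp inference on an AMD GPU via llama-cpp-python.
# Assumes the package was built with the ROCm/hipBLAS backend, e.g.:
#   CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
import os

# Assumption: the RX 6700 XT (gfx1031) isn't officially supported by ROCm,
# so spoofing gfx1030 is commonly needed. Must be set before the HIP
# runtime loads, i.e. before importing llama_cpp.
os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "10.3.0")

from llama_cpp import Llama

# Hypothetical path. A 4-bit quantized 13B model is roughly 7-8 GB on disk,
# so it should mostly fit in the 6700 XT's 12 GB of VRAM.
llm = Llama(
    model_path="./models/llama-13b.Q4_K_M.gguf",
    n_gpu_layers=-1,  # offload all layers; lower this if you run out of VRAM
    n_ctx=4096,
)

out = llm("Q: Can llama.cpp run on AMD GPUs?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```

If the full offload overflows VRAM (e.g. with a longer SuperHOT context), reducing `n_gpu_layers` to a fixed number splits the model between GPU and system RAM at some cost in speed.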