Iris Coleman
Sep 27, 2024 10:16
Ollama makes it simple to run Meta's Llama 3.2 model locally on AMD GPUs, offering support for both Linux and Windows systems.
Running large language models (LLMs) locally on AMD systems has become more accessible thanks to Ollama. This guide focuses on the latest Llama 3.2 model, published by Meta on September 25, 2024. Meta's Llama 3.2 goes small and multimodal with 1B, 3B, 11B, and 90B models. Here's how to run these models on various AMD hardware configurations, with a step-by-step installation guide for Ollama on both Linux and Windows operating systems on Radeon GPUs.
Supported AMD GPUs
Ollama supports a range of AMD GPUs, enabling their use for local inference on both newer and older models. The list of GPUs supported by Ollama is available here.
Installation and Setup Guide for Ollama
Linux
System Requirements:
- Ubuntu 22.04.4
- AMD GPU with the latest AMD ROCm™ software installed

Installation steps:
1. Install ROCm 6.1.3 following the provided instructions.
2. Install Ollama via a single command.
3. Download and run the Llama 3.2 model, as shown in the sketch below.
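A minimal sketch of steps 2 and 3, assuming Ollama's standard Linux install script and the default llama3.2 tag (ROCm itself is installed separately, per AMD's instructions):

# Install Ollama with its official one-line install script
curl -fsSL https://ollama.com/install.sh | sh

# Download (on first run) and start an interactive session with Llama 3.2;
# the default tag maps to the 3B text model
ollama run llama3.2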
Windows
System Requirements:
- Windows 10 or later
- A supported AMD GPU with the driver installed

For Windows, download and install Ollama from here. Once installed, open PowerShell and run the command shown below.
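A minimal sketch of the PowerShell step, again assuming the default llama3.2 tag:

# Download (if needed) and start an interactive Llama 3.2 session
ollama run llama3.2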
You can find the list of all models available from Ollama here.
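As an illustration, individual Llama 3.2 sizes are selected by tag; the tag names below are assumptions, so check Ollama's model library for the exact identifiers:

# Pull a specific size (tag assumed; verify in the model library)
ollama pull llama3.2:1b

# Run it interactively
ollama run llama3.2:1b

# List the models downloaded locally
ollama list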
Conclusion
Ollama's extensive support for AMD GPUs demonstrates the growing accessibility of running LLMs locally. From consumer-grade AMD Radeon™ RX graphics cards to high-end AMD Instinct™ accelerators, users have a wide range of options for running models like Llama 3.2 on their own hardware. This versatile approach to enabling modern LLMs across AMD's broad AI portfolio allows for greater experimentation, privacy, and customization in AI applications across various sectors.
Image source: Shutterstock