NVIDIA Enhances Llama 3.1 405B Performance with TensorRT Model Optimizer

Lawrence Jengar
Aug 29, 2024 16:10

NVIDIA’s TensorRT Model Optimizer significantly boosts the performance of Meta’s Llama 3.1 405B large language model on H200 GPUs.





Meta’s Llama 3.1 405B large language model (LLM) is reaching new levels of performance thanks to NVIDIA’s TensorRT Model Optimizer, according to the NVIDIA Technical Blog. The improvements have resulted in up to a 1.44x increase in throughput when running on NVIDIA H200 GPUs.

Excellent Llama 3.1 405B Inference Throughput with TensorRT-LLM

TensorRT-LLM has delivered exceptional inference throughput for Llama 3.1 405B since the model’s release. This was achieved through various optimizations, including in-flight batching, KV caching, and optimized attention kernels. These techniques have accelerated inference performance while maintaining lower-precision compute.
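
As a rough illustration of how these optimizations are driven in practice, the sketch below uses TensorRT-LLM’s high-level Python LLM API to generate text from a Llama 3.1 405B checkpoint with tensor parallelism across eight GPUs. The checkpoint name, prompt, and sampling settings are placeholders, and the parameter names follow the LLM API as of recent releases; in-flight batching and KV caching are handled by the runtime rather than configured explicitly here.

```python
from tensorrt_llm import LLM, SamplingParams

# Illustrative checkpoint identifier; point this at a local or Hugging Face
# Llama 3.1 405B checkpoint you have access to.
llm = LLM(model="meta-llama/Llama-3.1-405B-Instruct", tensor_parallel_size=8)

prompts = ["Summarize the benefits of in-flight batching in one sentence."]
sampling = SamplingParams(temperature=0.7, max_tokens=128)

# generate() schedules requests with in-flight batching and a paged KV cache
# under the hood; each result carries the generated text for its prompt.
for result in llm.generate(prompts, sampling):
    print(result.outputs[0].text)
```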

TensorRT-LLM added support for the official Llama FP8 quantization recipe, which calculates static and dynamic scaling factors to preserve maximum accuracy. Additionally, user-defined kernels such as the matrix multiplications from FBGEMM are optimized via plug-ins inserted into the network graph at compile time.
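
To make the idea of scaling factors concrete, here is a minimal, self-contained PyTorch sketch (not the official recipe) of per-tensor static FP8 (E4M3) quantization: the scale maps a tensor’s observed absolute maximum onto the FP8 representable range (about 448 for E4M3), while dynamic scaling recomputes the maximum at runtime instead of deriving it from calibration data.

```python
import torch

E4M3_MAX = 448.0  # largest finite value representable in FP8 E4M3

def fp8_scale(t: torch.Tensor) -> torch.Tensor:
    # Static per-tensor scale: observed abs-max mapped onto the E4M3 range.
    return t.abs().max() / E4M3_MAX

x = torch.randn(4, 4) * 10.0
scale = fp8_scale(x)
x_fp8 = (x / scale).to(torch.float8_e4m3fn)  # quantize (requires PyTorch 2.1+)
x_ref = x_fp8.to(torch.float32) * scale      # dequantize for comparison
print(f"scale={scale.item():.4f}, max abs error={(x - x_ref).abs().max().item():.4f}")
```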

Boosting Performance Up to 1.44x with TensorRT Model Optimizer

NVIDIA’s custom FP8 post-training quantization (PTQ) recipe, available through the TensorRT Model Optimizer library, improves Llama 3.1 405B throughput and reduces latency without sacrificing accuracy. The recipe incorporates FP8 KV cache quantization and self-attention static quantization, reducing inference compute overhead.
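
The sketch below shows, in broad strokes, how such a post-training FP8 recipe is applied with the TensorRT Model Optimizer (nvidia-modelopt) Python package: load a checkpoint, run a small calibration loop, and call the quantization API. The model name and calibration text are placeholders, and the configuration constant FP8_DEFAULT_CFG is an assumption about the library’s naming; consult the Model Optimizer documentation for the configuration that matches the full recipe described here, including FP8 KV cache quantization.

```python
import torch
import modelopt.torch.quantization as mtq
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-3.1-405B-Instruct"  # illustrative checkpoint name

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# A real calibration set is typically a few hundred representative samples;
# a single sentence is used here only to keep the sketch short.
calib_texts = ["TensorRT Model Optimizer collects activation statistics during calibration."]

def forward_loop(m):
    # Push calibration data through the model so static FP8 scaling factors
    # for weights, activations, and the KV cache can be collected.
    with torch.no_grad():
        for text in calib_texts:
            inputs = tokenizer(text, return_tensors="pt").to(m.device)
            m(**inputs)

# Apply the FP8 post-training quantization config (constant name assumed).
model = mtq.quantize(model, mtq.FP8_DEFAULT_CFG, forward_loop)
```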

Table 1 shows the maximum throughput performance, with significant improvements across various input and output sequence lengths on an 8-GPU HGX H200 system. The system features eight NVIDIA H200 Tensor Core GPUs with 141 GB of HBM3e memory each and four NVLink Switches, providing 900 GB/s of GPU-to-GPU bandwidth.




Maximum Throughput Performance – Output Tokens/Second, 8 NVIDIA H200 Tensor Core GPUs

Input | Output Sequence Lengths      2,048 | 128    32,768 | 2,048    120,000 | 2,048
TensorRT Model Optimizer FP8         463.1          320.1             71.5
Official Llama FP8 Recipe            399.9          230.8             49.6
Speedup                              1.16x          1.39x             1.44x

Table 1. Maximum throughput performance of Llama 3.1 405B with NVIDIA internal measurements

Similarly, Table 2 presents the minimum latency performance using the same input and output sequence lengths.




Batch Size = 1 Performance – Output Tokens/Second, 8 NVIDIA H200 Tensor Core GPUs

Input | Output Sequence Lengths      2,048 | 128    32,768 | 2,048    120,000 | 2,048
TensorRT Model Optimizer FP8         49.6           44.2              27.2
Official Llama FP8 Recipe            37.4           33.1              22.8
Speedup                              1.33x          1.33x             1.19x

Table 2. Minimum latency performance of Llama 3.1 405B with NVIDIA internal measurements

These results indicate that H200 GPUs with TensorRT-LLM and TensorRT Model Optimizer deliver superior performance in both latency-optimized and throughput-optimized scenarios. The TensorRT Model Optimizer FP8 recipe also achieved accuracy comparable to the official Llama 3.1 FP8 recipe on the Massively Multitask Language Understanding (MMLU) and MT-Bench benchmarks.

Fitting Llama 3.1 405B on Just Two H200 GPUs with INT4 AWQ

For developers with hardware resource constraints, the INT4 AWQ technique in TensorRT Model Optimizer compresses the model, allowing Llama 3.1 405B to fit on just two H200 GPUs. This method significantly reduces the required memory footprint by compressing the weights down to 4-bit integers while encoding activations in FP16.
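
Building on the FP8 sketch above (the same loaded model and calibration forward_loop), switching to weight-only INT4 AWQ is essentially a change of quantization configuration. The INT4_AWQ_CFG constant name is an assumption about the Model Optimizer’s naming conventions, so verify it against the library documentation before relying on it.

```python
import modelopt.torch.quantization as mtq

# `model` and `forward_loop` are assumed to be defined as in the FP8 sketch.
# AWQ calibrates per-group weight scales so weights can be stored as 4-bit
# integers while activations stay in FP16, roughly quartering weight memory
# relative to FP16 and letting the 405B model fit on two H200 GPUs.
model = mtq.quantize(model, mtq.INT4_AWQ_CFG, forward_loop)
```

In the usual workflow, the quantized model is then exported as a TensorRT-LLM checkpoint and compiled into an engine before serving.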

Tables 4 and 5 show the maximum throughput and minimum latency performance measurements, demonstrating that the INT4 AWQ method provides accuracy scores comparable to Meta’s official Llama 3.1 FP8 recipe.




Maximum Throughput Performance – Output Tokens/Second, 2 NVIDIA H200 Tensor Core GPUs

Input | Output Sequence Lengths      2,048 | 128    32,768 | 2,048    60,000 | 2,048
TensorRT Model Optimizer INT4 AWQ    75.6           28.7              16.2

Table 4. Maximum throughput performance of Llama 3.1 405B with NVIDIA internal measurements




Batch Size = 1 Performance – Output Tokens/Second, 2 NVIDIA H200 Tensor Core GPUs

Input | Output Sequence Lengths      2,048 | 128    32,768 | 2,048    60,000 | 2,048
TensorRT Model Optimizer INT4 AWQ    21.6           18.7              12.8

Table 5. Minimum latency performance of Llama 3.1 405B with NVIDIA internal measurements

NVIDIA’s advances in TensorRT Model Optimizer and TensorRT-LLM are paving the way for improved performance and efficiency in running large language models like Llama 3.1 405B. These improvements offer developers greater flexibility and cost-efficiency, whether they have extensive hardware resources or more constrained environments.

Image source: Shutterstock


