Llama 3.1 405B Instruct
accounts/fireworks/models/llama-v3p1-405b-instruct
LLM · Chat
On-demand deployments
On-demand deployments let you run Llama 3.1 405B Instruct on dedicated GPUs using Fireworks' high-performance serving stack, with high reliability and no rate limits.
See the On-demand deployments guide for details.
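Whether you use the shared serverless endpoint or an on-demand deployment, requests reference the model path shown above. Below is a minimal sketch of a chat completion request in Python; the endpoint URL, the FIREWORKS_API_KEY environment variable, and the request parameters are assumptions for illustration, so check the Fireworks docs for the authoritative values.

```python
# Minimal sketch: query Llama 3.1 405B Instruct through an
# OpenAI-compatible chat completions endpoint. The endpoint URL and
# the FIREWORKS_API_KEY environment variable are assumptions, not
# confirmed by this page.
import os
import requests

API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"  # assumed endpoint
MODEL = "accounts/fireworks/models/llama-v3p1-405b-instruct"

def chat(prompt: str) -> str:
    # Send a single-turn chat request and return the model's reply text.
    response = requests.post(
        API_URL,
        headers={
            "Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}",
            "Content-Type": "application/json",
        },
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 256,
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("Summarize Llama 3.1 405B Instruct in one sentence."))
```

The same request shape should apply to a dedicated deployment once it is running; the On-demand deployments guide covers how to create one and route traffic to it.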