
SIMBA + Together AI

Hosted open-source LLMs at competitive pricing.

Together AI API docs

Together AI hosts a wide range of open-source LLMs with an OpenAI-compatible API. Use it as your agent's LLM backend when you want open-weight models without running infrastructure.
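Because the API is OpenAI-compatible, calling a Together-hosted model is a plain HTTPS request. A minimal sketch using `requests` (the model ID shown is an example; substitute any model from Together's catalog):

```python
import os
import requests

# Together AI's OpenAI-compatible base URL.
TOGETHER_BASE_URL = "https://api.together.xyz/v1"


def build_chat_request(model, messages, max_tokens=256):
    """Build the JSON body for the OpenAI-compatible /chat/completions endpoint."""
    return {
        "model": model,
        "messages": messages,
        "max_tokens": max_tokens,
    }


def chat(api_key, model, messages):
    """Send a chat completion request and return the assistant's reply text."""
    resp = requests.post(
        f"{TOGETHER_BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json=build_chat_request(model, messages),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    reply = chat(
        os.environ["TOGETHER_API_KEY"],
        "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",  # example model ID
        [{"role": "user", "content": "Say hello in one sentence."}],
    )
    print(reply)
```

The same request shape works with any OpenAI-compatible client by pointing its base URL at `https://api.together.xyz/v1`.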

What agents can do

  • Llama, Mixtral, Qwen, DeepSeek, and more
  • OpenAI-compatible chat API
  • Dedicated endpoints for enterprise

Common workflows

Open-weight agents

For teams that prefer open-weight foundation models for portability or cost control.

Setup

  1. Create a Together API key.
  2. Add the Together integration in SIMBA.
  3. Set the agent's LLM provider to Together.
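Before wiring the key into SIMBA, it can be worth verifying it against Together's model catalog. A small sketch, assuming the `/models` endpoint (the response-shape handling is defensive, since OpenAI-style APIs sometimes wrap the list in a `data` field):

```python
import os
import requests


def extract_model_ids(payload):
    """Normalize a model-list response: accept either a bare JSON array
    or an OpenAI-style {"data": [...]} wrapper."""
    items = payload.get("data", []) if isinstance(payload, dict) else payload
    return sorted(m["id"] for m in items)


def list_models(api_key):
    """Fetch available model IDs from Together; raises on a bad key."""
    resp = requests.get(
        "https://api.together.xyz/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    resp.raise_for_status()
    return extract_model_ids(resp.json())


if __name__ == "__main__":
    for model_id in list_models(os.environ["TOGETHER_API_KEY"]):
        print(model_id)
```

A key that lists models successfully is ready to paste into the SIMBA integration settings.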

Frequently asked questions

Which models are best for voice agents?

Llama 3.1 70B Instruct and Qwen 2.5 72B are strong defaults. For latency, Llama 3.1 8B is notably faster.

Connect Together AI in the dashboard

Bring your own credentials. SIMBA stores them server-side and your agents call Together AI during conversations.
