SIMBA + SambaNova Cloud
Very high-throughput LLM inference on SambaNova hardware.
SambaNova Cloud API docs
SambaNova Cloud offers high tokens-per-second throughput on open-weight models. It is a good fit for agents with dense turn-taking, where token throughput drives perceived latency.
What agents can do
- High-throughput open-source model inference
- OpenAI-compatible API
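Because the API is OpenAI-compatible, an agent can send it the standard chat-completions request shape. A minimal sketch of building such a request body (the base URL and model name here are placeholders, not documented SambaNova values):

```python
# Hypothetical values -- check the SambaNova Cloud docs for the
# real base URL and model identifiers.
BASE_URL = "https://api.sambanova.example/v1"  # placeholder
MODEL = "open-weight-model"                    # placeholder

def build_chat_request(messages):
    """Build an OpenAI-compatible /chat/completions request body."""
    return {
        "model": MODEL,
        "messages": messages,
        # Streaming lets the agent surface tokens as they arrive,
        # which is where high throughput pays off.
        "stream": True,
    }

body = build_chat_request([{"role": "user", "content": "Hello"}])
```

Any OpenAI-style client library should work by pointing its base URL at the SambaNova endpoint instead of OpenAI's.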
Common workflows
High-turn-density agents
Back-and-forth conversations where throughput matters more than first-token latency.
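A high-turn-density agent keeps a growing message history and replays it on every turn, so the model call dominates each loop iteration and tokens-per-second dominates perceived latency. A sketch of that loop, with the model call abstracted as a plain callable rather than a real network request:

```python
def run_turn(history, user_text, complete):
    """One conversational turn.

    `history` is the running OpenAI-style message list; `complete`
    is any callable mapping a message list to reply text (in a real
    agent, the SambaNova Cloud chat-completions call).
    """
    history.append({"role": "user", "content": user_text})
    reply = complete(history)
    history.append({"role": "assistant", "content": reply})
    return reply
```

Because the full history is resent each turn, faster generation directly shortens every round trip in the conversation.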
Setup
1. Request API access in SambaNova Cloud.
2. Add the SambaNova integration in SIMBA.
3. Point your agent's LLM provider at SambaNova.
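Step 3 typically amounts to giving your agent framework a base URL and an API key. A sketch of assembling that provider configuration, assuming the key lives in an environment variable (the variable name, base URL, and config shape are illustrative, not documented values):

```python
import os

def sambanova_provider_config():
    """Assemble provider settings for an OpenAI-compatible client.

    SAMBANOVA_API_KEY and the base URL are assumptions for this
    sketch -- check the SambaNova Cloud docs for the real values.
    """
    return {
        "base_url": "https://api.sambanova.example/v1",  # placeholder
        "api_key": os.environ.get("SAMBANOVA_API_KEY", ""),
    }
```

Keeping the key in the environment rather than in code mirrors how SIMBA itself holds the credential server-side.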
Frequently asked questions
Does SambaNova Cloud support function calling?
On recent model versions, yes. Check the SambaNova docs for the specific model you pick.
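In the OpenAI-compatible format, function calling means attaching a `tools` array of JSON-Schema function definitions to the request. A sketch of one such definition (the `get_weather` function is made up for illustration):

```python
def make_tool(name, description, parameters):
    """Wrap a function definition in the OpenAI-style tools entry."""
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": parameters,  # JSON Schema for the arguments
        },
    }

# Hypothetical example tool for illustration only.
weather_tool = make_tool(
    "get_weather",
    "Look up current weather for a city.",
    {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
)
```

Whether a given model actually emits tool calls depends on the model version, so verify against the SambaNova docs before relying on it.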
Connect SambaNova Cloud in the dashboard
Bring your own credentials. SIMBA stores them server-side, and your agents call SambaNova Cloud during conversations.