Supported LLMs
Chat and Commands
Cody supports a variety of cutting-edge large language models for use in Chat and Commands, allowing you to select the best model for your use case.
In newer versions of Sourcegraph Enterprise, starting from v5.6, it is even easier to add support for new models and providers; see Model Configuration for more information.
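For Enterprise instances on v5.6 or later, default models can be selected in the site configuration. The snippet below is only an illustrative sketch; the exact keys and model identifiers are assumptions, and the Model Configuration page remains the authoritative reference for the schema.

```jsonc
{
  "cody.enabled": true,
  "modelConfiguration": {
    // Illustrative defaults only; verify field names and model IDs
    // against the Model Configuration documentation.
    "defaultModels": {
      "chat": "anthropic::2023-06-01::claude-3.5-sonnet",
      "fastChat": "anthropic::2023-06-01::claude-3-haiku",
      "codeCompletion": "fireworks::v1::deepseek-coder-v2-lite-base"
    }
  }
}
```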
Provider | Model | Free | Pro | Enterprise
---|---|---|---|---
OpenAI | GPT-3.5 Turbo | ✅ | ✅ | ✅
OpenAI | GPT-4 | - | - | ✅
OpenAI | GPT-4 Turbo | - | ✅ | ✅
OpenAI | GPT-4o | - | ✅ | ✅
Anthropic | Claude 3 Haiku | ✅ | ✅ | ✅
Anthropic | Claude 3 Sonnet | ✅ | ✅ | ✅
Anthropic | Claude 3.5 Sonnet | ✅ | ✅ | ✅
Anthropic | Claude 3 Opus | - | ✅ | ✅
Mistral | Mixtral 8x7B | ✅ | ✅ | -
Mistral | Mixtral 8x22B | ✅ | ✅ | -
Ollama | variety | experimental | experimental | -
Google Gemini | 1.5 Pro | ✅ | ✅ | ✅ (Beta)
Google Gemini | 1.5 Flash | ✅ | ✅ | ✅ (Beta)
To use Claude 3 models (Opus and Sonnet) with Cody Enterprise, make sure you've upgraded your Sourcegraph instance to the latest version.
Autocomplete
Cody uses a set of models for autocomplete that are suited to low-latency use cases.
Provider | Model | Free | Pro | Enterprise
---|---|---|---|---
Fireworks.ai | DeepSeek-V2 | ✅ | ✅ | ✅
Fireworks.ai | StarCoder | - | - | ✅
Anthropic | Claude Instant | - | - | ✅
Google Gemini (Beta) | 1.5 Flash | - | - | ✅
Ollama (Experimental) | variety | ✅ | ✅ | -
The default autocomplete model for Cody Free and Pro users is DeepSeek-V2. Enterprise users get StarCoder as the default model.
See the Ollama setup instructions for running Cody with local models. For information on context token limits, see our token limits documentation.
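As a rough sketch of the experimental Ollama autocomplete setup in VS Code (the setting names and model shown here are assumptions; follow the Ollama setup instructions above for the exact steps):

```jsonc
{
  // Assumed settings: point Cody's autocomplete at a locally running Ollama server.
  "cody.autocomplete.advanced.provider": "experimental-ollama",
  "cody.autocomplete.experimental.ollamaOptions": {
    "url": "http://localhost:11434",
    "model": "deepseek-coder:6.7b"
  }
}
```

Ollama must be installed and the chosen model pulled locally (for example, `ollama pull deepseek-coder:6.7b`) before autocomplete requests will succeed.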