# Custom Models

Add custom providers and models (Ollama, vLLM, LM Studio, proxies) via `~/.indusagi/agent/models.json`.
## Table of Contents
- Basic Example
- Supported APIs
- Provider Configuration
- Model Configuration
- Overriding Built-in Providers
- OpenAI Compatibility
## Basic Example

```json
{
  "providers": {
    "ollama": {
      "baseUrl": "http://localhost:11434/v1",
      "api": "openai-completions",
      "models": [
        {
          "id": "llama-3.1-8b",
          "name": "Llama 3.1 8B (Local)",
          "contextWindow": 128000,
          "maxTokens": 32000
        }
      ]
    }
  }
}
```
The file is reloaded each time you open `/model`, so you can edit it mid-session; no restart is needed.
## Supported APIs

| API | Description |
|---|---|
| `openai-completions` | OpenAI Chat Completions (most compatible) |
| `openai-responses` | OpenAI Responses API |
| `anthropic-messages` | Anthropic Messages API |
| `google-generative-ai` | Google Generative AI |
Set `api` at the provider level (the default for all models) or at the model level (an override for a single model), as in the sketch below.
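For example, a provider could default to `openai-completions` while one model overrides it. A minimal sketch; the provider name and model IDs are placeholders, not real endpoints:

```json
{
  "providers": {
    "mixed-proxy": {
      "baseUrl": "https://proxy.example.com/v1",
      "api": "openai-completions",
      "models": [
        { "id": "gpt-style-model" },
        { "id": "claude-style-model", "api": "anthropic-messages" }
      ]
    }
  }
}
```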
## Provider Configuration

| Field | Description |
|---|---|
| `baseUrl` | API endpoint URL |
| `api` | API type (see above) |
| `apiKey` | API key (see value resolution below) |
| `headers` | Custom headers (see value resolution below) |
| `authHeader` | Set `true` to add `Authorization: Bearer <apiKey>` automatically |
| `models` | Array of model configurations |
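Putting these fields together, a provider that sends its key as a bearer token might look like the following sketch (the provider name, port, and model ID are placeholders; the `apiKey` value is resolved as described below):

```json
{
  "providers": {
    "vllm": {
      "baseUrl": "http://localhost:8000/v1",
      "api": "openai-completions",
      "apiKey": "MY_AINDUSAGI_KEY",
      "authHeader": true,
      "models": [
        { "id": "my-model" }
      ]
    }
  }
}
```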
### Value Resolution

The `apiKey` and `headers` fields support three formats:

- **Shell command**: a value starting with `!` is executed and its stdout is used:
  - `"apiKey": "!security find-generic-password -ws 'anthropic'"`
  - `"apiKey": "!op read 'op://vault/item/credential'"`
- **Environment variable**: the value of the named variable is used: `"apiKey": "MY_AINDUSAGI_KEY"`
- **Literal value**: used directly: `"apiKey": "sk-..."`
### Custom Headers

```json
{
  "providers": {
    "custom-proxy": {
      "baseUrl": "https://proxy.example.com/v1",
      "apiKey": "MY_AINDUSAGI_KEY",
      "api": "anthropic-messages",
      "headers": {
        "x-portkey-api-key": "PORTKEY_AINDUSAGI_KEY",
        "x-secret": "!op read 'op://vault/item/secret'"
      },
      "models": [...]
    }
  }
}
```
## Model Configuration

| Field | Required | Description |
|---|---|---|
| `id` | Yes | Model identifier |
| `name` | No | Display name |
| `api` | No | Override the provider's API for this model |
| `contextWindow` | No | Context window size in tokens |
| `maxTokens` | No | Maximum output tokens |
| `reasoning` | No | Whether the model supports extended thinking |
| `input` | No | Input types: `["text"]` or `["text", "image"]` |
| `cost` | No | `{"input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0}` |
## Overriding Built-in Providers

Route a built-in provider through a proxy without redefining models:

```json
{
  "providers": {
    "anthropic": {
      "baseUrl": "https://my-proxy.example.com/v1"
    }
  }
}
```

All built-in Anthropic models remain available, and existing OAuth or API key auth continues to work.
To fully replace a built-in provider with custom models, include the `models` array:

```json
{
  "providers": {
    "anthropic": {
      "baseUrl": "https://my-proxy.example.com/v1",
      "apiKey": "ANTHROPIC_AINDUSAGI_KEY",
      "api": "anthropic-messages",
      "models": [...]
    }
  }
}
```
## OpenAI Compatibility

For providers with partial OpenAI compatibility, use the `compat` field:

```json
{
  "providers": {
    "local-llm": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "compat": {
        "supportsUsageInStreaming": false,
        "maxTokensField": "max_tokens"
      },
      "models": [...]
    }
  }
}
```
| Field | Description |
|---|---|
| `supportsStore` | Provider supports the `store` field |
| `supportsDeveloperRole` | Use the `developer` vs `system` role |
| `supportsReasoningEffort` | Support for the `reasoning_effort` parameter |
| `supportsUsageInStreaming` | Supports `stream_options: { include_usage: true }` (default: `true`) |
| `maxTokensField` | Use `max_completion_tokens` or `max_tokens` |
| `openRouterRouting` | OpenRouter routing config passed to OpenRouter for model/provider selection |
Example:

```json
{
  "providers": {
    "openrouter": {
      "baseUrl": "https://openrouter.ai/api/v1",
      "apiKey": "OPENROUTER_AINDUSAGI_KEY",
      "api": "openai-completions",
      "models": [
        {
          "id": "openrouter/anthropic/claude-3.5-sonnet",
          "name": "OpenRouter Claude 3.5 Sonnet",
          "compat": {
            "openRouterRouting": {
              "order": ["anthropic"],
              "fallbacks": ["openai"]
            }
          }
        }
      ]
    }
  }
}
```
