Continue assistants are defined using the `config.yaml` specification. Assistants can be loaded from the Hub or locally:

- in the global `.continue` folder (`~/.continue` on Mac, `%USERPROFILE%\.continue` on Windows) within `.continue/assistants`. The name of the file will be used as the display name of the assistant, e.g. `My Assistant.yaml`
- in a workspace `/.continue/assistants` folder, with the same naming convention

`config.yaml` replaces `config.json`, which is deprecated. View the Migration Guide.

The top-level properties specify the `name`, `version`, and `config.yaml` `schema` for the assistant. Using the `config.yaml` syntax, a block consists of the same top-level properties as assistants (`name`, `version`, and `schema`), but only has ONE item under whichever block type it is.
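For instance, a standalone rules block might look like the following sketch (the name and rule text are placeholders):

```yaml
name: My Rules Block
version: 0.0.1
schema: v1
rules:
  - name: Testing rule
    rule: Always write unit tests for new functions
```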
Examples of blocks and assistants can be found on the Continue hub.
Assistants can either explicitly define blocks - see Properties below - or import and configure existing hub blocks.
Hub blocks and assistants are identified by a slug in the format `owner-slug/block-or-assistant-slug`, where an owner can be a user or organization (for example, if you want to use the OpenAI 4o model block, your slug would be `openai/gpt-4o`). These blocks are pulled from https://hub.continue.dev.
Blocks can be imported into an assistant by adding a `uses` clause under the block type. This can be alongside other `uses` clauses or explicit blocks of that type.
For example, the following assistant `models` section imports an Anthropic model and defines an Ollama DeepSeek one.
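A sketch of what this could look like, using a placeholder hub slug and a placeholder Ollama model tag:

```yaml
models:
  - uses: anthropic/claude-3.5-sonnet   # imported from the hub by slug
  - name: DeepSeek Coder                # defined explicitly and served by Ollama
    provider: ollama
    model: deepseek-coder:6.7b
    roles:
      - chat
      - edit
```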
Blocks can also be defined locally in a `.continue` folder. This folder can be located at either the root of your workspace (these will automatically be applied to all assistants when you are in that workspace) or in your home directory at `~/.continue` (these will automatically be applied globally).

Place your YAML files in the following folders:

Assistants:

- `.continue/assistants` - for assistants

Blocks:

- `.continue/rules` - for rules
- `.continue/models` - for models
- `.continue/prompts` - for prompts
- `.continue/context` - for context providers
- `.continue/docs` - for docs
- `.continue/data` - for data
- `.continue/mcpServers` - for MCP Servers
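For example, a local model block could be saved as `.continue/models/my-ollama-model.yaml` (the file name and model here are illustrative):

```yaml
name: My Ollama Model
version: 0.0.1
schema: v1
models:
  - name: Llama 3.1 8B
    provider: ollama
    model: llama3.1:8b
```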
Secrets templated into the config (e.g. `${{ secrets.SECRET_NAME }}`) can read secret values from:

- a `.env` file located in the global `.continue` folder (`~/.continue/.env`)
- a `.env` file located at the root of the current workspace

Values from these files can then be referenced as `secrets.SECRET_NAME`.
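For example, a model could read an API key from a secret named `MY_OPENAI_API_KEY` defined in one of those `.env` files (the secret name is a placeholder, and `apiKey` is assumed here as the property the provider reads):

```yaml
models:
  - name: GPT-4o
    provider: openai
    model: gpt-4o
    apiKey: ${{ secrets.MY_OPENAI_API_KEY }}   # resolved from a local .env file
```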
Properties of blocks imported with a `uses` clause can be overridden with `override`. For example, in an assistant `config.yaml`:
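A sketch, with a placeholder hub slug, overriding the imported model's roles:

```yaml
models:
  - uses: anthropic/claude-3.5-sonnet   # imported hub block
    override:
      roles:                            # replaces the roles defined by the imported block
        - chat
```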
## Properties

Below are details for each property that can be set in `config.yaml`.
All properties at all levels are optional unless explicitly marked as required.
The top-level properties in the `config.yaml` configuration file are:

- `name` (required)
- `version` (required)
- `schema` (required)
- `models`
- `context`
- `rules`
- `prompts`
- `docs`
- `mcpServers`
- `data`
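A minimal skeleton showing only the required top-level properties (the name is a placeholder; the remaining sections are optional block lists):

```yaml
name: My Assistant   # required
version: 0.0.1       # required
schema: v1           # required
# models, context, rules, prompts, docs, mcpServers, and data are optional block lists
```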
### name

The `name` property specifies the name of your project or configuration.

### version

The `version` property specifies the version of your project or configuration.

### schema

The `schema` property specifies the schema version used for the `config.yaml`, e.g. `v1`.
### models

The `models` section defines the language models used in your configuration. Models are used for functionalities such as chat, editing, and summarizing.

Properties:
- `name` (required): A unique name to identify the model within your configuration.
- `provider` (required): The provider of the model (e.g., `openai`, `ollama`).
- `model` (required): The specific model name (e.g., `gpt-4`, `starcoder`).
- `apiBase`: Can be used to override the default API base that is specified per model.
- `roles`: An array specifying the roles this model can fulfill, such as `chat`, `autocomplete`, `embed`, `rerank`, `edit`, `apply`, `summarize`. The default value is `[chat, edit, apply, summarize]`. Note that the `summarize` role is not currently used.
- `capabilities`: Array of strings denoting model capabilities, which will overwrite Continue's autodetection based on provider and model. Supported capabilities include `tool_use` and `image_input`.
- `maxStopWords`: Maximum number of stop words allowed, to avoid API errors with extensive lists.
- `promptTemplates`: Can be used to override the default prompt templates for different model roles. Valid values are `chat`, `edit`, `apply`, and `autocomplete`. The `chat` property must be a valid template name, such as `llama3` or `anthropic`.
- `chatOptions`: If the model includes role `chat`, these settings apply for Chat and Agent mode:
  - `baseSystemMessage`: Can be used to override the default system prompt for Chat mode.
- `embedOptions`: If the model includes role `embed`, these settings apply for embeddings:
  - `maxChunkSize`: Maximum tokens per document chunk. Minimum is 128 tokens.
  - `maxBatchSize`: Maximum number of chunks per request. Minimum is 1 chunk.
- `defaultCompletionOptions`: Default completion options for model settings.
  - `contextLength`: Maximum context length of the model, typically in tokens.
  - `maxTokens`: Maximum number of tokens to generate in a completion.
  - `temperature`: Controls the randomness of the completion. Values range from `0.0` (deterministic) to `1.0` (random).
  - `topP`: The cumulative probability for nucleus sampling.
  - `topK`: Maximum number of tokens considered at each step.
  - `stop`: An array of stop tokens that will terminate the completion.
  - `reasoning`: Boolean to enable thinking/reasoning for Anthropic Claude 3.7+ models.
  - `reasoningBudgetTokens`: Budget tokens for thinking/reasoning in Anthropic Claude 3.7+ models.
- `requestOptions`: HTTP request options specific to the model.
  - `timeout`: Timeout for each request to the language model.
  - `verifySsl`: Whether to verify SSL certificates for requests.
  - `caBundlePath`: Path to a custom CA bundle for HTTP requests.
  - `proxy`: Proxy URL for HTTP requests.
  - `headers`: Custom headers for HTTP requests.
  - `extraBodyProperties`: Additional properties to merge with the HTTP request body.
  - `noProxy`: List of hostnames that should bypass the specified proxy.
  - `clientCertificate`: Client certificate for HTTP requests.
    - `cert`: Path to the client certificate file.
    - `key`: Path to the client certificate key file.
    - `passphrase`: Optional passphrase for the client certificate key file.
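Putting several of these properties together, a `models` entry might look like the following sketch (the provider, model, and URL values are placeholders):

```yaml
models:
  - name: Local Qwen
    provider: ollama
    model: qwen2.5-coder:7b
    apiBase: http://localhost:11434   # override the default API base
    roles:
      - chat
      - edit
      - autocomplete
    defaultCompletionOptions:
      contextLength: 32768
      maxTokens: 2048
      temperature: 0.2
    requestOptions:
      timeout: 600
      headers:
        X-Example-Header: demo        # hypothetical custom header
```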
### context

The `context` section defines context providers, which supply additional information or context to the language models. Each context provider can be configured with specific parameters.

More information about usage/params for each context provider can be found here.
Properties:
- `provider` (required): The identifier or name of the context provider (e.g., `code`, `docs`, `web`)
- `name`: Optional name for the provider
- `params`: Optional parameters to configure the context provider's behavior.
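For example (the providers shown are common ones; `params` differ per provider, and `n` below is a hypothetical parameter):

```yaml
context:
  - provider: code
  - provider: docs
  - provider: web
    name: Web search
    params:
      n: 5   # hypothetical provider-specific parameter
```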
### rules

- `name` (required): A display name/title for the rule
- `rule` (required): The text content of the rule
- `globs` (optional): When files are provided as context that match this glob pattern, the rule will be included. This can be either a single pattern (e.g., `"**/*.{ts,tsx}"`) or an array of patterns (e.g., `["src/**/*.ts", "tests/**/*.ts"]`).
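For example, a rule scoped to TypeScript files might look like this (the rule text is a placeholder):

```yaml
rules:
  - name: TypeScript style
    rule: Prefer explicit return types on exported functions
    globs: "**/*.{ts,tsx}"
```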
### prompts
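The individual fields for a prompt are not listed above; assuming the `name`/`description`/`prompt` shape used by hub prompt blocks, an entry might look like:

```yaml
prompts:
  - name: check                           # assumed field names
    description: Review the selected code
    prompt: |
      Please review the selected code and point out any bugs or edge cases.
```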
### docs
- `name` (required): Name of the documentation site, displayed in dropdowns, etc.
- `startUrl` (required): Start page for crawling - usually root or intro page for docs
- `maxDepth`: Maximum link depth for crawling. Default `4`
- `favicon`: URL for site favicon (default is `/favicon.ico` from `startUrl`).
- `useLocalCrawling`: Skip the default crawler and only crawl using a local crawler.
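For example (the site name and URLs are placeholders):

```yaml
docs:
  - name: Continue Docs
    startUrl: https://docs.continue.dev
    maxDepth: 3
    favicon: https://docs.continue.dev/favicon.ico
```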
### mcpServers

- `name` (required): The name of the MCP server.
- `command` (required): The command used to start the server.
- `args`: An optional array of arguments for the command.
- `env`: An optional map of environment variables for the server process.
- `cwd`: An optional working directory to run the command in. Can be absolute or relative path.
- `connectionTimeout`: An optional connection timeout number to the server in milliseconds.
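For example, a server started with a local command might be configured like this (the command, package, and path are placeholders):

```yaml
mcpServers:
  - name: Filesystem
    command: npx
    args:
      - "-y"
      - "@modelcontextprotocol/server-filesystem"
      - "/path/to/workspace"
    connectionTimeout: 10000   # milliseconds
```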
### data

- `name` (required): The display name of the data destination
- `destination` (required): The destination/endpoint that will receive the data. Can be:
  - an HTTP endpoint that will receive event POST requests
  - a file URL to a local directory, in which case events are written to `.jsonl` files
- `schema` (required): the schema version of the JSON blobs to be sent. Options include `0.1.0` and `0.2.0`
- `events`: an array of event names to include. Defaults to all events if not specified.
- `level`: a pre-defined filter for event fields. Options include `all` and `noCode`; the latter excludes data like file contents, prompts, and completions. Defaults to `all`
- `apiKey`: API key to be sent with the request (Bearer header)
- `requestOptions`: Options for event POST requests. Same format as model `requestOptions`.
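For example, events could be posted to an internal endpoint (the URL and secret name are placeholders):

```yaml
data:
  - name: Team analytics
    destination: https://example.com/dev-data   # HTTP endpoint receiving POST requests
    schema: "0.2.0"
    level: noCode                               # exclude file contents, prompts, completions
    apiKey: ${{ secrets.ANALYTICS_API_KEY }}    # sent as a Bearer header
```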
## Example

Here is an example of a `config.yaml` configuration file:
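A sketch of a complete file combining the sections above (all names, slugs, and URLs are placeholders):

```yaml
name: My Assistant
version: 0.0.1
schema: v1

models:
  - uses: anthropic/claude-3.5-sonnet
  - name: DeepSeek Coder
    provider: ollama
    model: deepseek-coder:6.7b
    roles:
      - chat
      - edit

context:
  - provider: code
  - provider: docs

rules:
  - name: Testing rule
    rule: Always include unit tests for new code

docs:
  - name: Continue Docs
    startUrl: https://docs.continue.dev

mcpServers:
  - name: Filesystem
    command: npx
    args:
      - "-y"
      - "@modelcontextprotocol/server-filesystem"
```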
To reuse properties with YAML anchors, the version header `%YAML 1.1` is needed. Here's an example of a `config.yaml` configuration file using anchors:
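A sketch using a YAML 1.1 merge key (`<<`) to share properties between two models (the provider, models, and secret name are placeholders):

```yaml
%YAML 1.1
---
name: Assistant with anchors
version: 0.0.1
schema: v1

models:
  - name: GPT-4o
    model: gpt-4o
    <<: &model_defaults              # anchor a reusable set of properties
      provider: openai
      apiKey: ${{ secrets.OPENAI_API_KEY }}
  - name: GPT-4o Mini
    model: gpt-4o-mini
    <<: *model_defaults              # merge the anchored properties here too
```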