`config.yaml` has been introduced to replace `config.json`. See the `config.yaml` reference and migration guide.

Below is a reference for the properties that can be set in `config.json`. The config schema code is found in `extensions/vscode/config_schema.json`.

All properties at all levels are optional unless explicitly marked required.
### models

- `title` (required): The title to assign to your model, shown in dropdowns, etc.
- `provider` (required): The provider of the model, which determines the type and interaction method. Options include `openai`, `ollama`, `xAI`, etc.; see IntelliJ suggestions.
- `model` (required): The name of the model, used for prompt template auto-detection. Use the special name `AUTODETECT` to get all available models.
- `apiKey`: API key required by providers like OpenAI, Anthropic, Cohere, and xAI.
- `apiBase`: The base URL of the LLM API.
- `contextLength`: Maximum context length of the model, typically in tokens (default: 2048).
- `maxStopWords`: Maximum number of stop words allowed, to avoid API errors with extensive lists.
- `template`: Chat template used to format messages. Auto-detected for most models, but can be overridden; see IntelliJ suggestions.
- `promptTemplates`: A mapping of prompt template names (e.g., `edit`) to template strings. See the Deep Dives section for customization details.
- `completionOptions`: Model-specific completion options, in the same format as top-level `completionOptions`, which they override.
- `systemMessage`: A system message that will precede responses from the LLM.
- `requestOptions`: Model-specific HTTP request options, in the same format as top-level `requestOptions`, which they override.
- `apiType`: Specifies the type of API (`openai` or `azure`).
- `apiVersion`: Azure API version (e.g., `2023-07-01-preview`).
- `engine`: Engine for Azure OpenAI requests.
- `capabilities`: Override auto-detected capabilities:
  - `uploadImage`: Boolean indicating whether the model supports image uploads.
  - `tools`: Boolean indicating whether the model supports tool use.
- `profile`: AWS security profile for authorization.
- `modelArn`: AWS ARN for imported models (e.g., for the `bedrockimport` provider).
- `region`: Region where the model is hosted (e.g., `us-east-1`, `eu-central-1`).
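For example, a single OpenAI chat model might be configured as follows; the title is arbitrary and the API key is a placeholder:

```json
{
  "models": [
    {
      "title": "GPT-4o",
      "provider": "openai",
      "model": "gpt-4o",
      "apiKey": "YOUR_OPENAI_API_KEY"
    }
  ]
}
```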
### tabAutocompleteModel

Specifies the model used for tab autocompletion, in the same format as an entry in `models`. Can be an array of models or an object for one model.
Example `config.json`:
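A sketch assuming a locally served Ollama model; the title and model name are illustrative:

```json
{
  "tabAutocompleteModel": {
    "title": "My Starcoder",
    "provider": "ollama",
    "model": "starcoder2:3b"
  }
}
```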
### tabAutocompleteOptions

- `disable`: If `true`, disables tab autocomplete (default: `false`).
- `maxPromptTokens`: Maximum number of tokens for the prompt (default: `1024`).
- `debounceDelay`: Delay (in ms) before triggering autocomplete (default: `350`).
- `maxSuffixPercentage`: Maximum percentage of the prompt used for the suffix (default: `0.2`).
- `prefixPercentage`: Percentage of the input used for the prefix (default: `0.3`).
- `template`: Template string for autocomplete, using Mustache templating. You can use the `{{{ prefix }}}`, `{{{ suffix }}}`, `{{{ filename }}}`, `{{{ reponame }}}`, and `{{{ language }}}` variables.
- `onlyMyCode`: If `true`, only includes code within the repository (default: `true`).
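For example, to trigger completions less eagerly and keep suggestions repo-local; the values are illustrative, not recommendations:

```json
{
  "tabAutocompleteOptions": {
    "debounceDelay": 500,
    "maxPromptTokens": 1024,
    "onlyMyCode": true
  }
}
```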
### embeddingsProvider

- `provider` (required): Specifies the embeddings provider; options include `transformers.js`, `ollama`, `openai`, `cohere`, `gemini`, etc.
- `model`: Model name for embeddings.
- `apiKey`: API key for the provider.
- `apiBase`: Base URL for API requests.
- `requestOptions`: Additional HTTP request settings specific to the embeddings provider.
- `maxEmbeddingChunkSize`: Maximum tokens per document chunk. Minimum is 128 tokens.
- `maxEmbeddingBatchSize`: Maximum number of chunks per request. Minimum is 1 chunk.
- `region`: Specifies the region hosting the model.
- `profile`: AWS security profile.
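A sketch using Ollama as the embeddings provider, assuming a default local install; the model name is illustrative:

```json
{
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "nomic-embed-text",
    "apiBase": "http://localhost:11434"
  }
}
```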
### completionOptions

Top-level `completionOptions` apply to all models, unless overridden at the model level.
Properties:

- `stream`: Whether to stream the LLM response. Currently only respected by the `anthropic` and `ollama` providers; other providers will always stream (default: `true`).
- `temperature`: Controls the randomness of the completion. Higher values result in more diverse outputs.
- `topP`: The cumulative probability for nucleus sampling. Lower values limit responses to tokens within the top probability mass.
- `topK`: The maximum number of tokens considered at each step. Limits the generated text to tokens within this probability.
- `presencePenalty`: Discourages the model from generating tokens that have already appeared in the output.
- `frequencyPenalty`: Penalizes tokens based on their frequency in the text, reducing repetition.
- `mirostat`: Enables Mirostat sampling, which controls the perplexity during text generation. Supported by the Ollama, LM Studio, and llama.cpp providers (default: `0`, where `0` = disabled, `1` = Mirostat, and `2` = Mirostat 2.0).
- `stop`: An array of stop tokens that, when encountered, will terminate the completion. Allows specifying multiple end conditions.
- `maxTokens`: The maximum number of tokens to generate in a completion (default: `2048`).
- `numThreads`: The number of threads used during the generation process. Available only for Ollama, as `num_thread`.
- `keepAlive`: For Ollama, sets the number of seconds to keep the model loaded after the last request, unloading it from memory if inactive (default: `1800` seconds, or 30 minutes).
- `numGpu`: For Ollama, overrides the number of GPU layers used to load the model into VRAM.
- `useMmap`: For Ollama, allows the model to be mapped into memory. Disabling it can improve response time on low-end devices, but will slow down the stream.
- `reasoning`: Enables thinking/reasoning for Anthropic Claude 3.7+ models.
- `reasoningBudgetTokens`: Sets budget tokens for thinking/reasoning in Anthropic Claude 3.7+ models.
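For example, a conservative sampling setup with an explicit stop sequence; the values are illustrative:

```json
{
  "completionOptions": {
    "temperature": 0.2,
    "topP": 0.9,
    "maxTokens": 1024,
    "stop": ["\n\n"]
  }
}
```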
### requestOptions

- `timeout`: Timeout for each request to the LLM (default: 7200 seconds).
- `verifySsl`: Whether to verify SSL certificates for requests.
- `caBundlePath`: Path to a custom CA bundle for HTTP requests, given as a path to a `.pem` file (or an array of paths).
- `proxy`: Proxy URL to use for HTTP requests.
- `headers`: Custom headers for HTTP requests.
- `extraBodyProperties`: Additional properties to merge with the HTTP request body.
- `noProxy`: List of hostnames that should bypass the specified proxy.
- `clientCertificate`: Client certificate for HTTP requests.
  - `cert`: Path to the client certificate file.
  - `key`: Path to the client certificate key file.
  - `passphrase`: Optional passphrase for the client certificate key file.
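A sketch that routes traffic through a local proxy and adds a custom header; the URL and header are placeholders:

```json
{
  "requestOptions": {
    "proxy": "http://localhost:8888",
    "headers": { "X-Custom-Header": "value" },
    "noProxy": ["localhost"]
  }
}
```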
### reranker

- `name` (required): Reranker name, e.g., `cohere`, `voyage`, `llm`, `huggingface-tei`, `bedrock`.
- `params`:
  - `model`: Model name.
  - `apiKey`: API key.
  - `region`: Region (for Bedrock only).
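For example, a Voyage reranker; the model name is illustrative and the key is a placeholder:

```json
{
  "reranker": {
    "name": "voyage",
    "params": {
      "model": "rerank-2",
      "apiKey": "YOUR_VOYAGE_API_KEY"
    }
  }
}
```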
### docs

- `title` (required): Title of the documentation site, displayed in dropdowns, etc.
- `startUrl` (required): Start page for crawling, usually the root or intro page for the docs.
- `maxDepth`: Maximum link depth for crawling (default: `4`).
- `favicon`: URL for the site favicon (default is `/favicon.ico` from `startUrl`).
- `useLocalCrawling`: Skip the default crawler and only crawl using a local crawler.
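For example, indexing a documentation site; the URL and depth are illustrative:

```json
{
  "docs": [
    {
      "title": "Continue",
      "startUrl": "https://docs.continue.dev/intro",
      "maxDepth": 3
    }
  ]
}
```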
### slashCommands

- `name`: The command name. Options include "issue", "share", "cmd", "http", "commit", and "review".
- `description`: Brief description of the command.
- `step`: (Deprecated) Used for built-in commands; set the name for pre-configured options.
- `params`: Additional parameters to configure command behavior (command-specific; see the code for the command).

Built-in slash commands must be added to `config.json` to make them visible:

- `/share`: Accepts an `outputDir` parameter to specify where you want the markdown file to be saved.
- `/cmd`
- `/commit`
- `/http`
- `/issue`
- `/onboard`
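For example, making `/commit` and `/share` visible; the descriptions and output path are illustrative:

```json
{
  "slashCommands": [
    { "name": "commit", "description": "Generate a commit message" },
    {
      "name": "share",
      "description": "Export this session as markdown",
      "params": { "outputDir": "~/.continue/session-transcripts" }
    }
  ]
}
```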
### customCommands

- `name`: The name of the custom command.
- `prompt`: Text prompt for the command.
- `description`: Brief description explaining the command's function.
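A sketch of a custom command for generating tests; the prompt text is illustrative:

```json
{
  "customCommands": [
    {
      "name": "test",
      "prompt": "{{{ input }}}\n\nWrite unit tests for the above code, covering edge cases.",
      "description": "Write unit tests for highlighted code"
    }
  ]
}
```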
### contextProviders

List of context providers to enable, each configured with a `name` and optional `params`.

Properties:

- `name`: Name of the context provider, e.g. `docs` or `web`.
- `params`: A context-provider-specific record of params to configure the context behavior.
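For example, enabling the `docs` and `web` providers; the `params` shown are illustrative:

```json
{
  "contextProviders": [
    { "name": "docs" },
    { "name": "web", "params": { "n": 5 } }
  ]
}
```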
### userToken

### systemMessage

A system message that will precede responses from the LLM.
### experimental

Properties:

- `defaultContext`: Defines the default context for the LLM. Uses the same format as `contextProviders` but includes an additional `query` property to specify custom query parameters.
- `modelRoles`:
  - `inlineEdit`: Model title for inline edits.
  - `applyCodeBlock`: Model title for applying code blocks.
  - `repoMapFileSelection`: Model title for repo map selections.
- `quickActions`: Array of custom quick actions:
  - `title` (required): Display title for the quick action.
  - `prompt` (required): Prompt for the quick action.
  - `sendToChat`: If `true`, sends the result to chat; otherwise inserts it in the document. Default is `false`.
- `contextMenuPrompts`:
  - `comment`: Prompt for commenting code.
  - `docstring`: Prompt for adding docstrings.
  - `fix`: Prompt for fixing code.
  - `optimize`: Prompt for optimizing code.
- `modelContextProtocolServers`: See Model Context Protocol.
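A sketch of an `experimental` block wiring a model role and a quick action; the model title must match an entry in `models`, and all values are illustrative:

```json
{
  "experimental": {
    "modelRoles": { "inlineEdit": "GPT-4o" },
    "quickActions": [
      {
        "title": "Explain",
        "prompt": "Explain the selected code in plain language.",
        "sendToChat": true
      }
    ]
  }
}
```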
### Deprecated settings

The following `config.json` settings are no longer stored in config and have been moved to be editable through the User Settings Page. If found in `config.json`, they will be auto-migrated to User Settings and removed from `config.json`.

- `allowAnonymousTelemetry`: This value will be migrated to the safest merged value (`false` if either is `false`).
- `promptPath`: This value will override during migration.
- `disableIndexing`: This value will be migrated to the safest merged value (`true` if either is `true`).
- `disableSessionTitles`/`ui.getChatTitles`: This value will be migrated to the safest merged value (`true` if either is `true`). `getChatTitles` takes precedence if set to `false`.
- `tabAutocompleteOptions`
  - `useCache`: This value will override during migration.
  - `disableInFiles`: This value will be migrated to the safest merged value (arrays of file matches merged/deduplicated).
  - `multilineCompletions`: This value will override during migration.
- `experimental`
  - `useChromiumForDocsCrawling`: This value will override during migration.
  - `readResponseTTS`: This value will override during migration.
- `ui`: all will override during migration.
  - `codeBlockToolbarPosition`
  - `fontSize`
  - `codeWrap`
  - `displayRawMarkdown`
  - `showChatScrollbar`