config.yml
Global configuration options for Genius
Default config.yml
# *****************************************
# *         Genius Configuration         *
# *****************************************
# For reference, see https://fletchly.github.io/genius-wiki/docs/setup/configuration/config-yml
# Properties for the agent's appearance and behavior
genius:
  # The name used when displaying messages from the agent in chat
  agentName: Genius
  # The prefix used when displaying messages from the agent in chat
  agentPrefix: 💡
  # The prefix used when displaying messages from players in chat
  playerPrefix: 👤
# Ollama client configuration
ollama:
  # The base URL for the Ollama API. If using a cloud model, this should be set to https://ollama.com/.
  # Defaults to the Ollama API's default listening location. For more info, see https://github.com/fletchly/genius/wiki/hosting
  baseUrl: http://localhost:11434/
  # Your key for the Ollama cloud API. This only needs to be changed if you are using an Ollama cloud model.
  # For more info, see https://github.com/fletchly/genius/wiki/hosting
  apiKey: local
  # The name of the model to use for response generation.
  # If you are using Ollama cloud, the model must be an Ollama cloud model.
  model: deepseek-v3.1:671b
  # Controls how random or deterministic Genius's output is.
  # Lower values make responses more predictable and precise,
  # while higher values make them more creative and varied.
  temperature: 0.5
  # Top-K sampling limits the model's next-token choices to the K most likely options, filtering out all others.
  # This reduces randomness while still allowing some creativity compared to always picking the single most probable token.
  topK: 40
  # Top-P controls how many likely tokens the model considers when generating text.
  # Lower values make output more focused and predictable, while higher values allow more randomness and creativity.
  topP: 0.85
  # The number of predictions (max output tokens) controls how much text the model is allowed to generate.
  # A lower limit keeps responses short and concise, while a higher limit allows longer, more detailed answers.
  numPredict: 400
# Context database configuration
context:
  # Maximum number of messages per player to store in the database at one time.
  # A higher limit will allow players to continue conversations for longer.
  maxPlayerMessages: 20
# Don't change this
configVersion: 1
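To use Ollama cloud rather than a local instance, the comments above indicate overriding the three ollama keys together. A minimal sketch; the apiKey value is a placeholder, and you should confirm your chosen model is one served by Ollama cloud:

ollama:
  # Cloud endpoint, per the baseUrl comment above
  baseUrl: https://ollama.com/
  # Placeholder only; substitute your real Ollama cloud API key
  apiKey: your-ollama-cloud-api-key
  # Must be an Ollama cloud model
  model: deepseek-v3.1:671b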
genius
Properties for the agent's appearance and behavior
genius.agentName
The name used when displaying messages from the agent in chat
- Type: string
- Default: Genius
genius.agentPrefix
The prefix used when displaying messages from the agent in chat
- Type: string
- Default: 💡
genius.playerPrefix
The prefix used when displaying messages from players in chat
- Type: string
- Default: 👤
ollama
Ollama client configuration
ollama.baseUrl
The base URL for the Ollama API. If using a cloud model, this should be set to https://ollama.com/. Defaults to the Ollama API's default listening location. For more info, see the hosting guide.
- Type: string
- Default: http://localhost:11434/
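Because baseUrl is simply the API's listening address, Genius can also point at an Ollama instance on another machine; see the hosting guide for details. A sketch under that assumption, with an illustrative LAN address:

ollama:
  # Hypothetical host running Ollama on its default port
  baseUrl: http://192.168.1.50:11434/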
ollama.apiKey
Your key for the Ollama cloud API. This only needs to be changed if you are using an Ollama cloud model. For more info, see the hosting guide.
- Type: string
- Default: local
ollama.model
The name of the model to use for response generation. If you are using Ollama cloud, the model must be an Ollama cloud model.
- Type: string
- Default: deepseek-v3.1:671b
ollama.temperature
Controls how random or deterministic Genius's output is. Lower values make responses more predictable and precise, while higher values make them more creative and varied.
- Type: number
- Default: 0.5
ollama.topK
Top-K sampling limits the model's next-token choices to the K most likely options, filtering out all others. This reduces randomness while still allowing some creativity compared to always picking the single most probable token.
- Type: integer
- Default: 40
ollama.topP
Top-P controls how many likely tokens the model considers when generating text. Lower values make output more focused and predictable, while higher values allow more randomness and creativity.
- Type: number
- Default: 0.85
ollama.numPredict
The number of predictions (max output tokens) controls how much text the model is allowed to generate. A lower limit keeps responses short and concise, while a higher limit allows longer, more detailed answers.
- Type: integer
- Default: 400
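The four sampling options work together: temperature and topP control how adventurous token selection is, topK caps the candidate pool, and numPredict caps response length. As an illustrative sketch (these values are suggestions, not shipped defaults), a server that wants short, predictable answers might tighten all four:

ollama:
  # Illustrative "precise" preset: concise, repeatable replies
  temperature: 0.2
  topK: 20
  topP: 0.7
  numPredict: 200
  # For a more creative, chatty agent, move the other way,
  # e.g. temperature: 0.9, topP: 0.95, numPredict: 600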
context
Conversation context configuration
context.maxPlayerMessages
Maximum number of messages to store per player. A higher limit allows players to continue conversations for longer, but may impact performance on servers with many players.
- Type: integer
- Default: 20
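If longer-running conversations matter more than overhead, the limit can be raised; the value below is illustrative, and the performance note above still applies:

context:
  # Illustrative: retain more history per player at the cost of a larger database
  maxPlayerMessages: 50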