config.yml
Global configuration options for Genius
Default config.yml
# *****************************************
# *         Genius Configuration          *
# *****************************************
# For reference, see https://github.com/fletchly/genius/wiki/Configuration

# Properties for the agent's appearance and behavior
genius:
  # The name for the Genius agent to use in chat when responding to messages
  agentName: Genius

# Ollama client configuration
ollama:
  # The base URL for the Ollama API. If using a cloud model, this should be set to https://ollama.com/.
  # Defaults to the Ollama API's default listening location. For more info, see https://github.com/fletchly/genius/wiki/hosting
  baseUrl: http://localhost:11434/

  # Your key for the Ollama cloud API. This only needs to be set if you are using an Ollama cloud model.
  # For more info, see https://github.com/fletchly/genius/wiki/hosting
  apiKey: API_KEY

  # The name of the model to use for response generation.
  # If you are using Ollama cloud, the model must be an Ollama cloud model.
  model: deepseek-v3.1:671b

  # Controls how random or deterministic Genius's output is.
  # Lower values make responses more predictable and precise,
  # while higher values make them more creative and varied.
  temperature: 0.5

  # Top-K sampling limits the model’s next-token choices to the K most likely options, filtering out all others.
  # This reduces randomness while still allowing some creativity compared to always picking the single most probable token.
  topK: 40

  # Top-P controls how many likely tokens the model considers when generating text.
  # Lower values make output more focused and predictable, while higher values allow more randomness and creativity.
  topP: 0.85

  # The number of predictions (max output tokens) controls how much text the model is allowed to generate.
  # A lower limit keeps responses short and concise, while a higher limit allows longer, more detailed answers.
  numPredict: 400

# Conversation context configuration
context:
  # Maximum number of messages to store per player.
  # A higher limit allows players to continue conversations for longer;
  # however, this could impact performance on servers with high player counts.
  maxPlayerMessages: 20
genius
Properties for the agent's appearance and behavior
genius.agentName
The name for the Genius agent to use in chat when responding to messages
- Type: string
- Default: Genius
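For example, renaming the agent only requires changing this one key; the name below is purely illustrative:

genius:
  agentName: Herbert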
ollama
Ollama client configuration
ollama.baseUrl
The base URL for the Ollama API. If using a cloud model, this should be set to https://ollama.com/. Defaults to the Ollama API's default listening location. For more info, see the hosting guide
- Type: string
- Default: http://localhost:11434/
ollama.apiKey
Your key for the Ollama cloud API. This only needs to be set if you are using an Ollama cloud model. For more info, see the hosting guide
- Type: string
- Default: (blank)
ollama.model
The name of the model to use for response generation. If you are using Ollama cloud, the model must be an Ollama cloud model
- Type: string
- Default: deepseek-v3.1:671b
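Taken together, baseUrl, apiKey and model are the keys that change when switching from a local instance to Ollama cloud. The sketch below only shows the general shape of such a setup: the API key is a placeholder you would replace with your own, and the model shown is simply the plugin's default, which you should swap for whichever cloud model you intend to use.

ollama:
  # Point the client at the Ollama cloud endpoint instead of a local install
  baseUrl: https://ollama.com/
  # Placeholder, replace with the key from your Ollama cloud account
  apiKey: YOUR_OLLAMA_CLOUD_KEY
  # Must be a model available on Ollama cloud
  model: deepseek-v3.1:671b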
ollama.temperature
Controls how random or deterministic Genius's output is. Lower values make responses more predictable and precise, while higher values make them more creative and varied.
- Type: number
- Default: 0.5
ollama.topK
Top-K sampling limits the model’s next-token choices to the K most likely options, filtering out all others. This reduces randomness while still allowing some creativity compared to always picking the single most probable token.
- Type: integer
- Default: 40
ollama.topP
Top-P controls how many likely tokens the model considers when generating text. Lower values make output more focused and predictable, while higher values allow more randomness and creativity.
- Type: number
- Default: 0.85
ollama.numPredict
The number of predictions (max output tokens) controls how much text the model is allowed to generate. A lower limit keeps responses short and concise, while a higher limit allows longer, more detailed answers.
- Type: integer
- Default: 400
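The three sampling keys (temperature, topK and topP) all trade predictability against variety, and numPredict caps response length, so the four are typically adjusted together. The values below are not recommendations from the Genius project, only an illustration of making responses more focused and concise than the defaults:

ollama:
  # Lower temperature plus tighter Top-K/Top-P sampling -> more predictable wording
  temperature: 0.2
  topK: 20
  topP: 0.7
  # A smaller token cap keeps chat replies short
  numPredict: 200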
context
Conversation context configuration
context.maxPlayerMessages
Maximum number of messages to store per player. A higher limit allows players to continue conversations for longer; however, it could impact performance on servers with high player counts.
- Type: integer
- Default: 20
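As a rough illustration, a busy server could lower this limit to reduce the cost of storing per-player history, at the price of shorter conversations; the value below is only an example:

context:
  # Fewer stored messages per player trades conversation length for lower overhead
  maxPlayerMessages: 10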