{% extends "base.html" %} {% block title %}Settings — df-storyteller{% endblock %} {% block content %}

Settings

Configure your storytelling experience

{# ==================== Game Path ==================== #}
Disable AI generation. The UI becomes a structured journal for player-written entries.
{# ==================== AI Provider ==================== #}

AI Provider

Leave blank for provider default.
Run ollama pull modelname to download a model from the list below.
Recommended by VRAM:
4-6 GB — gemma3:4b, llama3.2:3b, phi4-mini:3.8b
8 GB — gemma3:12b, llama3.1:8b, mistral-nemo:12b
12-16 GB — mistral-small:24b, gemma3:27b
24+ GB — qwen2.5:32b, gemma3:27b, command-r:35b
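For example, pulling one of the 4-6 GB tier models from the list above (a sketch; substitute any model name, and note that a pull downloads several gigabytes):

```shell
# Guarded so this is a no-op on machines without Ollama installed.
if command -v ollama >/dev/null 2>&1; then
  ollama pull gemma3:4b   # one of the 4-6 GB tier models above
  ollama list             # confirm the model now appears locally
else
  echo "ollama not installed; skipping"
fi
```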
{{ (config.llm.ollama.num_ctx / 1024) | int }}K
How much text the model can see at once. Higher = more context but more VRAM.
{# ==================== Generation Style ==================== #}

Generation Style

Precise Creative {{ '%.2f' % config.llm.temperature }}
Controls randomness. Lower = more predictable, higher = more creative. Default: 0.80
Focused Diverse {{ '%.2f' % config.llm.top_p }}
Controls word choice diversity. 1.0 = consider all options (default). Lower = only most likely words.
None Strong {{ '%.2f' % config.llm.repetition_penalty }}
Discourages repeated phrases. 1.0 = off (default). Increase if stories feel repetitive.
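The sampling knobs above map to config keys; a hedged sketch of what the corresponding config section might look like (the TOML layout and exact key names are assumptions inferred from the template's `config.llm.*` variables, with the defaults stated above):

```toml
[llm]
temperature = 0.80          # randomness; lower = more predictable
top_p = 1.0                 # word-choice diversity; 1.0 considers all options
repetition_penalty = 1.0    # 1.0 = off; raise if stories feel repetitive

[llm.ollama]
num_ctx = 8192              # context window in tokens (shown as 8K in the UI)
```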
Added to every story prompt. Guide the AI's tone, focus, or style.
{# ==================== Output Lengths ==================== #}

Output Lengths

Control how long each type of generated text should be. ~1000 tokens ≈ ~750 words.
{{ config.story.chronicle_max_tokens }}
{{ config.story.biography_max_tokens }}
{{ config.story.saga_max_tokens }}
{{ config.story.gazette_max_tokens }}
{{ config.story.chat_summary_max_tokens }}
{{ config.story.quest_generation_max_tokens }}
{{ config.story.quest_narrative_max_tokens }}
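A common rule of thumb is roughly three-quarters of a word per token (it varies by tokenizer and language). A small sketch of how a token budget like the ones above translates to an approximate word count; the 1000-token figure is illustrative, not a value from the config:

```python
# Assumption: ~0.75 English words per token (rule of thumb only).
def tokens_to_words(max_tokens: int) -> int:
    """Approximate word count for a given token budget."""
    return round(max_tokens * 0.75)

# e.g. a 1000-token budget allows roughly 750 words
print(tokens_to_words(1000))  # -> 750
```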
{# end llm-settings #}
{% if saved %}

Settings saved successfully.

{% endif %} {% if legends_file %}

Legends Data

{{ legends_file }}

{% endif %}
{% endblock %}