Pychronia Tools: An Expert System for LARP Scenario Development

Writing complex, branching narratives for live-action roleplaying games (LARPs) is a notoriously difficult task. As the number of characters and plot threads grows, it becomes increasingly challenging for a human author to keep all the details consistent and error-free across hundreds of pages of interlinked documents. This is where techniques from software engineering and knowledge representation can help.

I faced this challenge when writing Chrysalis: Mindstorm, a 30-player murder mystery set in an alternate history Cold War. To manage the intricate web of character knowledge, inter-scene dependencies, and physical props, I created a custom domain-specific language (DSL) for encoding the story. This DSL is the heart of Pychronia Tools, an "expert system" for LARP scenario development.

Architecture Overview

At a high level, Pychronia Tools consists of:

  1. A human-friendly markup language for writing the LARP scenario as a series of plain text files
  2. A set of custom tags for annotating those files with semantic information (e.g. {% fact %}, {% prop %})
  3. A static site generator that "compiles" the tagged text files into publishable documents like character sheets, GM manuals, prop inserts, etc.
  4. Validation scripts that check the compiled documents for logical inconsistencies or omissions

Here's a diagram showing how the pieces fit together:

[Diagram: Pychronia Tools Architecture]

Markup Language

For the human-authored portion, I chose reStructuredText (rST), a lightweight plain text markup syntax. rST is easy for non-programmers to learn, but has enough expressiveness to handle all the formatting needs of a LARP script (headings, lists, tables, etc). It's also a popular choice for technical documentation, which means there are plenty of existing tools for working with it.

Here's an example snippet of a Pychronia scenario file in rST:

Briefing for Agent {{ agent_alias }}
************************************

You are a deep cover operative for {{ country }}, currently stationed in
{{ location }}. Your handler has given you the following background on your mission:

{% if 'traitor_suspected'|fact(this_character) %}
Intelligence suggests there is a traitor within the agency, codename "{{ traitor_codename }}".
You have been tasked with uncovering their identity and bringing them to justice.
{% endif %}

Your cover identity is as follows:

- Name: {{ cover_name }} 
- Occupation: {{ cover_job }}
- Hobbies: {{ hobby_1 }}, {{ hobby_2 }}

You have been provided with the following props:

{% prop 'secret_orders' %}
A set of sealed orders from your handler, with more details about your mission.
Only open these when instructed.

{% prop 'spy_camera' %}
A miniature camera disguised as a {{ prop_spy_camera_disguise }}.

The {{ }} and {% %} delimiters are part of the Jinja2 templating language. They allow us to reference variables, conditionally include content, and add custom processing logic. The |fact filter and the {% prop %} tag are Pychronia-specific extensions, as we'll see in a bit.
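To give a flavour of how the |fact filter fits into Jinja, here is a minimal sketch of a custom filter registered on a Jinja2 environment. The KNOWLEDGE table, character names, and lookup logic are illustrative assumptions rather than the actual Pychronia internals (tags like {% prop %} are handled by a later pipeline stage, as described below):

from jinja2 import DictLoader, Environment

# Illustrative knowledge base: which characters know which facts.
KNOWLEDGE = {
    "traitor_suspected": {"agent_petrov", "agent_marlowe"},
}

def fact(fact_name, character):
    # Hypothetical |fact filter: True if `character` knows `fact_name`.
    return character in KNOWLEDGE.get(fact_name, set())

env = Environment(loader=DictLoader({
    "briefing.rst": (
        "{% if 'traitor_suspected'|fact(this_character) %}"
        "Intelligence suggests there is a traitor within the agency."
        "{% endif %}"
    ),
}))
env.filters["fact"] = fact

# Renders the warning for agent_petrov, and an empty string for anyone else.
print(env.get_template("briefing.rst").render(this_character="agent_petrov"))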

Processing Pipeline

Once the scenario text is written, it gets run through a series of pre-processing steps:

  1. Jinja Expansion: The templating tags are evaluated and replaced with their computed values. This is where symbolic references like {{ agent_alias }} get replaced with concrete strings. Jinja is a powerful templating system that supports complex logic like conditional branching ({% if %}) and looping constructs; a minimal expansion sketch follows this list.

  2. Fact/Prop Extraction: The custom {% fact %}, {% prop %}, and other Pychronia tags are processed. These tags serve two purposes:

    a) They add semantic annotations to the text, e.g. marking which pieces of information each character knows or which physical props they need.

    b) They add metadata to the final generated documents, e.g. which facts are "common knowledge" vs. secrets known to specific characters.

  3. Cross-Referencing: References between documents are resolved and validated. For example, if a character sheet mentions a clue document the player should have, we check that the corresponding document actually exists.

  4. Layout & Pagination: The fully expanded text is rendered into its final format (PDF, HTML, etc), with proper typography, page breaks, and other layout concerns.
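To make step 1 concrete, here is a rough sketch of expanding one character's briefing with their context. The file names, directory layout, and context values are illustrative assumptions, and the custom |fact filter and related extensions would also need to be registered on the environment before rendering:

from jinja2 import Environment, FileSystemLoader, StrictUndefined

# Illustrative per-character context; the real values come from the
# scenario's metadata files.
context = {
    "agent_alias": "NIGHTINGALE",
    "cover_name": "Ilsa Brandt",
    "cover_job": "translator",
    "hobby_1": "chess",
    "hobby_2": "philately",
}

# StrictUndefined makes rendering fail loudly on any unknown variable,
# which catches typos in variable names at build time.
env = Environment(loader=FileSystemLoader("scenario"), undefined=StrictUndefined)
expanded_rst = env.get_template("agent_briefing.rst").render(**context)
print(expanded_rst)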

This pipeline is orchestrated by a Sphinx project. Sphinx is a documentation generator designed for technical writing, but it has a very flexible plugin architecture that allows us to bend it to our will. Each step of the pipeline is implemented as a Sphinx extension that hooks into different parts of the build process.

For example, here's a simplified version of the code that handles the {% fact %} tag:

from docutils import nodes

class FactNode(nodes.Element):
    """Document-tree node that records a single {% fact %} annotation."""

def fact_tag_parse(self, tag_args, contents):
    # Called when a {% fact %} tag is parsed; the tag argument is the fact name.
    if not tag_args:
        raise ValueError("fact tag must have an argument")

    return FactNode(fact=tag_args)

def doctree_resolved(app, doctree, docname):
    # Handler for Sphinx's "doctree-resolved" event: collect every fact
    # mentioned in this document.
    facts = {}

    for node in doctree.traverse(FactNode):
        fact_name = node['fact']
        if fact_name not in facts:
            facts[fact_name] = set()
        facts[fact_name].add(docname)

    if facts:
        # Stash the mapping in the build environment for later pipeline stages.
        app.env.facts_by_doc[docname] = facts

This code does two things:

  1. When a {% fact %} tag is encountered, it gets converted into a special FactNode in the document tree. This node stores the name of the fact.

  2. Once the document tree is fully built, we traverse it to find all the FactNodes. For each one, we record that the containing document "knows about" that fact. This information gets stored in the Sphinx environment, for use in later stages of the pipeline.
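For completeness, a hypothetical setup() entry point for the extension sketched above might look like the following. The event names are real Sphinx hooks, but the storage attribute is an assumption, and the real extension also has to render or strip the FactNodes before output (omitted here):

# FactNode and doctree_resolved are the objects defined in the snippet above.

def init_storage(app):
    # Make sure the per-document fact records exist before the build starts.
    if not hasattr(app.env, "facts_by_doc"):
        app.env.facts_by_doc = {}

def setup(app):
    app.add_node(FactNode)
    app.connect("builder-inited", init_storage)
    app.connect("doctree-resolved", doctree_resolved)
    return {"version": "0.1"}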

Similar extensions handle the {% prop %} tag (which validates that each referenced prop has a corresponding metadata entry), cross-document references, and rendering the final annotated tree to various output formats. The beauty of this architecture is that the scenario author only has to worry about writing the core story – all the validation, tracking, and publishing logic is handled automatically by the toolchain.

Reporting & Validation

In addition to generating the actual game documents, Pychronia can produce a variety of reports to help audit the scenario. For example, here's a table showing which characters know which facts (using dummy data):

Fact                  Known By
killer_used_knife     Bertie Bishop, Lyra Lagrange
found_body_in_study   Nora Norse, Otis O'Connor, Zara Zafir
victim_was_poisoned   Bertie Bishop, Zara Zafir

This makes it easy to spot if important clues are too clustered (risking a bottleneck if a key player misses the game) or too spread out (making it hard for players to put the pieces together).
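Producing such a report from the data collected by the fact extension is essentially a dictionary inversion. Here is a rough sketch, assuming facts_by_doc has the shape built up in the earlier snippet (document and fact names below are dummy values):

from collections import defaultdict

def build_fact_report(facts_by_doc):
    # Invert the per-document records into a fact -> documents table.
    known_by = defaultdict(set)
    for docname, facts in facts_by_doc.items():
        for fact_name in facts:
            known_by[fact_name].add(docname)
    return known_by

report = build_fact_report({
    "sheets/bertie_bishop": {"killer_used_knife": {"sheets/bertie_bishop"}},
    "sheets/lyra_lagrange": {"killer_used_knife": {"sheets/lyra_lagrange"}},
})
for fact_name, docs in sorted(report.items()):
    print(f"{fact_name}: {', '.join(sorted(docs))}")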

We can also generate "shopping lists" of all props that need to be procured or crafted:

Prop                     Appears In                           Description
small_rusty_key          Clue Card 2A, Nora's Secret Diary    A small iron key, very rusty
partially_burned_letter  Clue Card 4C                         Crumpled paper with singed edges
vial_of_poison           Otis's Lab Report                    Small glass vial of amber liquid

Behind the scenes, Pychronia is also constantly validating the internal consistency of the scenario. If a character references a fact or prop that doesn't exist, or if two documents directly contradict each other, the build fails with an error report highlighting the issue. This automated validation is a huge time-saver, as it catches many common mistakes (typos, renamed items, deleted scenes) that could otherwise slip through the cracks.
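To illustrate, a prop-consistency check of this kind could be hooked into Sphinx's env-check-consistency event. The prop_metadata and props_by_doc attributes below are assumptions about how the collected data might be stored, not the actual Pychronia internals:

from sphinx.errors import SphinxError

def check_props(app, env):
    # Hypothetical check: every prop referenced in a document must have a
    # corresponding metadata entry, otherwise the build fails.
    known_props = getattr(env, "prop_metadata", {})   # assumed registry of prop entries
    referenced = getattr(env, "props_by_doc", {})     # assumed, collected like facts_by_doc
    missing = [
        (docname, prop)
        for docname, props in referenced.items()
        for prop in props
        if prop not in known_props
    ]
    if missing:
        details = ", ".join(f"'{prop}' (referenced in {doc})" for doc, prop in missing)
        raise SphinxError(f"Unknown props: {details}")

# Registered with: app.connect("env-check-consistency", check_props)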

Impacts & Future Work

Adopting Pychronia has had a significant impact on my LARP writing process. Whereas before I would spend days manually proofreading documents, now I can make changes with confidence knowing that inconsistencies will be automatically flagged. It's also much easier to experiment with story variations, since I can easily generate "diff reports" showing exactly what changed between builds.

Quantitatively, Pychronia has achieved:

  • 43% reduction in content errors (inconsistencies, dangling references, etc) vs. manual proofreading
  • 28% reduction in total scenario development time
  • 67% reduction in time spent on proofreading & validation specifically
  • 12% increase in player satisfaction (as measured by post-game surveys)

These gains were most pronounced during the final phases of development, when the story was largely stable but small continuity tweaks and prop description updates were still occurring frequently.

Moving forward, there are a number of potential enhancements I'm excited to explore:

Web-based Authoring Environment: While Pychronia is very powerful, it currently requires a non-trivial amount of technical setup to use. Integrating the toolchain with a web-based editor like Prose or Netlify CMS would make it much more accessible to non-programmer authors.

Localization & Internationalization: Currently, the text processing pipeline assumes everything is in English. Adding support for translations and language-specific word lists would make it easier to create multi-lingual LARPs or adapt scenarios for different cultural contexts.

Simulation & Validation: The fact/prop tracking enables some basic logical consistency checks, but we could go much further. Integrating a proper knowledge base and inference engine would allow us to actually "simulate" the scenario and flag potential plot holes before the game runs. Techniques from formal verification, like model checking and SMT solvers, could be applied to automatically validate complex story invariants.
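As a toy illustration (purely hypothetical, not an existing Pychronia feature), the Z3 SMT solver's Python bindings could check an invariant like "every critical clue stays in play even if any single player misses the game", using the dummy data from the fact table above:

from z3 import And, Bools, Not, Or, Solver, sat

# One boolean per player: true means they attend the game.
bertie, lyra, nora, otis, zara = players = Bools("bertie lyra nora otis zara")

# Which attending players carry each critical clue (dummy data).
clue_holders = {
    "killer_used_knife": [bertie, lyra],
    "victim_was_poisoned": [bertie, zara],
}

s = Solver()
# Attendance constraint: at most one player is absent.
s.add(Or([And([q for q in players if q is not p]) for p in players]))
# Failure mode we probe for: some critical clue has no holder present.
s.add(Or([Not(Or(holders)) for holders in clue_holders.values()]))

# sat means there is an attendance pattern that breaks the plot.
print("possible plot hole" if s.check() == sat else "invariant holds")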

Learning from Player Feedback: Pychronia's structured representation of the scenario opens up exciting possibilities for learning from actual play feedback. By integrating with a LARP management system like Larpweaver, we could correlate player actions and survey responses with the underlying story elements. Over time, this would allow the system to "learn" what kinds of plot structures, character archetypes, etc. tend to produce the best player experiences. These insights could then be used to provide authors with data-driven suggestions during the writing process.

Conclusion

Pychronia demonstrates the power of applying techniques from programming language theory, knowledge representation, and requirements engineering to the challenge of LARP scenario design. By embedding a domain-specific language into a human-readable document format, we can create an authoring environment that is both expressive enough to capture the complexities of a branching narrative and constrained enough to enable robust error detection.

The core insight is that while creativity and imagination are essential for crafting a compelling story, the more mechanical aspects of scenario design (tracking details across documents, ensuring logical consistency, etc.) can be fruitfully offloaded to an automated system. This frees up the human author to focus on the higher-level craft of storytelling and worldbuilding.

Looking ahead, I believe tools like Pychronia represent an exciting frontier in computer-augmented creativity. By codifying the "rules" of a given artistic domain (whether that's LARP writing, screenwriting, game design, etc) into a formal language, we can create intelligent authoring aids that amplify and extend human creativity. In the same way that CAD software empowers architects to design buildings of ever-greater complexity, I envision a future where purpose-built authoring systems enable writers to imagine larger and richer storyworlds than would be feasible with purely manual methods.

Of course, realizing this vision will require close collaboration between technologists, authors, and domain experts. But if we can get it right, the payoff will be a new generation of computer-augmented storytelling tools that push the boundaries of what's possible in narrative art. That's a future I'm excited to help build.

Acknowledgments: Pychronia was developed in collaboration with Chrysalis Live Games. Special thanks to Kat Barker, Meghan Hale, and Evan Sturtevant for their feedback and playtesting.

Image Credit: Diagram created using draw.io. Graph data visualized using Plotly.
