RPG Playbook is an AI-powered tool designed to enhance tabletop RPG games. Using a combination of LLMs and diffusion models, in conjunction with traditional software, the app allows users to create bespoke game assets in real time.
Running tabletop RPG games is an art that demands creativity, quick thinking, and meticulous planning. Game masters juggle campaign design, non-player character management, mechanical upkeep, and real-time narrative adjustments—a challenging feat even for seasoned storytellers.
Enter RPG Playbook: your AI-powered assistant designed to elevate your game mastering experience. We harness cutting-edge AI technology to generate unique characters, items, creatures, and locations in seconds, dramatically reducing your creative workload.
With just a few clicks, game masters can conjure new and exciting assets on the fly, seamlessly adapting to players' choices and enriching the narrative in real time. Our flexible AI can create a vast array of original content suitable for virtually any RPG setting or genre.
RPG Playbook doesn't stop at individual assets. Organize your creations into collections and weave them into intricate, captivating campaigns. Whether you're a novice GM or a veteran storyteller, RPG Playbook empowers you to craft unforgettable adventures with ease and confidence.
When we embarked on developing RPG Playbook, we initially created a proof of concept using OpenAI's API, leveraging GPT-3.5 Turbo for text generation and DALL-E for image creation. However, as we progressed, we encountered several challenges that prompted us to reconsider our approach.
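The proof of concept was structured roughly like the sketch below: one call for text, one call for an image, stitched together around OpenAI's Python SDK. This is an illustrative reconstruction, not the actual prototype code; the prompts and the choice of DALL-E version are assumptions.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Text generation: ask GPT-3.5 Turbo for a short game asset description.
chat = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a tabletop RPG content generator."},
        {"role": "user", "content": "Create a one-paragraph NPC: a gruff dwarven blacksmith."},
    ],
)
npc_description = chat.choices[0].message.content

# Image generation: render a matching portrait with DALL-E.
# (The specific DALL-E version used in the prototype is not stated; "dall-e-3" is assumed here.)
image = client.images.generate(
    model="dall-e-3",
    prompt=f"Fantasy portrait, oil painting style: {npc_description}",
    size="1024x1024",
    n=1,
)
portrait_url = image.data[0].url

print(npc_description)
print(portrait_url)
```

Every asset in this design costs two paid API round trips, which is part of why the approach did not scale for us.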
The costs associated with OpenAI's services proved prohibitively high for our project's scale. We also faced reliability issues that risked degrading the user experience. Perhaps most critically, we realized that relying solely on OpenAI's ecosystem left us vulnerable to changes in their policies or services.
In response to these challenges, we pivoted our technology stack. For image generation, we transitioned to Stable Diffusion, implementing a custom-trained SDXL model tailored to our specific needs. On the text generation front, we opted for a locally running version of Meta's Llama 3 Instruct model.
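A minimal sketch of the self-hosted pipeline is shown below, using Hugging Face's transformers and diffusers libraries. The model size, the fine-tuned SDXL checkpoint path, and the prompts are placeholders standing in for our internal setup, not the production configuration.

```python
import torch
from diffusers import StableDiffusionXLPipeline
from transformers import pipeline

# Text generation with a locally hosted Llama 3 Instruct model.
# "meta-llama/Meta-Llama-3-8B-Instruct" is the public 8B checkpoint;
# the exact size used by RPG Playbook is not specified.
llm = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You generate tabletop RPG assets as short, vivid prose."},
    {"role": "user", "content": "Describe a cursed lighthouse the party can explore."},
]
# The chat-style pipeline returns the full conversation; the last message is the reply.
location_text = llm(messages, max_new_tokens=250)[0]["generated_text"][-1]["content"]

# Image generation with SDXL. The path below stands in for our custom-trained
# checkpoint; the public base model would be
# "stabilityai/stable-diffusion-xl-base-1.0".
sdxl = StableDiffusionXLPipeline.from_pretrained(
    "path/to/custom-sdxl-checkpoint",  # hypothetical fine-tuned model
    torch_dtype=torch.float16,
).to("cuda")

art = sdxl(
    prompt=f"Tabletop RPG location art, painterly style: {location_text[:300]}",
    num_inference_steps=30,
).images[0]

art.save("cursed_lighthouse.png")
print(location_text)
```

Because both models run on hardware we control, per-asset cost drops to compute we already pay for, and generation keeps working even if an upstream API changes.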
This shift has not only addressed our initial concerns but has also given us greater control over our app's core functionalities. By bringing key components in-house, we've enhanced our ability to fine-tune performance, ensure reliability, and maintain independence from third-party services.