Sirisian t1_jclhe7i wrote

The interesting thing about these tests is that they aren't using a fine-tuned model. With GPT-4's multimodal capabilities, one could fine-tune a system on the DnD manuals, text and images alike, to give it a deeper understanding and a firmer set of constraints. One could imagine encoding a lot of the rules into such a system.

The article also mentions context window issues where the AI forgets things. You can ask it to summarize important events every once in a while so that it remembers them (essentially this brings the information back into the context, reinforcing it). The standard GPT-4 context is 8K tokens, but the API offers a 32K-token version. If someone were building a DnD dungeon master with the API, they'd probably perform this summarization automatically with a tailored prompt.
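A minimal sketch of what that automatic summarization might look like. This is hypothetical, not from the article: `call_model` is a stand-in for a real GPT-4 API call, and the 4-characters-per-token estimate is a rough heuristic.

```python
def call_model(prompt: str) -> str:
    # Placeholder for an actual GPT-4 chat completion call.
    return "Summary: " + prompt[:40] + "..."

TOKEN_LIMIT = 8000  # e.g. the 8K-token GPT-4 context window


def rough_token_count(messages) -> int:
    # Crude estimate: roughly 4 characters per token.
    return sum(len(m["content"]) for m in messages) // 4


def maybe_summarize(messages, keep_recent=4):
    """When the history nears the context limit, collapse the older
    messages into a single summary message and keep only the most
    recent turns verbatim."""
    if rough_token_count(messages) < TOKEN_LIMIT * 0.75:
        return messages  # plenty of room left, nothing to do
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    transcript = "\n".join(m["content"] for m in old)
    summary = call_model(
        "Summarize the important campaign events so far:\n" + transcript
    )
    # The summary replaces the old turns, freeing context space.
    return [{"role": "system", "content": summary}] + recent
```

The dungeon master bot would run `maybe_summarize` on the message history before each API call, so long campaigns keep their important events without blowing past the context limit.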