How to Turn AI into your research assistant, writing companion, plot summariser, dialogue mapper, literary analyst and editor, all in one
Most of us started using AI like we’d use a search engine. We asked topic questions:
- ‘Who was the President of France in 1939?’
- ‘Which are the most violent districts in Los Angeles?’
- ‘How much does a teacher earn?’
Over the last year or two, our queries have moved more towards ‘task execution’:
- ‘Research why Madagascar is such a poor country and the extent to which colonial rule is to blame.’
- ‘Write a short, plain-English explanation of why running is good for mental health.’
- ‘Explain all the ways soldiers recognise the ranks of different officers in the US Army.’
Experts tell us that getting the most from modern AIs requires ‘context engineering’: the deliberate practice of assembling all the information an AI needs to give us back what we want. This can include the prompt itself, background information uploaded in files, sources to be looked up for further information, and specific instructions on process, guardrails, rigour and style.
Whilst that might sound like a lot, modern AIs claim an almost limitless capacity for consuming and processing information. The truth is that they partly do, but with important caveats. Let’s tease this apart.
The information capacity of an AI is measured by the number of ‘tokens’ it can hold in its ‘context window’. Tokens are how AIs encode language – the word unhappiness, for example, might be encoded as three tokens:
- ‘un’ the opposite or converse,
- ‘happy / happi’ the feeling of pleasure or contentment, and
- ‘ness’ the state of being.

(The exact split depends on the model’s tokenizer; different models segment words differently.)
When GPT-3, the first widely popular AI model, was introduced in 2020, it had a context window of 2,048 tokens (on average, 100 tokens encode about 75 words, so GPT-3’s capacity was roughly 1,500 words). At the time of writing (December 2025), context windows are around 200,000 tokens for Anthropic’s Claude, 400,000 tokens for OpenAI’s GPT and 1,000,000+ tokens for Google’s Gemini.
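The tokens-to-words arithmetic above can be sketched as a rule of thumb. This is a rough estimate for English prose, not a real tokenizer; the helper names are my own:

```python
# Rule of thumb: on average, 100 tokens encode about 75 words of English prose.
# A real tokenizer would give exact counts; this is only an estimate.

TOKENS_PER_WORD = 100 / 75  # roughly 1.33 tokens per word

def estimated_tokens(word_count: int) -> int:
    """Approximate token cost of a passage with the given word count."""
    return round(word_count * TOKENS_PER_WORD)

def estimated_words(token_budget: int) -> int:
    """Approximate word capacity of a given context window."""
    return round(token_budget * 75 / 100)

# GPT-3's 2,048-token window held roughly 1,500 words:
print(estimated_words(2048))     # ~1536

# A 90,000-word novel would need roughly 120,000 tokens:
print(estimated_tokens(90000))   # 120000
```

On these numbers, a full novel would strain even today’s largest context windows once instructions and conversation history are added, which is why the chunking advice later in this article matters.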
Whilst they can hold this much information, they can’t necessarily make full use of it. AIs suffer from a ‘lost in the middle’ problem: they attend reliably to the first few thousand tokens and the last few thousand, but material in between is more likely to be overlooked or misunderstood (human memory works similarly – see the serial position effect).
The solution that I’ve been working with for the past two years is the super-prompt.
What is a Super-Prompt?
A super-prompt is a comprehensive set of instructions—often thousands of words long—that configures an AI to act as a specialist for a specific task.
Think of the difference between asking a builder, “Can you build me a house?” versus handing them a complete set of architectural blueprints.
- A simple prompt asks for a result.
- A super-prompt provides the rules, constraints, theory, and step-by-step process to generate that result.
When you use a super-prompt, you aren’t just asking the AI to “be creative.” You are embedding rich context directly into its working memory, effectively turning a generalist AI into a specific expert—like a researcher or a copy editor—for the duration of your conversation.
Why Writers Should Use Them
1. You Control the “Rulebook”
Commercial AIs run on hidden “system prompts” that define their default behavior (usually to be helpful, polite, and decline unacceptable requests). A super-prompt layers your own specific instructions on top of that.
For example, my Narrative Structure Super-Prompt doesn’t just ask for feedback; it forces the AI to analyse your work using specific frameworks like Shawn Coyne’s Story Grid and John Yorke’s Into the Woods. It ensures the AI evaluates your scene based on established craft—like “inciting incidents” and “value shifts”—rather than random opinion.
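For readers who use these models through an API rather than a chat app, the layering described above can be sketched as a message payload. This only builds the request structure (mirroring the common chat-API shape of role/content messages) and sends nothing; all names and text here are illustrative:

```python
# Sketch: layering a super-prompt on top of a chat request.
# The provider's own hidden system prompt sits above everything here, unseen.
# This function only constructs the payload -- it makes no API call.

def build_messages(super_prompt: str, manuscript_excerpt: str) -> list[dict]:
    """Return a chat payload with the super-prompt as the top visible layer."""
    return [
        {"role": "system", "content": super_prompt},      # your rulebook
        {"role": "user", "content": manuscript_excerpt},  # the text to analyse
    ]

messages = build_messages(
    super_prompt=(
        "You are a narrative-structure analyst. Evaluate each scene for "
        "an inciting incident and a value shift before giving feedback."
    ),
    manuscript_excerpt="Chapter 1: The rain had not stopped for three days...",
)
```

The design point is simply that your instructions occupy a privileged position in the conversation, so every subsequent exchange is interpreted through them.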
2. Transparency (The “Glass Box” Approach)
Many writers use “Custom GPTs” from an app store. The problem is that these are “black boxes”—you can’t see the rules they are following.
A super-prompt is completely transparent. It is a human-readable document that you can open, read, audit, and edit. If the AI is being too harsh or missing the point, you can tweak the instructions immediately.
3. Consistency
By providing a structured process, super-prompts reduce the variability (or “credible slop”) often seen in AI responses. They ensure the AI follows the exact same analytical steps for Chapter 1 as it does for Chapter 50.
How to Use a Pre-Written Super-Prompt
I publish my super-prompts under a Creative Commons license, meaning you are free to use and adapt them. Here is the workflow for using a complex prompt, like my Narrative Structure Analysis tool:
- Upload: All modern AI chat apps (Claude, ChatGPT, Copilot and Gemini) let you upload entire documents, and this is what you need to do with a pre-written super-prompt.
- The Handshake: Always start by checking that the AI has been able to read the uploaded document. Ask ‘Are you able to read the super-prompt I just uploaded?’ A good super-prompt will then start working through its instructions and will ask you questions in return.
- The Interaction: Once started, the AI will guide you through the entire process or, if the process is lengthy, it may guide you to the end of phase 1 and then give instructions for moving to phase 2.
Pro-Tip: Don’t paste your entire novel at once. Super-prompts work best when focused on specific sections (like a single chapter or a 15,000-word sequence) to ensure you don’t suffer any lost-in-the-middle problems (as described above).
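The pro-tip above can be sketched in code: split a manuscript into chunks of at most 15,000 words, breaking on paragraph boundaries so no passage is cut mid-sentence. This is a minimal sketch with an invented helper name, not part of any super-prompt:

```python
# Sketch: split a manuscript into <=15,000-word chunks on paragraph
# boundaries, so each chunk stays well inside a model's reliable range.

def chunk_manuscript(text: str, max_words: int = 15000) -> list[str]:
    """Group paragraphs (blank-line separated) into word-limited chunks."""
    chunks: list[str] = []
    current: list[str] = []
    count = 0
    for paragraph in text.split("\n\n"):
        words = len(paragraph.split())
        # Start a new chunk if adding this paragraph would exceed the limit.
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(paragraph)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

You would then feed each chunk to the super-prompt in turn, keeping the analysis focused and the context window comfortably underfilled.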
You could write your own super-prompt, although it is a substantial undertaking. Try reading a few existing super-prompts in full to get the hang of the language, the process, and the inputs and outputs.
Next Steps
You can find my Narrative Structure Super-Prompt [link] ready to copy and paste. Use it to diagnose a troublesome scene, or simply to see how a super-prompt is constructed so you can build your own.
This content is based on the research behind my book “AI-Augmented Decisions” and my professional work on Context Engineering.