There are just too many prompt engineering strategies out there. Finding the ones that work best for you can be daunting, especially if you are not a professional prompt engineer or a hardcore ChatGPT user.
Luckily, OpenAI has provided its own Prompt Engineering Guide. This authoritative guide recommends six prompt engineering strategies that might change the way we write our prompts. For “the rest of us” who aren’t seasoned ChatGPT users, it serves as a great starting point to experiment with and improve our prompts.
Let’s take a look.
Strategy 1: Write Clear Instructions
Writing clear prompts comes at the top of the list because it’s the foundation of effective prompt engineering. When your prompts are clear and detailed, the AI can better understand your request without any guesswork involved. This will lead to accurate, relevant, and useful responses.
Under this strategy, there are six key tactics:
- Include details in your query to get more relevant answers.
- Ask the model to adopt a persona.
- Use delimiters to clearly indicate distinct parts of the input.
- Specify the steps required to complete a task.
- Provide examples.
- Specify the desired length of the output.
Because this strategy is so important, we’ve dedicated an entire article—Part 1 of this series—to fully describe each of these tactics:
How to Write Effective Prompts – Tips from OpenAI’s Prompt Engineering Guide (Part 1)
Be sure to head over and check it out!
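If you work with the OpenAI API rather than the chat window, the same tactics carry over directly. Below is a minimal sketch, assuming the official OpenAI Python SDK; the model name, persona, and article text are just placeholders. It combines a persona (system message), delimiters around the input, extra detail, and a length limit in a single request:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder text standing in for whatever you want summarized
article = "Cats need a protein-rich diet, fresh water, and regular grooming..."

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use any chat model you have access to
    messages=[
        # Tactic: ask the model to adopt a persona
        {"role": "system", "content": "You are a patient veterinary assistant who explains things simply."},
        # Tactics: include details, use delimiters, and specify the output length
        {"role": "user", "content": (
            "Summarize the article delimited by triple quotes for a first-time cat owner. "
            "Keep the summary under 80 words.\n"
            f'"""{article}"""'
        )},
    ],
)

print(response.choices[0].message.content)
```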

Strategy 2: Provide Reference Text
The second prompt engineering strategy that OpenAI recommends is to provide reference text when crafting your prompts. This means including specific information, excerpts, or documents that you want the AI to use when generating its response. By anchoring ChatGPT’s answer to the reference material, you help ensure that the output is accurate, relevant, and based on trusted sources.
This is particularly useful when accuracy is critical, or when the information must be based on specific data. In other words, when you’re expecting more than generic answers from ChatGPT.
For example, you might provide a 2-page guideline from a veterinarian on cat care and ask ChatGPT to summarize the key points. Your prompt could look like this:
Refer to the following guidelines on cat care and summarize the main recommendations for feeding and grooming.
Then you would copy and paste the guidelines into ChatGPT.
By grounding the response in the provided material, ChatGPT can deliver answers that are both informative and trustworthy.
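If you are doing this through the API instead of copy-pasting into the chat window, the grounding works the same way. Here's a minimal sketch, assuming the official OpenAI Python SDK; the guideline text and model name are placeholders, and the "say so instead of guessing" instruction is one simple way to keep the answer anchored to the reference:

```python
from openai import OpenAI

client = OpenAI()

# Placeholder standing in for the 2-page cat-care guideline mentioned above
guidelines = """Feed adult cats two measured meals per day.
Brush short-haired cats weekly and long-haired cats daily."""

prompt = (
    "Refer to the guidelines delimited by triple quotes and summarize the main "
    "recommendations for feeding and grooming. If something is not covered in the "
    "guidelines, say so instead of guessing.\n"
    f'"""{guidelines}"""'
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```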
Strategy 3: Split Complex Tasks into Simpler Subtasks
Have you ever received a set of overcomplicated assignments that left you feeling confused and frustrated? Similarly, complex instructions or tasks can overwhelm ChatGPT. They can lead to incomplete or incorrect responses.
Breaking down large tasks into simpler, manageable subtasks is one of the most important prompt engineering strategies. By chunking the task into smaller pieces, you help ChatGPT process each part effectively and deliver a more accurate overall outcome.
For instance, if you’re working on creating a project plan, instead of asking ChatGPT to generate a comprehensive plan in one go, you could break it down into steps:
First, outline the key objectives of the project.
After this is done, your next prompt would be:
Next, list the main tasks required to achieve these objectives.
This is followed by:
Suggest a timeline for each task.
And so on.
By guiding ChatGPT through each stage, you can make sure that it covers all necessary elements in a thorough and organized manner. This is a great strategy to prevent overwhelming the AI—and yourself.
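If you are scripting this flow through the API, one straightforward pattern is to send each subtask as its own turn and carry the running conversation forward so later steps can build on earlier answers. A rough sketch, assuming the official OpenAI Python SDK; the subtasks and model name are illustrative:

```python
from openai import OpenAI

client = OpenAI()

# The complex task, broken into the simpler subtasks described above
subtasks = [
    "First, outline the key objectives of the project.",
    "Next, list the main tasks required to achieve these objectives.",
    "Suggest a timeline for each task.",
]

messages = [{"role": "system", "content": "You are a helpful project-planning assistant."}]

for step in subtasks:
    messages.append({"role": "user", "content": step})
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=messages,
    )
    answer = response.choices[0].message.content
    # Keep the answer in the history so the next subtask can build on it
    messages.append({"role": "assistant", "content": answer})
    print(f"--- {step}\n{answer}\n")
```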
Strategy 4: Give Models Time to “Think”
Sometimes, the best responses come when we compel ChatGPT to reason through a problem, from the very first step through the final step, rather than rushing to a conclusion.

This is what we call the “Chain of Thought” strategy in the prompt engineering world. It involves explicitly instructing the AI to think step-by-step and asking it to present the steps it takes to arrive at the final answer. This can greatly improve the quality of the response, especially in scenarios that require logic, troubleshooting, or detailed analysis.
For example, if you provide two symptoms that your cat is showing and want ChatGPT to offer an opinion on what illness your cat might have, you can guide the model to think systematically. Your prompt might look like this:
My cat has been sneezing frequently and has watery eyes. What potential illnesses could be causing both symptoms? Please analyze this step-by-step.
Notice that we added “step-by-step” in the prompt. This is a magic phrase that triggers Chain of Thought prompting: it encourages ChatGPT to walk you through the process of connecting the dots before offering its final answer.
By the way, be cautious about consulting any AI on medical conditions. There is still a long way to go before we can fully trust AI on such matters.
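For completeness, here is what the same chain-of-thought prompt might look like as an API call. A minimal, purely illustrative sketch (official OpenAI Python SDK, placeholder model name); as noted above, don't treat the output as veterinary advice:

```python
from openai import OpenAI

client = OpenAI()

prompt = (
    "My cat has been sneezing frequently and has watery eyes. "
    "What potential illnesses could be causing both symptoms? "
    "Please analyze this step-by-step: list the possibilities you consider, "
    "explain your reasoning for each, and only then give your overall assessment."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```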
Strategy 5: Use External Tools
Did you know that ChatGPT isn’t as all-knowing as it seems? It is constrained by its training data and built-in capabilities. To name a few of its limitations: it cannot access real-time information, perform complex calculations, or provide specialized insights beyond what it has been trained on.
OpenAI addressed these limitations by allowing the use of external tools with ChatGPT. These tools extend ChatGPT’s capabilities by enabling it to access information and perform actions beyond its built-in knowledge.
Custom GPTs are a good example of external tools in action. To illustrate, let’s take a look at a few of them and see how they expand ChatGPT’s capabilities:
- Zapier – A powerful tool that allows users to create automated workflows by connecting ChatGPT with over 5,000 applications, including Google Sheets, Gmail, and Slack.
- VoxScript – This GPT extracts transcripts from YouTube videos (when available) and summarizes them to highlight key information.
- Speak – A GPT designed to enhance language learning and translation capabilities by providing translations, explanations, and real-world examples.
Did you notice that the functions provided by these custom GPTs are otherwise not available in ChatGPT? That’s the power of integrating external tools.
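Custom GPTs live inside the ChatGPT interface, but if you build with the API, the closest counterpart is tool (function) calling: the model asks your code to run a function and then uses the result in its answer. A minimal sketch, assuming the official OpenAI Python SDK; get_weather is a hypothetical function you would implement yourself:

```python
from openai import OpenAI

client = OpenAI()

# Describe the hypothetical external tool so the model knows when and how to call it
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[{"role": "user", "content": "What's the weather in Paris right now?"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print("Model wants to call:", call.function.name, call.function.arguments)
    # Here you would run your real get_weather implementation, append the result
    # as a "tool" message, and call the API again so the model can finish its answer.
else:
    print(message.content)
```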

Strategy 6: Test Changes Systematically
Often, we don’t get the results we expect from ChatGPT within just a couple of interactions. That’s when we need to modify our prompts.
For more serious users, modifying prompts is just part of the process. The other part involves observing and testing these changes to see if they lead to better or worse performance. From there, they can determine which prompt engineering tactics, prompt frameworks, or examples work best.
Here’s how this strategy works in practice: suppose you’re tweaking how you phrase your prompts. You might test several versions and compare the outputs. Your first prompt could be:
Explain the benefits of solar energy in 100 words.
And for comparison, you would rephrase your prompt as:
Summarize the key advantages of solar power briefly.
By analyzing the differences in the AI’s responses, you can identify which phrasing elicits the most accurate or useful answers.
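If you test through the API, a small script makes these comparisons repeatable. A rough sketch, assuming the official OpenAI Python SDK; the prompts and model name are illustrative, and the temperature is turned down so any differences come from the wording rather than random sampling:

```python
from openai import OpenAI

client = OpenAI()

# Two phrasings of the same request; add more variants or test cases as needed
variants = {
    "v1_explicit_length": "Explain the benefits of solar energy in 100 words.",
    "v2_brief_summary": "Summarize the key advantages of solar power briefly.",
}

for name, prompt in variants.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce randomness so differences come from the prompt
    )
    answer = response.choices[0].message.content
    print(f"=== {name} ({len(answer.split())} words)")
    print(answer, "\n")
```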
Conclusion
Similar to our observations from Part 1 of this series, we found that the prompt engineering strategies recommended in OpenAI’s guide are largely similar to most of the advice available today—with some unique additions, such as the use of external tools.
Compare the above strategies with what you’ll find in our past article, “Prompt Engineering Best Practices According to Anthropic’s Prompt Guidelines”, and you’ll likely agree.
These strategies work because they are backed by countless experiments and proven results from prompt engineers around the world. This is why we recommend them as “must-know” strategies for you.
Definitely try them in your next ChatGPT session and see the difference for yourself. Let us know your experiences in the comments!
This is Part 2 of the “Tips from OpenAI’s Prompt Engineering Guide” series. Find Part 1 here: How to Write Effective Prompts – Tips from OpenAI’s Prompt Engineering Guide (Part 1)