Garbage in, garbage out.

The output you’ll receive from OpenAI’s ChatGPT – and other generative AI tools – is only as good as your prompt engineering skills. 

ChatGPT has certain limitations, such as sensitivity to input phrasing, a tendency to guess or make up information, and the need for explicit instruction. Developing your own best practices for ChatGPT will help you achieve better and more reliable results by working around these limitations and maximizing the model’s capabilities. 

Some of these best practices come directly from OpenAI’s help resources, while others we’ve developed using ChatGPT since it launched in November 2022. Use these tips to get the best possible output from OpenAI tools by fine-tuning your prompts, structuring conversations, and guiding the model effectively toward your desired outcomes.

Disclosure: This post may contain affiliate links, meaning we get a commission if you decide to make a purchase through links on this site, at no cost to you.

What is OpenAI and how does generative AI work?

OpenAI is an organization that develops artificial intelligence models, including the GPT (Generative Pre-trained Transformer) series, such as GPT-3.5, the model that powers ChatGPT. OpenAI trains these models on vast amounts of data to learn patterns, language structures, and context.

The training process involves two main steps: pre-training and fine-tuning. During pre-training, the model is exposed to a large dataset containing parts of the Internet, allowing it to learn grammar, facts, and some reasoning abilities. This stage helps the model develop a broad understanding of human language.

After pre-training, the model moves to the fine-tuning stage. Here, it is trained on more specific and carefully generated datasets designed to make the model more useful and safe. Fine-tuning involves training the model on custom datasets created by OpenAI, which include demonstrations of correct behavior and comparisons to rank different responses. The aim is to align the model’s behavior with human values and preferences while minimizing biased or inappropriate outputs.

OpenAI’s models – like GPT-3.5 – are based on a transformer architecture, a type of neural network that excels in processing sequential data such as text. Transformers utilize self-attention mechanisms that allow the model to weigh and focus on different parts of the input sequence during processing, enabling it to capture long-range dependencies effectively.

It’s important to note that while GPT models like ChatGPT can generate responses and provide information, they do not possess true understanding or consciousness. They rely on statistical patterns and associations learned from the training data. This is why quality prompts and having an editorial process to improve the quality of its output are so important.

Best practices for OpenAI prompts

OpenAI created best practices for prompts to provide users with guidelines and recommendations on how to interact with AI models like GPT-3.5 to maximize the usefulness and safety of the system. These best practices help users achieve more reliable and desired outputs while minimizing the risk of generating inappropriate or harmful content.

Improving your ChatGPT prompts can help:

  • Produce more accurate, useful, and ethical outputs
  • Enhance safety and avoid harmful content
  • Prevent mindless output and hallucinations

Here are several best practices to apply in your content generation process.

1. Use the latest model.

AI models evolve to address limitations and expand their capabilities. Newer versions may have additional features, increased context windows, or improved language understanding, allowing them to handle more complex or nuanced queries. You can access broader functionalities and better responses to diverse user inputs using the latest model.

The latest release is generally the most capable model; as of November 2022, the best options are the “text-davinci-003” model for text generation and the “code-davinci-002” model for code generation. See OpenAI’s documentation for the latest releases and what’s coming up in beta.

2. Begin your prompt with instructions and use ### or “”” to add context.

Using ### or “”” helps ChatGPT differentiate between the instruction and any additional information or context in your prompt. Delimiters are useful for providing extra details, setting the stage for a particular scenario, or introducing reference material. For example:

Write a summary of the following text.

Text: “””{text input here}”””

Incorporating markers and clear instructions improves the structure and organization of the prompt. Presenting the information in a well-defined manner makes it easier for the model to understand the user’s intent and generate relevant responses. This clarity reduces ambiguity and keeps the model focused on the specific requirements of the prompt.
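The delimiter tip above can be sketched as a small Python helper that keeps the instruction and the delimited text separate (the function name is illustrative, not part of any API):

```python
def build_summary_prompt(text: str) -> str:
    # Instruction first, then the source text fenced with triple quotes
    # so the model can tell the content apart from the command.
    return 'Write a summary of the following text.\n\nText: """' + text + '"""'

prompt = build_summary_prompt("Transformers process text with self-attention.")
```

The same pattern works with ### markers; what matters is that the boundary between instruction and content is unambiguous.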

Recommended reading: How to Use AI Writer for Marketing & SEO Content

3. Be specific and descriptive in initial and follow-up prompts.

It’s not uncommon that you’ll need to iterate on your outcome with follow-up prompts – in fact, ChatGPT rarely gets it just right on the first try. Give the algorithm specific feedback to help improve the next generation:

  • Ask the model to rewrite the text in a more professional voice.
  • Have it write from the perspective of an experienced healthcare professional.
  • Instruct the model to provide more details or clarification on a specific aspect of its response.
  • Encourage the model to consider alternative viewpoints or explore different solutions to a problem.  
  • Ask the model to explain the reasoning or steps for a particular answer. 
  • Request the model to support its response with information from specific sources or references. 
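In the chat format, this feedback loop looks like a running message list. The sketch below (with a placeholder for the model’s draft) shows how a specific follow-up builds on the first attempt instead of a vague “make it better”:

```python
# Each dict is one conversational turn; the final user turn gives the model
# targeted, actionable feedback. The assistant content is a placeholder.
conversation = [
    {"role": "user", "content": "Write a short overview of telehealth services."},
    {"role": "assistant", "content": "[first draft generated by the model]"},
    {"role": "user", "content": (
        "Rewrite that in a more professional voice, from the perspective of "
        "an experienced healthcare professional, and explain the reasoning "
        "behind your main claims."
    )},
]
```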

4. Provide examples and reference text, where possible.

Providing examples and reference text to ChatGPT can significantly improve its output and help guide its responses. Here’s how you can effectively utilize examples and reference text:

Formatting examples: When you want to instruct or demonstrate a specific format or desired response, you can provide one or more example inputs along with their corresponding outputs. You can format them as user-system message pairs, where the user message represents the input or query, and the system message represents the desired response. This helps ChatGPT understand the desired behavior and structure its output accordingly.

Example: 

[
  {"user": "What is the capital of France?"},
  {"system": "The capital of France is Paris."}
]

Clarifying desired behavior: You can use examples to clarify the behavior you expect from ChatGPT. You can help the model understand and generate appropriate responses in similar contexts by providing a range of example conversations that exhibit the desired behavior.

Example:
[
  {"user": "Tell me a joke."},
  {"system": "Why don't scientists trust atoms? Because they make up everything!"}
]

[
  {"user": "What is the weather like today?"},
  {"system": "I'm sorry, I don't have access to real-time weather data."}
]

Providing reference text: You can include specific reference texts or relevant sources of information to help ChatGPT generate accurate and reliable responses. Enclosing the reference text within triple quotes and explicitly mentioning it in the user message can guide the model to use that information for context or to generate more informed answers.

Example: 

[
  {"user": "Below is the reference article:\n\n\"Benefits of Exercise\"\nContent: [Insert article content here]"},
  {"user": "According to the article 'Benefits of Exercise' enclosed above, what are the advantages of regular physical activity?"},
  {"system": "As per the referenced article, regular exercise offers numerous benefits such as improved cardiovascular health, increased strength, and enhanced mental well-being."}
]
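A hypothetical helper that wires a question to its reference text in one prompt, using the triple-quote convention from earlier (the function name and wording are illustrative):

```python
def build_reference_prompt(question: str, title: str, article: str) -> str:
    # Name the source explicitly and fence its content so the model
    # answers from the supplied text rather than from memory.
    return (
        f'According to the article "{title}" enclosed below, {question}\n\n'
        f'Article: """{article}"""'
    )

prompt = build_reference_prompt(
    "what are the advantages of regular physical activity?",
    "Benefits of Exercise",
    "[Insert article content here]",
)
```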

Recommended reading: A Complete Guide to Wordtune, Your Personal AI Writing Assistant

Remember that while providing examples and reference text can help improve ChatGPT’s output, reviewing and verifying the generated responses for accuracy is essential, as AI models may occasionally produce incorrect or nonsensical information.

By effectively incorporating examples and reference text, you can guide ChatGPT toward generating responses that align more closely with your desired behavior and provide more informative and reliable answers.

5. Use a chain of reasoning to help ChatGPT arrive at the best response.

A chain of reasoning refers to a logical sequence of steps or arguments used to arrive at a conclusion or answer. In AI systems like GPTs, a chain of reasoning involves breaking down a problem or question into smaller, interconnected steps, where each step builds upon the previous one to lead to a well-supported solution.

When interacting with AI models, requesting a chain of reasoning means asking them to provide the final answer and the intermediate steps or thought processes that led to that answer. Explicitly requesting a step-by-step reasoning process encourages the AI system to think through the problem more thoroughly, promoting clearer and more reliable responses.

For example, if asked to solve a math problem, requesting a chain of reasoning would involve asking the AI to explain each mathematical operation or strategy it used to reach the final solution. This way, the AI system can showcase its logical reasoning abilities and provide a transparent explanation of how it arrived at the answer.
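One simple way to request that chain of reasoning is to append an explicit instruction to the question. A minimal sketch (the exact wording is a matter of taste):

```python
def with_reasoning(question: str) -> str:
    # Ask the model to show its intermediate steps before concluding.
    return (
        question
        + "\n\nWork through the problem step by step, showing each "
        "intermediate calculation, and only then state the final answer."
    )

prompt = with_reasoning("A train travels 120 km in 1.5 hours. What is its average speed?")
```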

6. Use intent classification to help the model understand the query’s purpose.

Intent classification lets the model identify the purpose behind a query and select the instructions that align with the user’s intent, improving the relevance and specificity of the generated responses.

Here’s an example of how intent classification can be used to identify the most relevant instructions for a user query:

Intent: Weather Inquiry

User Query: “What will the weather be like tomorrow in San Francisco?”

Instructions (Intent-specific):

  • Intent: Weather Inquiry
    • Instruction 1: “Provide the user with the forecasted temperature and precipitation for the queried location and date.”
    • Instruction 2: “Include additional details such as wind speed, humidity, or any significant weather events.”
    • Instruction 3: “If the requested location or date is unavailable, inform the user and suggest an alternative source for accurate weather information.”

In this example, an intent classification system identifies that the user query falls under the “Weather Inquiry” category. Based on this intent, the system selects the relevant response instructions. These instructions guide the AI model on what specific information to provide, ensuring the response is tailored to the user’s weather-related query.
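The routing pattern above can be sketched as a toy keyword-based classifier. A production system would use a trained model for the classification step, but the lookup from intent to intent-specific instructions works the same way (all names and keyword lists here are illustrative):

```python
# Map intents to trigger keywords and to intent-specific instructions.
INTENT_KEYWORDS = {
    "weather_inquiry": ("weather", "forecast", "temperature"),
    "joke_request": ("joke", "funny"),
}

INTENT_INSTRUCTIONS = {
    "weather_inquiry": (
        "Provide the forecasted temperature and precipitation for the "
        "queried location and date."
    ),
    "joke_request": "Respond with a short, family-friendly joke.",
}

def classify_intent(query: str) -> str:
    # Return the first intent whose keywords appear in the query.
    q = query.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(word in q for word in keywords):
            return intent
    return "general"

intent = classify_intent("What will the weather be like tomorrow in San Francisco?")
instruction = INTENT_INSTRUCTIONS.get(intent, "Answer the question helpfully.")
```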

Recommended reading: Are Humans Still More Creative Than Generative AIs? [Study]

7. Encourage the model to take its time.

Does this really work? OpenAI says so, and recommends it as a best practice when using ChatGPT. 

OpenAI recommends instructing the model to work out its own solution before rushing to a conclusion in scenarios where it is important to prioritize reasoning and accuracy over speed. By encouraging the model to take the time to think through a problem and arrive at a well-reasoned answer, you can potentially improve the reliability and quality of its responses.

In cases where the model may be prone to making reasoning errors or providing immediate but potentially incorrect answers, allowing it to work out its own solution can help mitigate those issues. By providing explicit instructions to engage in a chain of reasoning or outlining the required steps, you guide the model to think through the problem and arrive at a more reliable solution.

This approach can be particularly useful when accuracy and thoughtful reasoning are crucial, such as in complex problem-solving tasks, critical decision-making, or when generating detailed explanations. Instructing the model to reason its way toward an answer enables it to showcase its logical thinking abilities and provide a more robust response.

It’s important to note that while this approach can be effective, it may introduce some trade-offs regarding response time. Allowing the model to work out its own solution might take longer compared to immediate responses. Therefore, the decision to adopt this approach should consider the specific requirements of the task and strike a balance between response time and the need for accurate and well-reasoned answers.

8. Use leading words to point the model to specific patterns.

Using “leading words” is a technique to provide hints or cues to ChatGPT, nudging it towards generating responses that follow a specific pattern or style. By incorporating these leading words, you can guide the model’s output in a desired direction. Here are a few tips for using leading words effectively:

Specify desired format: Start your prompt with words that indicate the desired format or structure of the response. By using phrases like “List,” “Explain,” “Describe,” or “Compare,” you can direct the model to provide information in a particular format or style.

Suggest key points: Include leading words highlighting the key points or aspects you want the model to focus on. These words can guide the model to provide specific details or emphasize particular elements in its response.

Signal expected content: Use leading words to indicate the content you expect in the response. This can help the model align its output with the specific content category you seek, such as examples, statistics, definitions, or steps.

Set the context: Begin your prompt with leading words that establish a context or scenario. This helps the model generate responses relevant to that context, making the conversation more coherent and focused.

Frame with a perspective: Use leading words to frame the prompt with a particular perspective, viewpoint, or role. This can influence the model to respond from that perspective or adopt a specific tone or style.
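The leading-word idea lends itself to simple prompt templates, where the opening verb signals the expected shape of the answer before any content is given. A sketch with illustrative templates:

```python
# Each template leads with a verb that cues the response format.
LEADING_TEMPLATES = {
    "list": "List the top {n} {topic}.",
    "explain": "Explain {topic} to a complete beginner.",
    "compare": "Compare {a} and {b} in terms of {criterion}.",
}

prompt = LEADING_TEMPLATES["compare"].format(a="SQL", b="NoSQL", criterion="scalability")
```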

9. Use explicit system instructions to guide the model’s behavior.

Incorporating explicit system instructions can help shape the model’s behavior and generate more accurate responses. In the ChatGPT API, system instructions are messages sent with the “system” role; they provide high-level guidance that lets you influence the style, tone, or approach of the model’s responses.

For example:

System: “Provide a balanced response by highlighting both the physical and mental benefits of exercise.”

User: “What are the benefits of regular exercise?”

System instructions provide additional context and constraints to the model, ensuring it considers specific factors while generating responses. This technique can help the model produce more comprehensive, well-rounded, or contextually appropriate answers.
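In the Chat Completions message format, this kind of guidance is carried by a message with the system role placed ahead of the user’s question:

```python
# The system message sets behavior; the user message carries the question.
messages = [
    {"role": "system", "content": (
        "Provide a balanced response by highlighting both the physical "
        "and mental benefits of exercise."
    )},
    {"role": "user", "content": "What are the benefits of regular exercise?"},
]
```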

It’s important to note that system instructions should be used judiciously and in moderation. Overusing or overly constraining the system instructions may restrict the model’s creative ability and limit its capacity to generate diverse responses. Striking the right balance between user instructions and system instructions is key to achieving the desired outcome.

10. Ask if the model has anything more to add.

Asking the model if it missed anything or has more to add serves as a mechanism for error correction and encourages a more comprehensive response. Prompting it to reflect on its previous answers helps it identify gaps or omissions, correct or fill in missing information, generate more examples, or tackle the question from a different angle.

Asking the model to revisit its previous responses also fosters an iterative refinement process within the conversation: each round of feedback gives the model more context to work with, improving its output over the course of the session.

More helpful resources: