
Prompt engineering is the practice of carefully shaping the input a user provides to an LLM in order to influence the output the LLM generates in response. Prompt engineering can take some trial and error, but it isn’t hard to get started.

Here are some high-level tips for getting started with prompt engineering with ChatGPT: 

Context: Give ChatGPT as much context as you can about the situation in which your prompts and completions will be used. Do this often, as it will help ChatGPT better understand the intent of your request and generate more relevant completions.

Examples: Include examples of the kinds of completions you would like to receive when you create prompts.

Clear and Specific: You will receive better and more relevant completions from ChatGPT if your prompts are clear and specific about what they are asking for. For example, instead of asking ChatGPT to “write a blog post on productivity,” ask it to “write an informative blog post giving three productivity tips for remote work.” Unclear prompts can result in ambiguous responses.

Avoid Ambiguity: For example, “Tell me about dogs” is ambiguous; a better prompt is “Describe in detail the characteristics, behavior, and care required for domestic dogs.”

Keep it natural: Write your prompts in a natural, conversational tone in language that is easily understandable by ChatGPT. Avoid the use of technical jargon or complex sentence structures which might inadvertently confuse the AI model.


Detailed approaches to get the most from ChatGPT

Use System Messages: Begin interactions with a system message that sets the stage. For instance, if you’re using ChatGPT for medical information, start with a message like, “You are an assistant providing medical advice.” Don’t be afraid to be very specific about the domain and to set expectations up front. In UX design we would call this setup a “persona”: give ChatGPT a specific persona, including the role, the expertise level, the audience you are targeting, the tone of voice you expect, and so on.
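The persona-style system message above can be sketched with the chat message format used by the OpenAI API. The persona details below are illustrative placeholders, not a recommendation:

```python
# Build a persona-style system message in the OpenAI chat format.
# The role, experience, and audience here are example placeholders.
def build_persona_messages(user_prompt: str) -> list[dict]:
    system_message = (
        "You are a senior UX writer with 10 years of experience. "
        "Your audience is executive leadership; your tone is concise "
        "and professional."
    )
    return [
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_prompt},
    ]

messages = build_persona_messages("Draft an intro for our Q3 product update.")
```

The same messages list can then be reused across a whole batch of requests, so every completion starts from the same persona.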

Sequential Queries: When asking multiple questions, ensure they follow a logical sequence. This helps the model understand the progression of the conversation. Longer prompts can be broken down into steps. For example, it can help ChatGPT if you specify that the prompt is three steps long, and how ChatGPT should review and respond based on each step.

Explicitly State the Domain: If you’re discussing a specific topic, mention it. For example, “In the context of digital marketing, explain SEO.” Help ChatGPT understand the domain-specific goals in your prompt. We’ve found that it helps to give ChatGPT a goal within the domain that targets your objective. For example, if you are writing a blog post targeting executive leadership teams within a specific domain, let ChatGPT know.

Reference Previous Interactions: If you’ve had prior interactions with the model, refer back to them. “Continuing from our discussion about solar energy, tell me about solar panels.” This can be very helpful if you are working on multiple instances of a task. For example, if you are optimizing 20 blog posts for a company, referring back keeps each one reviewed and optimized in a similar way.

Provide Background Information: If a query might be ambiguous, give a brief background. “I’m working on a history project about ancient civilizations. Tell me about the Mayans.” We’ve found that asking ChatGPT to interview you is also an intuitive way to give it the background you expect it to understand. For example, “Ask me 10 questions to help you understand our background and goals.” ChatGPT is then effectively prompting you, which in our experience is much easier than trying to feed it information in an unstructured way.

Specify the Desired Format: If you want information in a particular format, state it. “List the steps in chronological order” or “Provide a summary.” This is an area ChatGPT keeps expanding, with charts, tables, diagrams, JSON, and other formats available. For copywriting tasks you can also specify your output goals precisely, like this:

MUST FOLLOW THESE INSTRUCTIONS IN THE ARTICLE:

1. Use the focus keyword in the SEO title.

2. Use the focus keyword in the SEO meta description.

3. Make sure the focus keyword appears in the first 10% of the content.

4. Make sure the focus keyword appears in the body content.

5. Make sure the content is 2,000 words long.

6. Use the focus keyword in the subheading(s).

7. Keep the keyword density at 1.30%.

8. Include at least one external link in the content.

9. Use a positive or a negative sentiment word in the title.

10. Use a power keyword in the title.

11. Use a number in the title.
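A checklist like this is easy to turn into a reusable prompt template so you don’t retype it per article. This is a minimal sketch; the rules shown and the topic/keyword names are examples, not a full implementation:

```python
# Fold a rule checklist into a reusable copywriting prompt.
# RULES holds a shortened example subset of the checklist above.
RULES = [
    "Use the focus keyword in the SEO title.",
    "Use the focus keyword in the SEO meta description.",
    "Make sure the content is 2,000 words long.",
]

def build_copywriting_prompt(topic: str, focus_keyword: str) -> str:
    numbered = "\n".join(f"{i}. {rule}" for i, rule in enumerate(RULES, start=1))
    return (
        f"Write an article about {topic}.\n"
        f"Focus keyword: {focus_keyword}\n\n"
        "MUST FOLLOW THESE INSTRUCTIONS IN THE ARTICLE:\n"
        f"{numbered}"
    )

prompt = build_copywriting_prompt("remote work", "productivity tips")
```

Keeping the rules in one list also means every article in a batch is held to exactly the same requirements.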

Use Keywords Strategically: If discussing a niche topic, sprinkle in some related keywords to give the model hints. “For a vegetarian, protein-rich diet, suggest some plant-based foods.”

Clarify Ambiguities: If a term or phrase has multiple meanings, clarify it. “I’m interested in Java, the programming language, not the island.”

Limit the Scope: If you’re looking for specific information, narrow down the scope of your query. “In the context of 20th-century literature, tell me about Ernest Hemingway.”

Feedback Loop: If the model’s response isn’t aligned with your expectations, provide feedback and rephrase your query with more context. We’ve found that with coding tasks it is very helpful to always build a feedback cycle into the prompt. For example, we asked ChatGPT to code schema markup for guided recipes, and we always asked it to review the schema for optimization opportunities and errors. Interestingly, we found that providing specific checks helps even more. For example, “review the schema markup for Google Rich Results requirements, and review the schema for Schema.org requirements” leads to different tolerances and checks.
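The review cycle above can be sketched as follow-up prompts, one per explicit check, that wrap the model’s previous output. The check wording mirrors the schema-markup example; the draft string is a placeholder:

```python
# Wrap a draft in one review prompt per explicit check, so each
# pass of the feedback loop targets a single set of requirements.
CHECKS = [
    "review the schema markup for Google Rich Results requirements",
    "review the schema for Schema.org requirements",
]

def build_review_prompts(draft: str) -> list[str]:
    return [
        f"Here is the current output:\n\n{draft}\n\n"
        f"Please {check}, list any errors, and return a corrected version."
        for check in CHECKS
    ]

prompts = build_review_prompts('{"@type": "Recipe"}')
```

Each returned prompt would be sent as its own follow-up message, with the corrected draft feeding the next check.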

Advanced ChatGPT Prompting

Limit Potential Biases: If you’re concerned about receiving a biased answer, you can instruct the model. For example, “Provide an unbiased overview of X.”

Ask for Sources or Verification: If you’re using tools like BrowserOp, you can ask the model to “Provide sources for the information” or “Verify the information from a trusted website.”

Iterative Questioning: Instead of asking everything in one go, break down your questions. Start with a general question and then delve deeper based on the model’s response.

Use Temperature and Max Tokens: If you’re using the OpenAI API, you can adjust the temperature (which controls randomness) and max tokens (which limits response length). A lower temperature like 0.2 makes the output more deterministic, while a higher value like 0.8 makes it more random.
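With the official OpenAI Python SDK, these would be keyword arguments to `client.chat.completions.create()`; the sketch below just assembles the request parameters so you can see where each setting goes. The model name is an example:

```python
# Assemble chat-completion parameters with the two knobs described
# above: temperature (randomness) and max_tokens (response length).
def build_request(prompt: str, deterministic: bool = True) -> dict:
    return {
        "model": "gpt-4o",  # example model name
        "messages": [{"role": "user", "content": prompt}],
        # Lower temperature (0.2) -> more deterministic output;
        # higher temperature (0.8) -> more varied output.
        "temperature": 0.2 if deterministic else 0.8,
        # Cap the length of the generated response, in tokens.
        "max_tokens": 256,
    }

request = build_request("Summarize the benefits of remote work.")
```

For factual or repeatable tasks (summaries, rewrites, schema checks) the low-temperature setting is usually the better default; save the higher value for brainstorming.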

