At the intersection of human ingenuity and artificial intelligence, the importance of well-structured prompts is frequently underestimated. Within this dynamic (and, at times, delicate) ecosystem, careful prompt craftsmanship serves as the linchpin, orchestrating a seamless collaboration between human cognition and machine learning algorithms. Not unlike a conductor directing an ensemble, judicious prompt structuring lays the foundation for AI systems to synchronize with human intent, facilitating the realization of innovative endeavors. And because so many interactions with Large Language Models (LLMs) take place as one-on-one digital chats, prompting generative AI (gen AI) models carefully is essential to obtaining accurate, tailored outputs.

Gartner predicts that more than 80% of enterprises will have used gen AI or deployed gen AI-enabled applications in production environments by 2026, up from less than 5% in 2023.[1] As gen AI adoption continues to accelerate, understanding proper prompt engineering structures and techniques is becoming increasingly important.

With this in mind, we discuss why the structure of an AI prompt is critical to the accuracy of AI outputs. Specifically, we look at how defining objectives, assigning roles, providing context, specifying the output format, and reviewing and refining each play a role in crafting effective prompts.

[Image: “salmon swimming in a river.” @Indian_Bronson, X (Twitter), 15 Mar. 2023, https://twitter.com/Indian_Bronson/status/1636213844140851203/photo/2. Accessed 3 Apr. 2024.]

Interacting with LLMs through a chatbot interface can be frustrating when the outputs fall short of expectations. However, the more detail and clarity given to the model, the more resources it has to understand and execute the task properly. In this context, “detail and clarity” means:

    1. Defining the objective

    2. Assigning roles and providing context

    3. Specifying the output format

    4. Reviewing and refining

1. Define the Objective
Some good questions to ask oneself before providing a prompt to the gen AI include: What needs to be done? What tone does it have to be in? What format do we need? A 2023 Stanford University study found that models are better at using relevant information that occurs at the very beginning or the end of the request.[2] Therefore, it is important to write prompts that are context-rich yet concise, and to place the most important instructions where the model is most likely to use them.
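
As a minimal sketch of this idea (the function and parameter names here are illustrative, not tied to any particular library), a prompt can be assembled so that the objective, tone, and output format appear at the very beginning and the core instruction is restated at the end:

# Illustrative sketch: state the objective, tone, and format up front and
# restate the core instruction at the end, since models tend to use
# information at the beginning and end of a request most reliably.
def build_prompt(objective: str, tone: str, output_format: str, details: str) -> str:
    return (
        f"Objective: {objective}\n"
        f"Tone: {tone}\n"
        f"Output format: {output_format}\n\n"
        f"Details:\n{details}\n\n"
        f"Reminder: {objective}"
    )

print(build_prompt(
    objective="Summarize this quarter's mortgage delinquency trends for executives.",
    tone="Formal and concise",
    output_format="Three bullet points of no more than 20 words each",
    details="Cover delinquency rates, interest-rate exposure, and regional differences.",
))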

2. Assign Roles and Provide Context
Arguably the most important part of prompting, providing context is critical because gen AI models cannot infer meaning beyond what the prompt gives them. Models also lack the years of experience necessary to grasp what is needed and what is not without explicit direction. The following principles are important to bear in mind:

Precision and Personalization: Providing detailed context and a clear role enables the AI system to deliver responses that are both accurate and tailored to individual user needs, preferences, and the specificity of the situation.

Delimiters, such as XML tags and angle brackets (<>), are a great way to separate instructions, data, and examples from one another. Think of XML tags as hashtagging on social media. For example, take the following request:

I want to learn about Mortgage Finance and its history.

What are some key institutions in the industry?
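
Wrapped in delimiters, that request could be assembled as in the short Python sketch below; the helper and tag names (instructions, context, question) are arbitrary choices for illustration, not a required schema:

# Illustrative sketch: wrap each part of the prompt in XML-style delimiters
# so the model can tell instructions, context, and questions apart.
def tag(name: str, content: str) -> str:
    return f"<{name}>\n{content}\n</{name}>"

prompt = "\n\n".join([
    tag("instructions", "Answer the question using only the context provided."),
    tag("context", "I want to learn about Mortgage Finance and its history."),
    tag("question", "What are some key institutions in the industry?"),
])
print(prompt)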

Efficiency and Clarity in Communication: By understanding its expected role, whether as a consultant, educator, or support assistant, an AI application can adjust its communication style, level of detail, and prioritization accordingly. This alignment not only streamlines interactions but also ensures that the dialogue is efficiently directed towards achieving the user’s goals, minimizing misunderstandings and maximizing productivity.

Appropriateness and Ethical Engagement: Knowledge of the context in which it operates and the nuances of its role allows an AI to navigate sensitive situations with caution, ensuring that responses are both appropriate and considerate. Moreover, this awareness helps uphold ethical standards in the AI’s responses, which is crucial for maintaining user trust and ensuring responsible use of the technology.
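
To make role assignment concrete, here is a hypothetical sketch using the common role/content chat-message convention that many LLM APIs accept; the role description and wording are invented for illustration, and the actual API call is omitted:

# Illustrative sketch: assign the model a role with a system-style message,
# then supply the user's request. The role/content structure follows the
# common chat-message convention; the API call itself is omitted.
messages = [
    {
        "role": "system",
        "content": (
            "You are a mortgage finance educator. Explain concepts plainly "
            "for a non-specialist audience and flag anything that needs "
            "professional review."
        ),
    },
    {
        "role": "user",
        "content": "What are some key institutions in the mortgage finance industry?",
    },
]
print(messages)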

3. Specify the Output Format
In crafting a prompt for AI text generation, specifying the output format is crucial to ensuring that the generated output is not only relevant, but also suitable for the intended purpose and audience or stakeholders. To this end:

  • Provide clear instructions that include details of the text’s purpose, the audience it’s intended for, and any specific points or information that should be included. Clear instructions help prevent ambiguity and ensure that the AI produces relevant and coherent output.
  • Set the desired tone, language, and topics so that the output is properly tailored to the business need or setting, whether it is an informative email or a summary of a technical report. Outlining specific topics, together with the tone and language, helps generate output that resonates with stakeholders at the appropriate level of formality and conveys its intended purpose clearly to the end user.
  • Define constraints (length, count, tools, terminology) to guide the AI’s text generation within predetermined boundaries. These constraints help ensure that the generated output meets the task’s requirements and is consistent with existing systems or workflows. They also cut review time and reduce the need for follow-up prompts.

  • Supply output examples. This is a great way to encompass all of the above techniques for specifying the output format. Examples serve as reference points for style, structure, and content, helping the AI understand the desired outcome more effectively. By providing a tangible example to the gen AI, a user increases the likelihood of a satisfactory result that aligns with expectations; a brief sketch combining these points follows this list.
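
As a rough sketch that pulls these points together (the scenario, figures, and wording below are invented purely for illustration), a format-specific prompt might look like this:

# Illustrative sketch: spell out purpose, audience, tone, constraints, and a
# sample of the expected output so the model has a concrete target to match.
prompt = """Purpose: status update email on the Q3 loan-portfolio review.
Audience: senior risk managers.
Tone: professional and direct; no marketing language.
Constraints: at most 120 words; use the term "delinquency rate", not "default rate".

Example of the expected format:
Subject: Q3 portfolio review - key findings
Body: two short paragraphs, findings first, next steps second.

Now draft the email using the findings below:
- The delinquency rate fell from 3.1% to 2.8%.
- Two regional portfolios remain above target thresholds."""
print(prompt)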

4. Review & Refine
Last, but by no means least, review the prompt before submitting it to the gen AI. Check that terminology and technical terms are used consistently throughout the prompt, and that formatting such as tags and bullet points is consistent, to avoid confusion in the responses. Make sure the prompt follows a logical flow and avoids repetition and unnecessary information, both to maintain the desired level of specificity and to keep the response from veering onto an undesired path.
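
A few lines of Python can also help with the mechanical side of this review. The sketch below is an illustrative helper of our own devising, not a standard tool; it flags unmatched XML-style delimiters and accidentally repeated lines before a prompt is submitted:

# Illustrative sketch: quick mechanical checks on a prompt before submission.
import re
from collections import Counter

def review_prompt(prompt: str) -> list[str]:
    issues = []
    # Unmatched XML-style delimiters can confuse the model about where a section ends.
    opens = Counter(re.findall(r"<(\w+)>", prompt))
    closes = Counter(re.findall(r"</(\w+)>", prompt))
    for name in opens.keys() | closes.keys():
        if opens[name] != closes[name]:
            issues.append(f"Unmatched tag: <{name}>")
    # Repeated lines are often copy-paste noise or redundant instructions.
    lines = [line.strip() for line in prompt.splitlines() if line.strip()]
    for line, count in Counter(lines).items():
        if count > 1:
            issues.append(f"Repeated line: {line!r}")
    return issues

print(review_prompt(
    "<context>History of mortgage finance.</context>\n"
    "<question>What are some key institutions in the industry?\n"
))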

As users navigate the complexities of AI integration, keeping these prompting structures in mind helps maximize AI’s potential while mitigating risks associated with misinformation.

Contact us to learn more about how we are helping our clients harness AI’s capabilities, informed by a strategic and mindful approach.


[1] “Gartner Says More than 80% of Enterprises Will Have Used Generative AI APIs or Deployed Generative AI-Enabled Applications by 2026.” Gartner, 11 Oct. 2023, www.gartner.com/en/newsroom/press-releases/2023-10-11-gartner-says-more-than-80-percent-of-enterprises-will-have-used-generative-ai-apis-or-deployed-generative-ai-enabled-applications-by-2026.

[2] Liu, Nelson F., et al. “Lost in the Middle: How Language Models Use Long Contexts.” July 2023, cs.stanford.edu/~nfliu/papers/lost-in-the-middle.arxiv2023.pdf.