AI Prompt Structuring — Does it Even Matter?

At the intersection of human ingenuity and artificial intelligence, the importance of well-structured prompts is frequently underestimated. Within this dynamic (and, at times, delicate) ecosystem, the careful craftsmanship of prompts serves as the linchpin of seamless collaboration between human cognition and machine learning algorithms. Not unlike a conductor directing an ensemble, judicious prompt structuring lays the foundation for AI systems to synchronize with human intent, thereby facilitating the realization of innovative endeavors. Because so many interactions with Large Language Models (LLMs) take place as 1:1 digital chats, careful prompting is essential to getting gen AI models to generate accurate and tailored outputs.

Gartner predicts that more than 80% of enterprises will have used generative artificial intelligence (gen AI) or deployed gen AI-enabled applications in production environments by 2026, up from less than 5% in 2023.[1] As gen AI adoption continues to accelerate, understanding proper prompt engineering structures and techniques is becoming increasingly important.

With this in mind, we are going to discuss how critical the structure of an AI prompt is to the accuracy of AI outputs. Specifically, we examine how defining objectives, assigning roles, providing context, specifying the output format, and reviewing and refining each play a role in crafting effective prompts.

[Image: @Indian_Bronson. "salmon swimming in a river." 15 Mar. 2023. X (Twitter), https://twitter.com/Indian_Bronson/status/1636213844140851203/photo/2. Accessed 3 Apr. 2024.]

Interacting with LLMs through a chatbot interface can be frustrating when users are faced with outputs that fall short of their expectations. However, the more detail and clarity given to the model, the more resources it has to understand and execute the task properly. In this context, “detail and clarity” means:

    1. Defining the objective

    2. Assigning roles and providing context

    3. Specifying the output format

    4. Reviewing and refining

1. Define the Objective
Some good questions to ask oneself before providing a prompt to the gen AI include: What needs to be done? What tone does it have to be in? What format do we need? A 2023 Stanford University study found that models are better at using relevant information that occurs at the very beginning or the very end of the request.[2] It is therefore important to write prompts that are context-rich and concise.
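As an illustrative sketch of these ideas (the function and field names below are our own, not part of any cited study), a prompt can state its objective, tone, and format up front and restate the objective at the end, where models attend best:

```python
# Hypothetical helper: assemble a prompt with the objective stated first
# and restated last, since models use information at the start and end of
# a request more reliably than information buried in the middle.

def build_prompt(objective: str, tone: str, output_format: str, details: str) -> str:
    return (
        f"Objective: {objective}\n"
        f"Tone: {tone}\n"
        f"Output format: {output_format}\n\n"
        f"{details}\n\n"
        f"Reminder: {objective}"
    )

prompt = build_prompt(
    objective="Summarize Q1 mortgage delinquency trends",
    tone="formal, suitable for a risk committee",
    output_format="three bullet points",
    details="Focus on 30-day and 60-day delinquency buckets.",
)
print(prompt)
```

The exact field names matter less than the discipline: every prompt answers the what, the tone, and the format before the details begin.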

2. Assign Roles and Provide Context
Arguably the most important part of prompting, providing context is critical because gen AI machines cannot infer meanings beyond the given prompts. Machines also lack the years of experience necessary to grasp the sense of what is needed and what is not without some explicit direction. The following principles are important to bear in mind:

Precision and Personalization: Providing detailed context and a clear role enables the AI system to deliver responses that are both accurate and tailored to individual user needs, preferences, and the specificity of the situation.

Delimiters, such as XML tags in angle brackets (< >), are a great way to separate instructions, data, and examples from one another. Think of XML tags as hashtags on social media.

For example:

 

<context>
I want to learn about Mortgage Finance and its history
</context>
<question>
What are some key institutions in the industry?
</question>

 

Efficiency and Clarity in Communication: By understanding its expected role, whether as a consultant, educator, or support assistant, an AI application can adjust its communication style, level of detail, and prioritization accordingly. This alignment not only streamlines interactions but also ensures that the dialogue is efficiently directed towards achieving the user’s goals, minimizing misunderstandings and maximizing productivity.

Appropriateness and Ethical Engagement: Knowledge of the context in which it operates, and the nuance of its role allows an AI to navigate sensitive situations with caution, ensuring that responses are both appropriate and considerate. Moreover, this awareness aids in upholding ethical standards in an AI’s responses — crucial for maintaining user trust and ensuring a responsible use of technology.
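As a minimal sketch of the delimiter technique described above, a prompt can be assembled programmatically (the tag names here are arbitrary labels of our choosing, not a required schema):

```python
# Sketch: XML-style tags separate the instructions, the supplied context,
# and the actual question, so the model never confuses one for another.

instructions = "Answer the question using only the provided context."
context = "Mortgage finance in the U.S. dates to the 1930s and beyond."
question = "What are some key institutions in the industry?"

prompt = (
    f"<instructions>\n{instructions}\n</instructions>\n"
    f"<context>\n{context}\n</context>\n"
    f"<question>\n{question}\n</question>"
)
print(prompt)
```

Keeping each section inside its own pair of tags makes it easy to swap out the context or question without rewriting the rest of the prompt.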

3. Specify the output format
In crafting a prompt for AI text generation, specifying the output format is crucial to ensuring that the generated output is not only relevant, but also suitable for the intended purpose and audience or stakeholders. To this end:

  • Provide clear instructions that include details of the text’s purpose, the audience it’s intended for, and any specific points or information that should be included. Clear instructions help prevent ambiguity and ensure that the AI produces relevant and coherent output.
  • Set the desired tone, language, and topics so that the output is properly tailored to a business need or setting, whether it is an informative email or a summary of a technical report. Outlining specific topics, together with the tone and language settings, helps generate output that resonates with stakeholders at the appropriate level of formality and makes its intended purpose clear to the end user.
  • Define constraints (length, count, tools, terminology) to help guide the AI’s text generation process within predetermined boundaries. These constraints ensure that the generated output meets the task’s requirements and is consistent with existing systems or workflows. It also minimizes review time and reduces the possibility of submitting additional prompts.

  • Supply output examples. This is a great way to encompass all the above tricks for specifying the output format. Examples serve as reference points for style, structure, and content, helping the AI understand the desired outcome more effectively. By providing a tangible example to the gen AI, a user increases the likelihood of achieving a satisfactory result that aligns with expectations.
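The points above can be collected into a format specification appended to the request. A sketch, with entirely hypothetical field names and values:

```python
# Sketch: an explicit output-format specification appended to the request.
# Audience, tone, structure, and constraints are all placeholders.

format_spec = {
    "audience": "senior risk managers",
    "tone": "formal",
    "structure": "JSON with keys 'summary' and 'key_points'",
    "constraints": "summary under 100 words; exactly 3 key points",
}

prompt = "Summarize the attached report.\n" + "\n".join(
    f"- {key}: {value}" for key, value in format_spec.items()
)
print(prompt)
```

Requesting a machine-readable structure (here, JSON with named keys) is especially useful when the output feeds a downstream system rather than a human reader.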

4. Review & Refine
Last, but not least, review the prompt before submitting it to the gen AI. Check for consistent use of terminology and technical terms throughout the prompt, and for consistent formatting (tags, bullet points), to avoid confusion in the responses. Make sure the prompt follows a logical flow and avoids repetition and unnecessary information, both to maintain the desired level of specificity and to avoid skewing the response down an undesired path.
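Parts of this review can even be mechanized. One simple pre-submission check, sketched below with a function name of our own invention, verifies that any XML-style delimiters in the prompt are balanced:

```python
import re

# Sketch: flag XML-style tags that are opened but never closed (or vice
# versa) before the prompt is submitted to the model.

def unbalanced_tags(prompt: str) -> list[str]:
    counts: dict[str, int] = {}
    for closing, name in re.findall(r"<(/?)([a-zA-Z_]+)>", prompt):
        counts[name] = counts.get(name, 0) + (-1 if closing else 1)
    return [name for name, n in counts.items() if n != 0]

assert unbalanced_tags("<data>rates</data>") == []
assert unbalanced_tags("<data>rates</data><ask>") == ["ask"]
```

A check like this catches the kind of formatting slip that silently degrades a response long before a human reviewer would notice it.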

As users navigate the complexities of AI integration, remembering these prompting structures ensures maximization of AI’s potential while mitigating risks associated with misinformation.

Contact us to learn more about how we are helping our clients harness AI’s capabilities, informed by a strategic and mindful approach.


[1] “Gartner Says More than 80% of Enterprises Will Have Used Generative AI Apis or Deployed Generative AI-Enabled Applications by 2026.” Gartner, 11 Oct. 2023, www.gartner.com/en/newsroom/press-releases/2023-10-11-gartner-says-more-than-80-percent-of-enterprises-will-have-used-generative-ai-apis-or-deployed-generative-ai-enabled-applications-by-2026.

[2] Liu, Nelson F., et al. “Lost in the Middle: How Language Models Use Long Contexts.” July 2023, cs.stanford.edu/~nfliu/papers/lost-in-the-middle.arxiv2023.pdf.


Snowflake and the Future of Data Sharing Across Financial Institutions

The digitization of the financial services industry has opened countless doors to streamlining operations, building customer bases, and more accurately modeling risk. Capitalizing on these opportunities, however, requires financial institutions to address the immense data storage and sharing requirements that digitization requires.  

Recognizing this need, Snowflake has emerged as an industry-leading provider of cloud-computing services for the financial industry. According to estimates, some 57 percent of financial service companies in the Fortune 500 have partnered with Snowflake to address their data needs.1 In this article, we highlight some of Snowflake’s revolutionary data sharing capabilities that have contributed to this trend and RiskSpan’s decision to become a Snowflake partner.     

Financial institutions contemplating migration to the cloud are beset by some common concerns. Chief among these are data sharing capabilities and storage costs. Fortunately, Snowflake is well equipped to address both. 

Data Sharing Between Snowflake Customers

Ordinarily, sharing information across institutions inflates storage costs and raises security and data integrity concerns.

Snowflake’s Secure Data Sharing eliminates these concerns because no physical data transfer occurs between accounts. When one Snowflake customer desires to share data with another Snowflake customer, a services layer and metadata store facilitate all sharing activities. As a result, shared data does not occupy any storage in the institution consuming the data, nor does it impact that institution’s monthly data storage expenses. Data consumers are only charged for the compute resources, such as virtual warehouses, they use to query the shared data.  

The setup for Secure Data Sharing is streamlined and straightforward for data providers, while consuming institutions can access shared data almost instantaneously.   

Organizations can easily: 

  • Establish a share from a database within their account, granting access to specified objects within that database.  
  • Share data across multiple databases, provided all databases are under the same account.  
  • Add, remove, and edit access for all users. 
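For illustration, a provider’s share setup boils down to a handful of Snowflake SQL statements, sketched here as strings assembled in Python. The database, schema, table, and account names are placeholders; consult Snowflake’s documentation for the authoritative syntax:

```python
# Sketch: the SQL a data provider might run to create a Secure Data Share,
# grant access to specific objects, and attach a consumer account.
# All identifiers below are hypothetical.

def share_setup_sql(share: str, database: str, schema: str,
                    table: str, account: str) -> list[str]:
    return [
        f"CREATE SHARE {share};",
        f"GRANT USAGE ON DATABASE {database} TO SHARE {share};",
        f"GRANT USAGE ON SCHEMA {database}.{schema} TO SHARE {share};",
        f"GRANT SELECT ON TABLE {database}.{schema}.{table} TO SHARE {share};",
        f"ALTER SHARE {share} ADD ACCOUNTS = {account};",
    ]

for statement in share_setup_sql("loan_share", "analytics_db", "public",
                                 "loan_tapes", "partner_org.partner_acct"):
    print(statement)
```

Note the order: the share is created empty, access is granted object by object (database, then schema, then table), and only then is a consumer account attached.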

Data Sharing with Non-Snowflake Customers

For institutions desiring to share data with non-Snowflake customers, Snowflake offers an alternative secure data sharing method, known as a “reader account.” Reader accounts offer an efficient and cost-effective solution for data sharing without requiring consumers to register for Snowflake. They are associated exclusively with the provider’s account that established them. Data providers share databases with reader accounts, but each reader account can only access data from its originating provider account. Individuals using a reader account can perform queries on shared data but are restricted from carrying out DML operations, such as data loading, insertions, updates, and other data manipulations. These accounts serve as cost-effective solutions for organizations seeking to limit the number of more expensive user profiles. 
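As a sketch, creating a reader account is a single statement on the provider side; the account name and administrator credentials below are placeholders, and Snowflake’s documentation remains the authoritative reference:

```python
# Sketch: the SQL a provider might run to create a reader account for a
# consumer who is not a Snowflake customer. All values are placeholders.

def reader_account_sql(name: str, admin_name: str, admin_password: str) -> str:
    return (
        f"CREATE MANAGED ACCOUNT {name} "
        f"ADMIN_NAME = '{admin_name}', "
        f"ADMIN_PASSWORD = '{admin_password}', "
        f"TYPE = READER;"
    )

print(reader_account_sql("partner_reader", "reader_admin", "<placeholder>"))
```

Once created, the reader account is shared with like any other consumer account, but its users can only query; DML operations are unavailable to them.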

Secure Sharing with Data Clean Rooms

Clean room managed accounts are another way for Snowflake customers to share data with non-Snowflake customers. Data clean rooms are created by data providers to avoid privacy concerns when sharing their data. This is accomplished by allowing data consumers to compile aggregated results and analysis without permitting access to query the original raw data. Data providers can granularly control how their data is accessed and the types of analysis that can be run using their data. The data is encrypted and uses differential privacy techniques for further protection.   

How Can RiskSpan Help?

Knowing that you want to be on Snowflake isn’t always enough. Getting there can be the hardest part, and many organizations face challenges migrating from legacy systems and lack the expertise to fully utilize new technology after implementation. RiskSpan has partnered with numerous companies to help guide them toward a sustainable framework that holistically addresses all their data needs. No matter where an organization is in its data journey, RiskSpan has the expertise to help overcome the challenges associated with the new technology.

RiskSpan is equipped to help institutions with the following as they embark on their Snowflake migration journey: 

  • End-to-end migration services, including architecture design, setting up the Snowflake environment, and properly validating the new platform.   
  • Adaptive project management. 
  • Data governance including the creation of a data catalog, tracing data lineage, and compliance and security requirements. 
  • Establishing data warehouses and data pipelines to facilitate collaboration and analysis. 
  • Creating security protocols including role-based access controls, disaster recovery solutions, and ensuring the utmost protection of personally identifiable information.   
  • Optimizing extract, transform, and load (ETL) solutions.

Snowflake’s data sharing capabilities offer an innovative solution for businesses looking to leverage real-time data without the hassle of traditional data transfer methods. These features not only enhance operational efficiency but also provide the scalability and security necessary for handling extensive datasets in a cloud environment.

Contact us with any questions or to discuss how Snowflake can be tailored to your specific needs.

