Purpose
This task is used to send a prompt to an AI service and receive the results.
Disclaimer: AI models can be unpredictable and are subject to variation in response, consistency, and reliability. AI models are not guaranteed to return accurate data.
Category Location: All, Data Services/Enrichment
Prerequisites
This task needs to be enabled in Openprise to be available to use. You must be a System Administrator in Openprise to enable.
Navigate to Administration and select System Settings
Select the Features tab
Click on the AskAI feature to turn it on and Click Save
Field Description
- Select data service: select the data service you've created in Openprise. Available data services are listed at the bottom of this article.
- Construct prompt string: use a combination of job attributes and text to construct a prompt to send to the AI service. Select attributes by typing @ in the box, and then select the attribute from the pick-list provided. You must include an attribute in the prompt string to get a result.
- Test Prompt (see below): select this option to test how the LLM responds to your prompt.
- Check for Prompt Injection: If this option is selected, an additional call to the AI data service specified in the task will be performed in which the provided text is evaluated for instances of prompt injection. There are 2 options: Remove prompt injection text and continue OR Do not process records flagged as having possible prompt injection.
There will be 4 new attributes created (xx indicates AI model used): xx_error, xx_success, xx_prompt_injection_test, xx__contains_prompt_injection. Example of Remove prompt injection text and continue
Example of Do not process records flagged as having possible prompt injection
- Prompt string attribute (optional): select the attribute that will be populated with the prompt string to get a result.
- Add Attribute (optional): create a new attribute that can be used for the Prompt String Attribute or Complete Result Attribute.
- Complete result attribute (optional): select an attribute from the dropdown to contain the complete model response as it is returned from the LLM. The attribute can only be a text-type attribute.
- Attribute (optional): select individual attributes to capture parts of the model response.
- Description: Type a description for each attribute.
NOTE: While both “Complete result attribute” and “Individual result attributes” fields are optional, at least one of the two must be specified in order to save the task.
Test Prompt
The Test Prompt pop-up consists of four components:
- Use Complete Result Attribute: If this checkbox is selected, in the “Response” section, you will see the complete model response as it would be returned to the “Complete Result Attribute” specified in the main task configuration.
- Use Individual Result Attributes: If this checkbox is selected, in the “Response” section, you will see the Individual Result Attributes and their corresponding values as they would be returned when the task is completed.
- Construct prompt string: Use a combination of job attributes and text to construct a prompt to send to the model. Select attributes by typing “@” in the box, and then selecting the attribute from the pick-list provided. You must include an attribute in the prompt string to provide sample data.
-
Response: When “Test Prompt” is selected, the results from the configuration you have entered in the pop-up will be displayed here. If you have both “Use Complete Result Attribute” and “Use Individual Result Attributes” selected, both results will appear here once a response is returned.
-
-
- NOTE: If both options are selected, it may take longer for the results to be returned.
-
-
Once a checkbox is selected and you have constructed a prompt string, the Sample Data section in the top left corner will be made visible.
- Sample Data Attribute: For every attribute that you specify in the “Construct prompt string” box (using “@” to select/specify attributes), a section will appear wherein you can enter a sample data value for that attribute. If multiple attributes are specified in the prompt, there will be a section for each wherein you can manually specify test data.
- Sample Data Value: type in sample data that best relates to the sample data attribute(s).
- Test Prompt: Test the prompt and corresponding results configuration that you have created in the pop-up window. Results will be displayed in the "Response" window.
- Populate Task: once you are happy with the response, select this option to apply configurations directly to the AskAI task template.
- Select Reset Test to reset the Test Prompt pop-up window.
- Select Close to close the Test Prompt pop-up window.
Advanced Configurations
- Max tokens: Specify the maximum tokens to include in the response. Any response longer than the maximum will result in an error. The largest value accepted is 4000, the minimum value is 1, the default is 2000. We recommend using the minimum number of tokens that are needed to complete your request to reduce model hallucinations. This configuration will NOT be considered if the data service uses a grounded model as they do not support specified token limitations.
-
Temperature: Specify a value for temperature. Acceptable values are from 0 (default) to 1. The lower the number, the more consistent the responses will be when given the same prompt.
-
-
- OpenAI GPT’s reasoning models do not support specifying temperature so this value will not be considered when you are using one of those models. It will not affect your ability to receive a response if this parameter is present.
-
-
Output Status Attributes
Use the following Openprise fields to review model outputs:
-
_success: Whether or not the request was successful in generating a response from the model.
-
- NOTE: If “Individual Result Attributes” are selected and the model is unable to return the response in the desired format, this will result in an error.
-
- _usage: This value represents the number of tokens used by the AI provider to generate a response. Note that for certain providers this may include “reasoning” tokens and may not represent the tokens that are used strictly for the final response itself.
- _timestamp: When the request was sent to the model provider.
- _citations: If a web-search enabled model was selected for use in the data service that the Ask AI task uses (i.e.: Gemini 2.5 with Web Search), a list of citations will be provided indicating what sources the model used to retrieve the returned information.
- _error: If an error occurred during the processing of the request, the text of the error message for a specific record will be displayed here.
- _insufficient_quota: If a quota is configured for the data service, this attribute will indicate whether or not the request was able to be executed given the established quota limits.
- _model_used: Model used within the data service that was specified when calling the AI provider.
Ask AI Responses
If the model response exceeds the ES character limit, the response will be truncated and "[TRUNCATED]" will be appended to the end of the response. The error field "_error" will display a message stating that the response was truncated because it exceeded the ES character limit.
NOTE: We suggest you use a chatbot to construct a prompt outside of Openprise before configuring this task. When constructing the prompt, try to provide specific instructions on how the LLM should format its response. Doing so means you'll have an easier time processing the response from this task template.
Example Use Cases:
- The Ask AI template is helpful when you have unstructured data that you want to summarize. An example is to send Ask AI the notes from a meeting or task and summarize the outcome or next steps.
- The Ask AI template is helpful when you have data values in an inconsistent format and want to extract specific values from the data. An example is to use Ask AI to pull specific information from the response of a Google Search query.
- Another example is to process job titles that can't be segmented using Openprise's Open Data. These titles may be in a language that isn't easily translated, or has terms that are not in the Open Data files. In this case, construct your prompt to Ask AI to classify the title and assign it the best category for job level, and provide the specific levels you need. For these prompts, it is often helpful to include a category of Other so titles containing junk data can be categorized as best as possible.
Prompt Construction Best Practices
To promote consistency and accuracy of the responses returned, there are several best practices you should consider following:
- Include clear, detailed instructions and examples of what information you expect the model will ingest and what it should return in response.
- Examples provide the model with additional context with which to understand what is being asked. This is referred to as multi-shot prompting. Please refer to the articles linked below to get a better understanding of how this works and how it can be used in practice:
- Outline in detail (or in your examples) exactly how you expect the response to look for the given input.
- It can be beneficial to include words such as “please” in your prompt.
Examples of Effective Prompts:
- Add 2 days to the following date in the format MM-DD-YYY: {{Date}}. For example, if the given date is 01-01-2102, the response you return should be 01-03-2102. Do not include any additional text in the response, only the date value.
- If the word or phrase after the ':' can be found within the following list ['Hello', 'Goodbye', 'What's up?'], return TRUE and if not return FALSE: {{String}}. For example, if the phrase is 'This is a string!', return FALSE. Please only return the boolean value TRUE or FALSE with no additional text.
- Change the digits before the first '.' in this IP to be '000'. This is the IP you need to change {{IP String}}. For example, if the IP is 0.0.0.0, the result you return should be 000.0.0.0 Do not return anything other than the modified IP value. Only return the IP address with no additional text.
Available Data Services