Conversational System Prompt
A conversational system prompt is used to generate the response the agent should give, based on the current state of the conversation. It is equivalent to applying a system prompt to the current conversation state with all tools disabled, so only the agent's text response is returned.
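As a minimal sketch of the idea, assuming an OpenAI-style chat API (the client, model name, and function below are illustrative, not part of the platform):

```python
# Minimal sketch: prepend the system prompt to the conversation history
# and request a completion with no tools, so only text can come back.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def conversational_response(system_prompt, history, model="gpt-4o-mini",
                            temperature=0.7, max_tokens=256, max_events=20):
    messages = [{"role": "system", "content": system_prompt}]
    messages += history[-max_events:]  # keep only the most recent events
    # No `tools` argument is passed, so tool calling is disabled and the
    # completion is plain text.
    completion = client.chat.completions.create(
        model=model,
        temperature=temperature,
        max_tokens=max_tokens,
        messages=messages,
    )
    return completion.choices[0].message.content
```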
Options
| Option | Description |
| --- | --- |
| Model | The name of the model that will be used to process the prompt. |
| Temperature | The temperature to use with the model. The exact mathematical definition of temperature can vary depending on the model provider. |
| Max tokens | The maximum number of tokens to include in the completion from the LLM provider. |
| Cache variations | The number of different variations to keep in the cache for this prompt. When the input data to the LLM is exactly the same, the prompt can be cached. By default, only one variation is kept. |
| History limit | The maximum number of conversation events to include in the conversation history data fed to the bot. |
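To make the options concrete, here is how they might map onto the sketch above; the keyword names are illustrative, not the platform's actual option keys:

```python
history = [
    {"role": "user", "content": "Hi, I'd like to change my booking."},
    {"role": "assistant", "content": "Of course. What would you like to change?"},
    {"role": "user", "content": "Move it to next Friday, please."},
]
reply = conversational_response(
    "You are a polite booking assistant for Acme Travel.",
    history,
    model="gpt-4o-mini",  # Model
    temperature=0.3,      # Temperature
    max_tokens=200,       # Max tokens
    max_events=10,        # History limit
)
print(reply)
```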
Output
| Output | Description |
| --- | --- |
| Response | The text containing the LLM's response to be said back to the user. |
| Raw prompt | The raw text of the prompt that was sent to the LLM provider. |
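A hypothetical shape for these two outputs, with illustrative field names (the platform's actual keys may differ):

```python
result = {
    # Text to be said back to the user.
    "response": "Sure, I've moved your booking to next Friday.",
    # Raw prompt text exactly as it was sent to the LLM provider,
    # useful for debugging what the model actually saw.
    "raw_prompt": (
        "system: You are a polite booking assistant for Acme Travel.\n"
        "user: Hi, I'd like to change my booking.\n"
        "assistant: Of course. What would you like to change?\n"
        "user: Move it to next Friday, please."
    ),
}
```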
Properties
| Property | Value |
| --- | --- |
| type | LLM |
| needs conversation | true |
| uses content template | true |
| uses options template | true |
| customizable output schema | false |