# Conversational Structured Output Prompt
A structured output prompt is a prompt that produces a JSON object conforming to a specific, user-defined schema. In the conversational variant, the entire conversation history is fed to the LLM as input, and the content template becomes the system prompt for the model. The model then produces a JSON object matching the given schema.
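A minimal sketch of how the LLM input described above might be assembled. The function name, message format, and the way the history cap is applied are illustrative assumptions, not the product's actual API:

```python
# Illustrative sketch (not the actual implementation): the content template
# becomes the system prompt, followed by the most recent conversation events.

def build_llm_input(content_template: str, conversation: list, max_events: int) -> list:
    """Return the message list fed to the LLM: system prompt + recent history."""
    recent = conversation[-max_events:]  # honour the history-cap option
    return [{"role": "system", "content": content_template}] + recent

messages = build_llm_input(
    "Extract the user's order as JSON matching the schema.",
    [
        {"role": "user", "content": "I'd like two lattes."},
        {"role": "assistant", "content": "Anything else?"},
        {"role": "user", "content": "No, that's all."},
    ],
    max_events=2,  # only the two most recent events are included
)
```

With `max_events=2`, the first user turn is dropped and only the system prompt plus the last two events reach the model.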
## Options

| Option | Description |
| --- | --- |
| Model | The name of the model used to process the prompt. |
| Temperature | The sampling temperature passed to the model. Its exact mathematical definition varies by model provider. |
| Max tokens | The maximum number of tokens in the completion returned by the LLM provider. |
| Cache variations | The number of different variations kept in the cache for this prompt. When the input data to the LLM is exactly the same, the prompt can be cached. By default only |
| Max history events | The maximum number of conversation events included in the conversation history fed to the bot. |
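The options above could be expressed as a configuration object along these lines. The key names and values here are hypothetical placeholders; consult your options template for the actual keys:

```python
# Hypothetical option keys for illustration only; the real options
# template may use different names.
options = {
    "model": "gpt-4o",         # model used to process the prompt (assumed name)
    "temperature": 0.2,        # provider-specific sampling temperature
    "max_tokens": 512,         # cap on the completion length
    "cache_variations": 1,     # variations kept in the cache for this prompt
    "max_history_events": 20,  # conversation events included as input
}
```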
## Output

The output JSON object follows whatever schema you define in the output-schema editor.
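As a sketch of what "matching the schema" means, here is a toy schema and a simplified conformance check. The schema fields and the `conforms` helper are illustrative; a real setup would typically rely on a full JSON Schema validator:

```python
import json

# A minimal output schema in JSON Schema style (illustrative fields).
output_schema = {
    "type": "object",
    "required": ["intent", "confidence"],
    "properties": {
        "intent": {"type": "string"},
        "confidence": {"type": "number"},
    },
}

def conforms(raw: str, schema: dict) -> bool:
    """True if the raw LLM completion parses as JSON and has the required keys.

    Simplified stand-in for a full JSON Schema validator.
    """
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return False
    if schema.get("type") == "object" and not isinstance(obj, dict):
        return False
    return all(key in obj for key in schema.get("required", []))

completion = '{"intent": "order_coffee", "confidence": 0.93}'
```

Here `conforms(completion, output_schema)` returns `True`, while a completion that is not valid JSON, or is missing a required key, fails the check.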
## Properties

| Property | Value |
| --- | --- |
| type | LLM |
| needs conversation | true |
| uses content template | true |
| uses options template | true |
| customizable output schema | true |