Conversational Structured Output Prompt

A structured output prompt produces a JSON object that follows a specific, predefined structure. In the conversational variant, the entire conversation history is fed to the LLM as input. The content template becomes the system prompt for the model, and the model produces a JSON object matching the given schema.
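The flow described above can be sketched as follows. This is a hypothetical illustration, not the actual implementation: `build_messages` and `call_llm` are stand-in names, and a real deployment would send the messages to the configured model provider.

```python
import json

def build_messages(system_prompt, history, max_history_events=None):
    """Assemble the model input: the rendered content template becomes
    the system prompt, followed by the conversation history."""
    events = history[-max_history_events:] if max_history_events else history
    return [{"role": "system", "content": system_prompt}] + list(events)

def call_llm(messages):
    # Stand-in for a real LLM call; a real client would send `messages`
    # to the configured model and return its completion text.
    return '{"intent": "book_flight", "destination": "Lisbon"}'

messages = build_messages(
    "Extract the user's travel intent as JSON matching the schema.",
    [{"role": "user", "content": "I want to fly to Lisbon next week."}],
)
result = json.loads(call_llm(messages))  # the structured output object
```

The completion text is parsed as JSON, so the prompt's consumer works with a plain object rather than free-form text.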

Options

model_name

The name of the model that will be used to process the prompt.

temperature

Temperature to use with the model. The exact mathematical definition of temperature can vary depending on the model provider.

max_tokens

The maximum number of tokens to be included in the completion from the LLM provider.

cache_variants

The number of different variations to keep in the cache for this prompt. When the input data to the LLM is exactly the same, the prompt's completion can be cached. By default only one variant is kept, but if your prompt is meant for something like creative story writing or brainstorming, you may want to increase the number of variants to, say, 100 or 10,000, effectively disabling the cache.
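The caching behavior described above can be sketched as follows. This is a simplified, hypothetical model of a variant cache (the class and method names are illustrative): identical inputs reuse cached completions, and up to `cache_variants` distinct completions are kept per input.

```python
import random

class VariantCache:
    """Sketch of a variant cache: identical prompt inputs reuse cached
    completions, storing at most `cache_variants` variants per input."""
    def __init__(self, cache_variants=1):
        self.cache_variants = cache_variants
        self.store = {}

    def get(self, prompt_input, generate):
        variants = self.store.setdefault(prompt_input, [])
        if len(variants) < self.cache_variants:
            # Room for a new variant: call the model and cache the result.
            variants.append(generate(prompt_input))
            return variants[-1]
        # Cache full: serve one of the stored variants instead of
        # calling the model again.
        return random.choice(variants)

calls = 0
def generate(_):
    global calls
    calls += 1
    return f"completion-{calls}"

cache = VariantCache(cache_variants=1)
a = cache.get("same input", generate)
b = cache.get("same input", generate)
# With one variant, the second identical call is served from the cache.
```

With a large `cache_variants`, almost every call produces a fresh completion, which is the "effectively disabling the cache" behavior mentioned above.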

max_history_events

The maximum number of conversation events to include in the conversation history data fed to the bot.

Output

The output JSON object will be in whatever schema you define within the output-schema editor.
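As a hedged illustration, here is one schema you might define in the output-schema editor (the fields are invented for this example) and a minimal check that a completion conforms to it, using only the standard library:

```python
import json

# Hypothetical schema for a sentiment-extraction prompt.
schema = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string",
                      "enum": ["positive", "negative", "neutral"]},
        "confidence": {"type": "number"},
    },
    "required": ["sentiment", "confidence"],
}

# A completion from the model, as raw text.
completion = '{"sentiment": "positive", "confidence": 0.92}'
output = json.loads(completion)

# Minimal conformance check: required keys present, enum respected.
assert set(schema["required"]) <= output.keys()
assert output["sentiment"] in schema["properties"]["sentiment"]["enum"]
```

In practice a JSON Schema validator would enforce the full schema, but the shape of the result is the same: an object whose keys and types match what you defined.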

Properties

type

LLM

needs conversation

true

uses content template

true

uses options template

true

customizable output schema

true
