A structured output prompt is a prompt that produces a JSON object conforming to a specific, predefined structure. In the one-off variant, the only data fed to the model is the prompt itself, prepared as a single user message. The model then produces the target JSON object.
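
As a rough sketch of the flow, assuming an OpenAI-style chat completions API (the client setup, model name, and prompt below are illustrative stand-ins, not this system's actual internals):

import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# One-off variant: the prompt is the only input, sent as a single
# user message with no surrounding conversation history.
prompt = "Extract the name and age from: 'Alice is 34.' Respond in JSON."

response = client.chat.completions.create(
    model="gpt-4o-mini",                      # model_name (illustrative)
    temperature=0.0,                          # temperature
    max_tokens=200,                           # max_tokens
    response_format={"type": "json_object"},  # ask for a JSON object
    messages=[{"role": "user", "content": prompt}],
)

result = json.loads(response.choices[0].message.content)
print(result)  # e.g. {"name": "Alice", "age": 34}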

Options

model_name

The name of the model that will be used to process the prompt.

temperature

The sampling temperature to use with the model. The exact mathematical definition of temperature varies between model providers.

max_tokens

The maximum number of tokens the LLM provider will include in the completion.

cache_variants

The number of distinct variations to keep in the cache for this prompt. When the input data to the LLM is exactly the same, the response can be served from the cache. By default only one variant is kept, but if your prompt is meant for something like creative story writing or brainstorming, you may want to increase the number of variants to, say, 100 or 10,000, effectively disabling the cache.
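
Conceptually, the variant cache might behave like the sketch below; the key scheme and function names are assumptions made for illustration, not the actual implementation:

import hashlib
import random

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for the real LLM call."""
    return '{"idea": "..."}'

def cached_completion(prompt: str, cache: dict, cache_variants: int = 1) -> str:
    # One cache slot per (prompt hash, variant index) pair. With
    # cache_variants=1, identical prompts always map to the same slot,
    # so the model is only called once per distinct prompt. A large
    # cache_variants spreads identical prompts across many slots,
    # making cache hits rare and responses effectively fresh each time.
    variant = random.randrange(cache_variants)
    key = (hashlib.sha256(prompt.encode()).hexdigest(), variant)
    if key not in cache:
        cache[key] = call_model(prompt)
    return cache[key]

cache: dict = {}
print(cached_completion("Brainstorm five product names", cache, cache_variants=100))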

Output

The output JSON object follows whatever schema you define in the output-schema editor.
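
For example, if the output-schema editor held a schema describing a person record (a JSON Schema dialect is assumed here purely for illustration), a conforming completion could be checked locally like this:

import jsonschema  # third-party: pip install jsonschema

# Illustrative schema, as it might be defined in the output-schema editor.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
}

# A completion conforming to that schema.
output = {"name": "Alice", "age": 34}

jsonschema.validate(instance=output, schema=schema)  # raises on mismatch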
