Ranked Selection One Off Prompt

A ranked selection is a type of prompt in which we ask an LLM to select from among several options and also return a numerical indication of its confidence in each one. Note that this does not produce statistically valid probabilities: models tend to be wildly over-confident in their choices when prompted this way. Nevertheless, the scores give a good indication of whether the model is having a hard time deciding between several of the options.
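The exact scoring mechanism is not specified here, but one common way to derive confidence scores like these is to normalize the log-probabilities the model assigns to each option. The Python sketch below illustrates that idea; the function name and the raw log-probability values are illustrative assumptions, not part of this prompt's API.

```python
import math

def softmax_confidences(option_logprobs: dict[str, float]) -> dict[str, float]:
    """Normalize raw per-option log-probabilities into confidence scores
    that sum to 1. These are relative preferences, not calibrated
    probabilities: the raw scores are typically over-confident."""
    max_lp = max(option_logprobs.values())
    exps = {opt: math.exp(lp - max_lp) for opt, lp in option_logprobs.items()}
    total = sum(exps.values())
    return {opt: e / total for opt, e in exps.items()}

# Hypothetical log-probabilities the model assigned to each option.
raw = {"red": -0.2, "green": -2.1, "blue": -2.3}
scores = softmax_confidences(raw)
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)   # ['red', 'green', 'blue']
print(scores)   # red dominates, but green vs. blue is close
```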

Options

options

A list of strings representing each of the options that the AI is allowed to choose from.

model_name

The name of the model that will be used to process the prompt.

temperature

Temperature to use with the model. The exact mathematical definition of temperature can vary depending on the model provider.

max_tokens

The maximum number of tokens to be included in the completion from the LLM provider.

cache_variants

The number of different variations to keep in the cache for this prompt. When the input data to the LLM is exactly the same, the prompt can be ‘cached’. By default only 1 variant is kept, but if your prompt is meant to do something like creative story writing or brainstorming, you may want to increase the number of variants to, say, 100 or 10,000, effectively disabling the cache system.
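To make the inputs concrete, here is a hypothetical configuration that exercises each of the options above. The dictionary layout and the model identifier are assumptions for illustration; consult your prompt definition for the actual syntax.

```python
# Hypothetical configuration for a ranked selection prompt; the field
# names mirror the options documented above, but everything else is
# illustrative only.
config = {
    "options": ["approve", "reject", "escalate"],
    "model_name": "gpt-4o",   # assumed model identifier
    "temperature": 0.0,       # deterministic selection is typical here
    "max_tokens": 16,         # the completion only needs to name an option
    "cache_variants": 1,      # default: identical inputs hit the cache
}
```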

Output

text

This will be a string containing the option that was selected by the model.

ranked_options

This will be a list of strings containing all of the options, sorted by the model's confidence that each option is the correct answer.

ranked_indexes

This will be a list of integers containing indexes into the original options list, sorted in the same order as ranked_options, with the index of the highest-ranked option first.

confidence_scores

This will be a dictionary containing each of the options as keys, mapped to its calculated confidence score as the value.
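Putting the output fields together, a run over the options ["red", "green", "blue"] might produce something like the following. The concrete values are invented for illustration; only the field names come from the documentation above.

```python
# Illustrative output, assuming the model picked "red"; field names
# match the outputs documented above, values are hypothetical.
result = {
    "text": "red",
    "ranked_options": ["red", "green", "blue"],
    "ranked_indexes": [0, 1, 2],
    "confidence_scores": {"red": 0.78, "green": 0.12, "blue": 0.10},
}

# ranked_indexes[0] points at the selected option in the original list.
options = ["red", "green", "blue"]
assert options[result["ranked_indexes"][0]] == result["text"]
```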

Properties

type: LLM
needs conversation: false
uses content template: true
uses options template: true
customizable output schema: false
