A ranked selection is a type of prompt in which an LLM is asked to select from among several options while also returning a numerical indication of confidence. Note that this does not yield a statistically valid probability: models tend to be wildly overconfident in their choices when using this type of prompting. Nevertheless, the scores give a useful indication of whether the model is having a hard time deciding between several options.
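One way to turn raw per-option scores from a model into a confidence indication is a softmax-style normalization, plus a check for a near-tie between the top two choices. This is a minimal sketch, not the actual implementation behind this prompt type; `rank_options` and its parameters are hypothetical names.

```python
import math

def rank_options(options, raw_scores, close_margin=0.1):
    # Softmax over the raw scores: produces pseudo-confidences that sum
    # to 1.0, but these are NOT calibrated probabilities.
    exps = [math.exp(s) for s in raw_scores]
    total = sum(exps)
    confidence = {opt: e / total for opt, e in zip(options, exps)}
    # Rank options from most to least confident.
    ranked = sorted(options, key=lambda o: confidence[o], reverse=True)
    # Flag a near-tie between the top two choices as "hard to decide".
    uncertain = (len(ranked) > 1
                 and confidence[ranked[0]] - confidence[ranked[1]] < close_margin)
    return ranked, confidence, uncertain
```

The `uncertain` flag is the practical payoff: even though the absolute numbers are overconfident, a small gap between the top two options is still a meaningful signal that the model could not decide.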
Options

| Option | Description |
| --- | --- |
| Options | A list of strings representing each of the options that the AI is allowed to choose between. |
| Model | The name of the model that will be used to process the prompt. |
| Temperature | The temperature to use with the model. The exact mathematical definition of temperature can vary depending on the model provider. |
| Max tokens | The maximum number of tokens to be included in the completion from the LLM provider. |
| Cache variations | The number of different variations to keep in the cache for this prompt. When the input data to the LLM is exactly the same, the prompt can be cached. By default, only one variation is kept. |
Output

| Field | Description |
| --- | --- |
| Selected option | A string containing which of the options the model selected. |
| Ranked options | A list of strings containing all of the options, ordered from most to least confident. |
| Ranked indexes | A list of integers containing indexes into the original options list, ordered from most to least confident. |
| Confidence scores | A dictionary containing each of the options as keys, mapped to the calculated confidence score as the value. |
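A concrete result shaped like the output fields above might look as follows. The dictionary key names and the numbers are made up for illustration; the invariant worth noting is that the ranked indexes always point back into the original options list.

```python
# Illustrative output of a ranked selection; field names are assumptions.
options = ["billing", "technical support", "sales"]
result = {
    "selected": "technical support",
    "ranked_options": ["technical support", "billing", "sales"],
    "ranked_indexes": [1, 0, 2],
    "confidence": {"technical support": 0.62, "billing": 0.25, "sales": 0.13},
}

# Sanity check: the indexes resolve to the ranked options in the same order.
assert [options[i] for i in result["ranked_indexes"]] == result["ranked_options"]
```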