A ranked selection is a type of prompt where we ask an LLM to select from among several options while also returning a numerical indication of confidence for each option.
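As a rough illustration of the idea (not the platform's actual implementation), a ranked selection can be handled by asking the model to return a confidence score per option and then sorting the parsed scores. The JSON shape, function name, and option labels below are all hypothetical:

```python
import json

def rank_options(response_text, options):
    """Parse a hypothetical model response of the form
    {"scores": {"option": 0.72, ...}} into a list of
    (option, confidence) pairs, highest confidence first."""
    scores = json.loads(response_text)["scores"]
    return sorted(
        ((opt, float(scores.get(opt, 0.0))) for opt in options),
        key=lambda pair: pair[1],
        reverse=True,
    )

# Example: a made-up model response scoring three made-up options.
response = '{"scores": {"refund": 0.72, "exchange": 0.21, "escalate": 0.07}}'
print(rank_options(response, ["refund", "exchange", "escalate"]))
```

The highest-scoring option is the selection, and the score itself serves as the numerical confidence.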
Options
| Option | Description |
| --- | --- |
| Model | The name of the model that will be used to process the prompt. |
| Temperature | The temperature to use with the model. The exact mathematical definition of temperature can vary depending on the model provider. |
| Max Tokens | The maximum number of tokens to include in the completion from the LLM provider. |
| Cache Variations | The number of different variations to keep in the cache for this prompt. When the input data to the LLM is exactly the same, the prompt can be cached. |
| Max Conversation Events | The maximum number of conversation events to include in the conversation history fed to the bot. |
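To make the options above concrete, here is a hypothetical configuration sketch. The key names and values are illustrative assumptions, not the platform's actual configuration keys or defaults:

```python
# Hypothetical ranked-selection options; every key name and value
# here is an illustrative assumption, not a documented default.
ranked_selection_options = {
    "model": "gpt-4o",              # model used to process the prompt
    "temperature": 0.0,             # low temperature for more repeatable rankings
    "max_tokens": 256,              # cap on the completion length
    "cache_variations": 1,          # variations kept in the cache for identical input
    "max_conversation_events": 10,  # history events included in the prompt
}

print(sorted(ranked_selection_options))
```

A low temperature is a common choice for selection-style prompts, since the goal is a consistent ranking rather than creative variation.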
Output
| Output | Description |
| --- | --- |
| Response | The text containing the LLM's response to be said back to the user. |
| Prompt | The raw text of the prompt that was sent up to the LLM provider. |