Options
| Option | Description |
| --- | --- |
| Model | The name of the model that will be used with the prompt. |
| Temperature | The temperature to use with the model. The exact mathematical definition of temperature can vary depending on the model provider. |
| Max tokens | The maximum number of tokens to be included in the response. |
| Cache variations | The number of different variations to keep in the cache for this prompt. When the input data to the LLM is exactly the same, the prompt can be cached. By default only |
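To make the table concrete, here is a minimal sketch of how these options might be grouped in code. The class name, field names, and values below are illustrative assumptions, not the API of any particular SDK; they simply mirror the options described above.

```python
from dataclasses import dataclass

# Illustrative container for the options in the table above.
# Field names and defaults are assumptions, not a specific SDK's API.
@dataclass
class PromptOptions:
    model: str               # name of the model used with the prompt
    temperature: float       # sampling temperature; exact semantics vary by provider
    max_tokens: int          # upper bound on tokens in the response
    cache_variations: int    # number of cached variations kept for identical input

# Example: a low-temperature configuration for a short, repeatable completion.
options = PromptOptions(
    model="gpt-4o-mini",     # example model name only
    temperature=0.2,
    max_tokens=128,
    cache_variations=1,
)
print(options)
```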