Options
| Option | Description |
| --- | --- |
| model_name | The name of the model used to process the prompt. |
| temperature | The sampling temperature passed to the model. The exact mathematical definition of temperature varies by model provider. |
| max_tokens | The maximum number of tokens the LLM provider may include in the completion. |
| cache_variants | The number of different variations to keep in the cache for this prompt. When the input to the LLM is exactly the same, the response can be served from the cache. By default only 1 variant is kept, but if your prompt is meant for something like creative story writing or brainstorming, you may want to increase the number of variants to, say, 100 or 10,000, effectively disabling the cache. |
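To illustrate how cache_variants behaves, here is a minimal sketch of a variant cache. The class name, method names, and option values are hypothetical; only the option names come from the table above. Identical prompt-plus-options inputs share one cache bucket, and up to cache_variants completions are stored per bucket, with a random stored variant returned on a cache hit.

```python
import random


class PromptCache:
    """Hypothetical sketch of the variant cache described above."""

    def __init__(self, cache_variants=1):
        self.cache_variants = cache_variants
        self._store = {}  # cache key -> list of cached completions

    def _key(self, prompt, options):
        # The same prompt with the same options maps to the same bucket.
        return (prompt, tuple(sorted(options.items())))

    def get(self, prompt, options):
        # On a hit, return one of the stored variants at random.
        variants = self._store.get(self._key(prompt, options))
        if variants:
            return random.choice(variants)
        return None

    def put(self, prompt, options, completion):
        # Store at most `cache_variants` completions per bucket; with a
        # large cache_variants (e.g. 10,000) nearly every new completion
        # is kept, so repeated calls keep producing fresh variants.
        variants = self._store.setdefault(self._key(prompt, options), [])
        if len(variants) < self.cache_variants:
            variants.append(completion)
```

With cache_variants=1 (the default), the first completion is pinned and every later call for the same input returns it; raising the limit lets multiple distinct completions coexist for the same prompt.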