
model_name

The name of the model that will be used to process the prompt.

temperature

The sampling temperature to use with the model. The exact mathematical definition of temperature can vary between model providers.

max_tokens

The maximum number of tokens to include in the completion returned by the LLM provider.
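
Taken together, these three settings might appear in a step configuration roughly like the sketch below. The dict structure and the example values are assumptions for illustration; only the field names are documented here.

    # Sketch of the model settings above; the surrounding structure and
    # the example values are assumptions, only the field names are documented.
    prompt_config = {
        "model_name": "example-model",  # model used to process the prompt
        "temperature": 0.7,             # provider-specific sampling temperature
        "max_tokens": 512,              # cap on tokens in the completion
    }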

cache_variants

The number of different variations to keep in the cache for this prompt. When the input data to the LLM is exactly the same, the prompt result can be served from the cache. By default only one variant is kept, but if your prompt is meant for something like creative story writing or brainstorming, you may want to increase the number of variants to, say, 100 or 10,000, effectively disabling the cache.
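
As a rough mental model (not the actual implementation), the variant cache can be pictured as keying each request on the exact input plus a randomly chosen variant slot, with cache_variants assumed to be at least 1:

    import random

    # Mental-model sketch of cache_variants, not the real implementation.
    # With cache_variants = 1 an identical input always hits the same slot,
    # so repeats return the cached completion; with a large value, most
    # calls land in an empty slot and go to the LLM provider instead.
    _cache = {}

    def cached_completion(input_text, cache_variants, call_llm):
        slot = random.randrange(cache_variants)  # pick one of N variant slots
        key = (input_text, slot)
        if key not in _cache:
            _cache[key] = call_llm(input_text)   # cache miss: query the provider
        return _cache[key]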

max_history_events

The maximum number of conversation events to include in the conversation history data fed to the bot.
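
Conceptually this is a tail slice over the event list, as in the sketch below (assuming a value of 0 means no history is included):

    # Sketch: keep only the most recent events for the bot's context.
    def truncate_history(conversation_events, max_history_events):
        # Guard the zero case, since events[-0:] would return the whole list.
        if max_history_events <= 0:
            return []
        return conversation_events[-max_history_events:]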

custom_actions

Custom actions to include in the tool selection. Each must be provided with a smart_chain_binding_name that indicates which smart chain to execute for the action. The 'text' field on the output of that smart chain is used as the action result.
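
A hypothetical custom action definition might look like the sketch below. Only custom_actions and smart_chain_binding_name are documented here; the other field names and all values are illustrative assumptions.

    # Hypothetical custom action; field names other than
    # smart_chain_binding_name are illustrative assumptions.
    custom_actions = [
        {
            "action_id": "lookup_order",          # assumed identifier field
            "description": "Look up an order by its id",
            "smart_chain_binding_name": "order_lookup_chain",
        }
    ]
    # When the bot selects this action, the smart chain bound to
    # "order_lookup_chain" runs, and the 'text' field of its output
    # becomes the action result.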

disabled_action_ids

Actions to exclude from the tool selection, referenced by their action_ids. This list can include actions defined in the custom_actions section.

disabled_module_ids

Agent Modules to exclude from the tool selection, referenced by their module_ids. This list can include module_ids that are only defined in the custom_actions section.
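
Both exclusion fields are plain lists of identifiers. A sketch with hypothetical ids:

    # Sketch with hypothetical ids; both fields are plain lists.
    exclusions = {
        "disabled_action_ids": ["lookup_order"],  # may reference a custom_actions entry
        "disabled_module_ids": ["web_search"],    # hypothetical agent module id
    }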

Output

text

The text of the LLM's response, to be said back to the user.

prompt

The raw text of the prompt that was sent to the LLM provider.
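
Assuming the step's result is exposed as a dict-like object (an assumption; only the two field names are documented), consuming the output would look like this sketch:

    # Sketch: consuming the documented output fields from a step result.
    # The result shape and example values are assumptions for illustration.
    result = {
        "text": "Your order shipped yesterday.",
        "prompt": "...full prompt text sent to the provider...",
    }

    reply_text = result["text"]    # the response to say back to the user
    raw_prompt = result["prompt"]  # useful for debugging what the LLM saw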