
Revealing Variable Values

You can insert the value of any variable into the prompting for any stage, or into the agent-wide prompting.

  • To insert a variable into a prompt, type the # (pound or hash) key, which brings up the list of available variables so you can select the one you wish to insert.

  • The placeholder inserted into the prompting is replaced with the current value of that variable at the time of each turn, ensuring that the value shown to the LLM is up to date.

  • You can embed variable values either to parameterise the prompting or to make that data available to the agent to inform its behaviour, as sketched below.
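
The platform handles this substitution automatically, but conceptually it works like the Python sketch below. The substitute_variables helper and the session values shown are hypothetical, included only to illustrate how a #placeholder is resolved at the time of each turn:

    import re

    def substitute_variables(prompt_template, session_variables):
        # Hypothetical sketch of the substitution performed before each turn.
        def replace(match):
            name = match.group(1)
            # Leave unknown placeholders untouched rather than failing.
            return str(session_variables.get(name, match.group(0)))
        return re.sub(r"#(\w+)", replace, prompt_template)

    # Values captured earlier in the session (illustrative only)
    session_variables = {"user_name": "Alice"}
    print(substitute_variables("The user's name is #user_name.", session_variables))
    # -> The user's name is Alice.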

Examples:

  • You could pass in the user’s preferred language and then parameterise the prompting with something like “always respond in the user’s preferred language: #language” (the value for that particular session is inserted in place of #language).

  • Another example might be a shopping list that you collected from the user earlier in the conversation and that you later want the LLM to refer back to. To achieve this, you could insert the following into the prompting:

    “This is the user’s current shopping list: #shopping_list”
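
At runtime, assuming #shopping_list was captured earlier in the session (the value below is purely illustrative), the text the LLM actually receives would read:

    “This is the user’s current shopping list: apples, oats, oat milk”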

To improve the LLM's understanding of the purpose of your variable, it is best practice to accompany each embedded variable with explanatory text that makes clear why its value is being exposed, given the specific context and goal of that stage within the Agent.
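
For example, rather than embedding the variable on its own, a stage prompt might read (the wording is illustrative only):

    “The user has already told us what they want to buy. Use this list when checking item availability and suggesting alternatives. This is the user’s current shopping list: #shopping_list”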
