Prompting
Prompting is the mechanism that allows the author to instruct the LLM how to behave and how to perform the tasks required of it.
- Prompting consists of plain language detailing the task at hand, how to respond, and so on.
- All aspects of leveraging the ability of the LLM to respond and make decisions for the author are communicated through prompting.
There are many prompt engineering techniques for getting better accuracy or more considered responses from the LLM. Almost all of them are useful when working with iostack.
If you have no experience with prompt engineering, fear not!
Creating effective LLM prompts is almost always a matter of writing plain, clear language that directly references the task and objectives at hand; no complex abstraction is required.
Most clear prompts work most of the time. Finessing them for greater accuracy in responses is a learned skill, akin to working with a gifted child, but you will be able to observe impressive Agent behaviours with basic beginner prompting.
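For example, a perfectly serviceable Stage prompt might be as simple as: "Ask the customer which date they would like to book, and confirm it back to them before moving on."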
How does iostack generate the final prompts presented to the LLM?
The prompts presented to the LLM are synthesized for each Stage of your Agent from two sources:
- Two Agent-wide components: a prefix prompt and a suffix prompt.
- The Stage's specific prompting component.
The Agent-wide prompt components bracket the Stage's component to form the resulting prompts.
Aspects of the Agent's state (its variables) that have been specified for showing to the LLM are also embedded in each Stage's prompting.
This synthesis enables consistent Agent-wide behaviour to be specified once, e.g. the basic character of the Agent, while still allowing Stage-specific prompting.
The synthesis also allows aspects of the Agent's state to be made available for update across all Stages of the Agent, or to be shown to the LLM at every turn, in every Stage.
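As a rough illustration, the sketch below mimics this synthesis in Python. The names used here (AgentPrompting, synthesize_stage_prompt, shown_state) and the exact layout of the final prompt are assumptions for the sake of the example, not iostack's actual API.

```python
from dataclasses import dataclass, field


@dataclass
class AgentPrompting:
    """Hypothetical container for the Agent-wide prompting components."""
    prefix: str                    # Agent-wide: e.g. the Agent's basic character
    suffix: str                    # Agent-wide: e.g. tone or formatting rules
    shown_state: dict[str, str] = field(default_factory=dict)  # state variables shown to the LLM


def synthesize_stage_prompt(agent: AgentPrompting, stage_prompt: str) -> str:
    """Bracket the Stage's prompt with the Agent-wide prefix and suffix,
    embedding any state variables that have been marked for showing to the LLM."""
    state_lines = "\n".join(f"- {name}: {value}" for name, value in agent.shown_state.items())
    state_block = f"Current state:\n{state_lines}\n\n" if state_lines else ""
    return f"{agent.prefix}\n\n{state_block}{stage_prompt}\n\n{agent.suffix}"


# The same prefix and suffix wrap every Stage, so Agent-wide behaviour is
# written once, while each Stage supplies only its own instructions.
agent = AgentPrompting(
    prefix="You are a cheerful booking assistant.",
    suffix="Always reply in plain English.",
    shown_state={"customer_name": "Alex"},
)
print(synthesize_stage_prompt(agent, "Ask the customer which date they would like to book."))
```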