Documentation

Overview

iostack is a platform for building and deploying AI conversation agents that can predictably follow an authored conversational process to achieve specific outcomes.

Why use iostack?

iostack allows you to specify a predictable conversational process, maintaining a high level of control over simple or complex, multi-step or non-linear conversational processes.

Your agent will be able to navigate its way through your process while remaining coherent and robust to distractions, collecting and managing data for your use case.

How does iostack differ from legacy chatbot tools?

iostack differs from legacy chatbot authoring tools by leveraging the full power of Large Language Models (LLMs), allowing the model itself to respond, extract data and navigate its way around a conversation structure.

Leveraging the intelligence of the LLM instead of relying on rules and heuristics means that you no longer have to perform traditional chatbot authoring tasks such as configuring classical NLP entity/intent detection, building complex conditional logic and ensuring correct handling for all conversational alternatives.

The result is a far less brittle, much more robust conversational agent that is orders of magnitude easier - and far more fun - to build.

How does iostack differ from contemporary LLM Agentic tools?

Contemporary tools and processes for authoring or creating agentic systems assume that the user asks a well-formed, complex query and that the agent will then execute an unsupervised, potentially complex and inevitably slow sequence of steps to fulfil that request.

This may involve gathering extra data or knowledge from a knowledge base, calling functions, etc.

The autonomous agent, guided by its prompting, makes its own decisions about which tools and functions to use, which knowledge base to retrieve from, and so on.

iostack, in contrast, gives the author much more control by allowing them to specify a conversational structure that limits the behaviour and responses of the LLM while still allowing it to make its own decisions among the options available within that structure.

The structure, in other words, provides guide rails that constrain the set of possible paths the agent can follow. It does not, however, inhibit the LLM's ability to choose a conversational path, nor does it restrict the rich expressive power of the LLM and its inherent ability to remain coherent and robust in the face of distractions, digressions and trivia.

In addition, breaking processes up into discrete prompt context stages gives authors the opportunity to provide more salient prompting, stage-specific RAG retrieval, scoped-down data management, and closer control over what gets shown to the LLM, reducing the possibility that the LLM will start generating out-of-distribution tokens, i.e. hallucinating.
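The guide-rail idea described above can be illustrated generically: each stage carries its own scoped prompt and an explicit set of allowed next stages, and the model may choose freely, but only among those options. The sketch below is a minimal, hypothetical illustration of that pattern (with a stubbed-out model decision), not iostack's actual API or authoring format; all names are invented for the example.

```python
# Generic sketch of a staged conversational structure ("guide rails").
# Each stage has its own scoped prompt and an explicit list of reachable
# next stages. NOTE: this is NOT iostack's API; all names are illustrative.

STAGES = {
    "greet":         {"prompt": "Greet the user and ask what they need.",
                      "next": ["collect_name"]},
    "collect_name":  {"prompt": "Ask for and record the user's name.",
                      "next": ["collect_email", "done"]},
    "collect_email": {"prompt": "Ask for and record the user's email.",
                      "next": ["done"]},
    "done":          {"prompt": "Thank the user and close the conversation.",
                      "next": []},
}

def stub_llm_choice(stage: str, options: list[str]) -> str:
    """Stand-in for the LLM deciding the next stage; here it just picks the first option."""
    return options[0]

def run(start: str = "greet") -> list[str]:
    """Walk the structure, letting the 'model' choose only among allowed stages."""
    path, current = [start], start
    while STAGES[current]["next"]:
        choice = stub_llm_choice(current, STAGES[current]["next"])
        if choice not in STAGES[current]["next"]:  # guide rail: reject out-of-structure moves
            raise ValueError(f"{choice!r} is not reachable from {current!r}")
        path.append(choice)
        current = choice
    return path
```

In a real system, the stub would be replaced by an LLM call whose context contains only the current stage's prompt and options - which is precisely how scoping each stage keeps the prompting salient and the possible paths constrained.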

What skills do I need to use iostack?

iostack is a predominantly ‘no-code’ solution, so you will not need to know how to code.

If you already know how to write prompts for an LLM, you will find the way that iostack leverages the power of LLMs fairly intuitive. Even if you have no prompt engineering experience, our experience has been that you will rapidly become productive.

This is primarily because effective prompting does not rely on abstractions. Instead it will quickly become obvious that straightforward, plain language is the most effective way to communicate with the LLM to perform the process steps and data extraction you require.
