The Prospera Labs Flexi-Match Knowledge Base is a flexible system that gives your agent access to a large database of knowledge. It supports many different use-cases, from answering questions about products and services through to analyzing documents or situations against best-practices that have been uploaded.
Loading In Knowledge
There are many different ways to load knowledge into your agent; which one is best will depend largely on your agent's use-cases.
Uploading Documents
One of the simplest ways to load in information is to import documents prepared in conventional Office software, such as PDF or Microsoft Word documents.
You can upload documents by pressing the Import Document button in the top right.
This will bring you to the bulk import page where you can then upload documents either by dragging and dropping, or by pressing the upload button and selecting the files.
Please note! There have been some bugs reported with the bulk document upload page, particularly when uploading large numbers of documents at once. If you need to upload a large number of documents, we recommend working directly with the Prospera Labs team to find a solution.
Uploading Web Pages
You can similarly import information from web pages. This is done from the Imported Webpage menu.
Then press the “Import Web Page URL” button.
That will bring you to the import page. It's a bit terse, but you can paste the URL into the field provided.
How It Works
Here we will explain the basic process of how the knowledge base works in the default Q&A setup.
Document Ingestion
When a document of any type is ingested, the system goes through the following process (sketched in code after the list):
1. Convert the document into raw text.
2. Break the text into small sections based on the semantics and headers of the document.
3. For each section, generate a list of questions that the section of text answers. Almost like Jeopardy, we go backwards from the answer (contained in the document) to the questions.
4. Fetch the embedding vector for each of the questions.
5. Create a summary of the entire document, referred to internally as the "Qualifying Text" because it helps qualify whether a particular knowledge chunk is relevant.
6. Write the knowledge chunks out to the database.
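The internals of the ingestion engine are not exposed as a public API, but the steps above can be sketched in Python. In this sketch, `llm`, `embed`, and `db` are hypothetical stand-ins for the platform's model calls and storage, not real Prospera Labs interfaces, and step 1 (conversion to raw text) is assumed to have already happened:

```python
from typing import Callable, Dict, List

def ingest_document(
    text: str,                            # step 1: already converted to raw text
    llm: Callable[[str], str],            # hypothetical LLM call
    embed: Callable[[str], List[float]],  # hypothetical embedding call
    db: List[Dict],                       # stands in for the knowledge base's store
) -> None:
    # Step 2: break the text into sections. Blank lines are a crude
    # stand-in for the real semantic/header-aware chunker.
    sections = [s.strip() for s in text.split("\n\n") if s.strip()]

    # Step 5: one summary of the whole document -- the "Qualifying Text".
    qualifying_text = llm("Summarize this document:\n" + text)

    for section in sections:
        # Step 3: Jeopardy-style -- generate the questions this section answers.
        questions = llm("List the questions this text answers, one per line:\n" + section)

        for question in (q.strip() for q in questions.splitlines()):
            if not question:
                continue
            # Step 4: fetch the embedding vector for each generated question.
            # Step 6: write the knowledge chunk out to the database.
            db.append({
                "content": section,
                "matching_text": question,
                "qualifying_text": qualifying_text,
                "embedding": embed(question),
            })
```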
Querying
When you query the knowledge base, the system goes through the following steps (a code sketch follows the list):
1. The original raw query is transformed into a clean query. For the default knowledge base configuration, this means taking an ambiguous phrase, like "location of food", and turning it into the format of a question, such as "Where is the food located?", so that the text matches the format of the questions that were generated by the ingestion engine.
2. We look up the embedding vector for the transformed query.
3. We use the embedding vector to query the knowledge base and find the top K knowledge chunks whose matching text (a.k.a. the generated question) is closest to the query text.
4. We use a reranking algorithm to rerank the matched knowledge chunks against the query.
5. We discard all except the top 5 matching knowledge chunks.
6. The matching knowledge chunks are returned. For standard AI Agents, the matching knowledge chunks are then fed into the agent's conversation history as a tool result, which the agent can use to formulate and write its response.
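As with ingestion, here is a minimal Python sketch of the query path, assuming the same hypothetical `llm`, `embed`, and `db` stand-ins plus a `rerank` callable; the real system's retrieval and reranking are more sophisticated:

```python
import math
from typing import Callable, Dict, List

def cosine(a: List[float], b: List[float]) -> float:
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def query_knowledge_base(
    raw_query: str,
    llm: Callable[[str], str],            # hypothetical LLM call
    embed: Callable[[str], List[float]],  # hypothetical embedding call
    rerank: Callable[[str, str], float],  # hypothetical reranking step
    db: List[Dict],                       # chunks written during ingestion
    top_k: int = 20,
) -> List[Dict]:
    # Step 1: rewrite the raw query into the question format used at
    # ingestion, e.g. "location of food" -> "Where is the food located?"
    question = llm("Rewrite this as a question: " + raw_query)

    # Step 2: look up the embedding vector for the transformed query.
    qvec = embed(question)

    # Step 3: top K chunks whose matching-text embedding is closest to the query.
    for chunk in db:
        chunk["match_score"] = cosine(qvec, chunk["embedding"])
    candidates = sorted(db, key=lambda c: c["match_score"], reverse=True)[:top_k]

    # Step 4: rerank the matched chunks against the query.
    for chunk in candidates:
        chunk["rerank_score"] = rerank(question, chunk["content"])

    # Steps 5-6: keep only the top 5 and return them, to be fed into the
    # agent's conversation history as a tool result.
    candidates.sort(key=lambda c: c["rerank_score"], reverse=True)
    return candidates[:5]
```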
Customizing
The knowledge base can be connected to many different sources of data, combining them into a single homogeneous knowledge system. Sources of data include the documents and web-pages that you import through the user interface, but they may also include custom data schemas that you have created yourself, containing information that the bot has extracted from conversations (this gives the agent a 'memory' that persists between sessions). Eventually we may even allow third-party API services to become sources of data for the knowledge base.
Types of smart chains for knowledge base customization
Each source of data can be customized independently. There are six different smart-chains that you can modify to change the behavior of the knowledge base (hypothetical signatures for each are sketched after this list). Those are:
Chunker Chain - The chunker chain is responsible for taking a large, long document and breaking it up into individual sections of content. The default version breaks a document into sections based on formatting cues, e.g. using the existing headers and paragraphs as section breaks.
Matching Text Chain - The matching text chain is responsible for taking a chunk of content and transforming it into bits of text that are embedded and matched against when querying. The default form that matching texts take is a question, e.g. "What kind of food is available in Alaska?". However, this is not required, and some use-cases of the knowledge base call for querying based on other formats of text.
Qualifying Text Chain - The qualifying text chain is responsible for taking the chunk of content, along with the document as a whole, and creating a summary of the document together with a description of how the specific knowledge chunk fits into that larger document. E.g. if the whole document is on the subject of "Arctic Cuisine", then the qualifying text might be "This section describes Alaskan cuisine within the context of a larger document describing various arctic cuisines". The qualifying text provides contextual information so that the LLM can correctly interpret the content of the knowledge chunk, which loses some of its meaning when separated from the larger document.
Query Transformation Chain - The query transformation chain is responsible for taking the query provided by the user, which might take a variety of different forms, and transforming it into the same format that is produced by the Matching Text Chain. By default the matching texts take the format of a question, so the default query transformation chain takes whatever you type in and transforms it into a question. E.g. it might take an abstract bit of text like "location of food" and turn it into a proper question such as "Where can the food be found?"
Reranking Chain - The reranking chain is responsible for taking knowledge chunks that have been returned by the core knowledge base and assigning each a score between 0 and 1 for how well it matches the query. By default, this means the reranking system is effectively responsible for determining whether a given knowledge chunk actually answers the question provided by the user.
Filtering Chain - The final filtering chain is responsible for removing any knowledge chunks that should not be returned. It sets a cut-off for how good the match and rerank scores need to be for the knowledge chunks to be returned at all. In many situations, it's better for the database to return nothing than to return knowledge chunks that do not match the user's query closely enough, so the filtering chain is where you can customize this business logic. The default filtering chain sets a cutoff of 0.5 for both the match score and the rerank score.
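The exact interfaces of these smart chains are defined by the platform; as a mental model only, here are hypothetical Python signatures showing what each of the six chains consumes and produces:

```python
from typing import Dict, List, Protocol

class ChunkerChain(Protocol):
    def __call__(self, document_text: str) -> List[str]: ...           # document -> sections

class MatchingTextChain(Protocol):
    def __call__(self, chunk: str, qualifying_text: str) -> List[str]: ...  # chunk -> texts to embed

class QualifyingTextChain(Protocol):
    def __call__(self, chunk: str, document_text: str) -> str: ...     # chunk + document -> context summary

class QueryTransformationChain(Protocol):
    def __call__(self, raw_query: str) -> str: ...                     # user query -> matching-text format

class RerankingChain(Protocol):
    def __call__(self, query: str, chunk: str) -> float: ...           # -> score between 0 and 1

class FilteringChain(Protocol):
    def __call__(self, chunks: List[Dict]) -> List[Dict]: ...          # drop chunks below the cutoffs
```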
Pre-made Smart Chains for Knowledge Base
Identity Chains
For each of these smart chains, you can find an "identity" version of the chain that does nothing. You can find them by searching for the word "identity" in the smart chains.
These identity chains simply pass their input data straight through to their output data (a Python rendering follows the list).
Identity Chunker Chain - Keeps the entire content text as a single large block of text without breaking it apart
Identity Matching Text - Uses the entire content text as the matching text without transforming it
Identity Qualifying Text - Uses the entire content text as the qualifying text without transforming it
Identity Query Transformer - Passes the user's query through verbatim without transforming it
Identity Reranker - Assigns the highest rerank score of 1.0 to every knowledge chunk regardless of its contents
Identity Filterer - Does not filter anything; it passes all of the knowledge chunks through to its output
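In code terms, each identity chain is just a pass-through function. A hypothetical Python rendering (the reranker returns a constant rather than passing anything through):

```python
# Hypothetical Python equivalents of the identity chains.

def identity_chunker(document_text):           # one big chunk, no splitting
    return [document_text]

def identity_matching_text(chunk, qualifying_text):
    return [chunk]                             # embed the content text itself

def identity_qualifying_text(chunk, document_text):
    return chunk                               # no summary, just the content

def identity_query_transformer(raw_query):
    return raw_query                           # verbatim pass-through

def identity_reranker(query, chunk):
    return 1.0                                 # highest score, regardless of content

def identity_filterer(chunks):
    return chunks                              # filters nothing
```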
Premade Document Processing Chains
These are pre-made smart chains designed for processing long-form documents in a default Q&A style database. They can be found by searching for smart-chains with the prefix knowledge_base_document_ in the table view.
Document Chunker - the default document chunker will take the input text and break it apart into sections based on line breaks and semantic and formatting information. The goal of the default chunker is to break the document into small sections roughly analogous to what the original author might have viewed the sections of the document to be, e.g. wherever the original author put headers and section breaks.
IMPORTANT! The input document must contain newline characters. TODO: Fix this by automatically wrapping input text in the default chunker.
Document Qualifying Text - the default document qualifying text smart chain produces a summary of the entire document to use as the qualifying text for all the separate knowledge chunks which resulted from the document.
Document Matching Text - the default matching text smart chain is designed to take a chunk of content and come up with the questions that the content answers. It works almost like Jeopardy, taking an answer and going backwards to the questions. It is designed to generate as many questions for the given knowledge chunk as it can, and it takes into account the contextual information provided by the qualifying text. A hypothetical prompt in this style is sketched below.
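The platform's actual prompt is not documented here, but a hypothetical helper in the spirit of the Document Matching Text chain might look like this, where `llm` is a stand-in for the real model call:

```python
from typing import Callable, List

def generate_matching_questions(
    chunk: str,
    qualifying_text: str,
    llm: Callable[[str], str],   # hypothetical stand-in for the real model call
) -> List[str]:
    # Go "backwards" from the chunk (the answer) to the questions it
    # answers, using the qualifying text for document-level context.
    prompt = (
        "Context for this excerpt: " + qualifying_text + "\n\n"
        "Excerpt:\n" + chunk + "\n\n"
        "List, one per line, every question that this excerpt answers."
    )
    return [q.strip() for q in llm(prompt).splitlines() if q.strip()]
```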
Premade Query Processing Chains
These are pre-made smart chains that are designed for processing a query for a default Q&A style knowledge base. These pre-made smart chains can be found if you search for the prefix knowledge_base_basic_
Basic Query Transformer - The basic query transformer is designed for Q&A style knowledge bases. It takes whatever text the user provided for the query, e.g. "location of food", and turns it into a question in the same format that the Document Matching Text smart chain produces, e.g. "Where can I find the food?". By turning a potentially ambiguous statement into a specific question in the same format as the matching texts, we give the system the best chance of finding the correct knowledge chunks in the database.
Basic Reranker - The basic reranker is designed for Q&A style knowledge bases. It takes the content of the knowledge chunk and uses a Ranked Selection smart chain step to assess how well the content of the knowledge chunk answers the question. By using the LLM's predicted probabilities, it comes up with a final ranking score between 0 and 1. The scoring formula is calibrated so that a score of 0.5 corresponds roughly to a chunk that the model thinks has a 50% chance of answering the user's question (a sketch of this scoring follows the list).
Basic Filterer - The basic filterer is designed for Q&A style knowledge bases. It removes any knowledge chunks that have a match score lower than 0.5 or a rerank score lower than 0.5. This typically only eliminates knowledge chunks that are very obviously wrong.
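The exact Ranked Selection formula is internal to the platform. As an illustration only, here is one way a yes/no probability from an LLM could become a 0-to-1 rerank score, with the Basic Filterer's 0.5 cutoffs applied afterwards; `p_yes` is a hypothetical stand-in for the model's predicted probability:

```python
from typing import Callable, Dict, List

def basic_rerank(
    question: str,
    chunk: str,
    p_yes: Callable[[str], float],  # hypothetical: model's probability of answering "yes"
) -> float:
    # Ask whether the chunk answers the question and use the model's
    # predicted probability of "yes" as the score, so 0.5 corresponds
    # roughly to a 50% chance that the chunk answers the user's question.
    prompt = (
        f"Does this text answer the question '{question}'?\n"
        f"{chunk}\nAnswer yes or no."
    )
    return p_yes(prompt)

def basic_filter(chunks: List[Dict]) -> List[Dict]:
    # Drop anything whose match score or rerank score falls below 0.5.
    return [
        c for c in chunks
        if c["match_score"] >= 0.5 and c["rerank_score"] >= 0.5
    ]
```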
Smart Chain Bindings for Knowledge Base
The knowledge base has a different set of bindings for imported web-pages versus imported documents versus other custom data schemas that are loaded in. This allows you to customize how the knowledge base imports documents separately from how it imports web pages, and so on. These bindings can be found by searching for the prefix knowledge_base_imported_ on the smart chains.
The other bindings, for queries, are shared across the entire knowledge base. They can be found by searching for the prefix knowledge_base_main_ in the bindings.
These allow you to customize the default way that the knowledge base processes queries.
Identity Bindings
For each of the identity smart chains, there is also an identity binding which points to it. These can be found by searching for identity in the smart chain bindings.
Some parts of the knowledge base system are programmatically designed to use the identity smart chains when processing knowledge base queries in specific situations that require turning off a component. For example, when you run a query in the Knowledge user interface, the filtering chain is turned off by default so that you can see all of the matches, whether they were good or bad.
IT IS RECOMMENDED THAT YOU NEVER MODIFY THE IDENTITY BINDINGS. PARTS OF THE SYSTEM DEPEND ON THE IDENTITY BINDINGS BEHAVING EXACTLY AS DESCRIBED.