
The Prospera Labs Flexi-Match Knowledge Base is a flexible system that gives your agent access to a large database of knowledge. It supports many different use-cases, from answering questions about products and services through to analyzing documents or situations against best practices that have been uploaded.

Loading In Knowledge

There are many different ways to load knowledge into your agent; which one fits best depends on your agent’s use-cases.

Uploading Documents

One of the simplest ways to load in information is to import documents in conventional formats, such as PDF files or Microsoft Word documents.

You can upload documents by pressing the Import Document button in the top right.

This will bring you to the bulk import page where you can then upload documents either by dragging and dropping, or by pressing the upload button and selecting the files.

Uploading Web Pages

You can similarly import information from web pages. This is done in the Imported Webpage menu.

Then press the “Import Web Page URL” button.

That will bring you to this page. It’s a bit terse, but you can paste in the URL here:

https://prosperalabs.ai/

How It Works

Here we will explain the basic process of how the knowledge base works in the default Q&A setup.

Document Ingestion

[Diagram: Knowledge Base Ingestion]

When any type of document is ingested into the system, it goes through the following process:

  1. Convert into raw text

  2. Break the text into small sections based on the semantics and headers of the document

  3. For each section, we generate a list of questions that the section of text answers. Almost like Jeopardy, we work backwards from the answer (contained in the document) to the questions

  4. We fetch the embedding vector for each of the questions

  5. Create a summary of the entire document, referred to internally as the “Qualifying Text” because it helps qualify whether a particular knowledge chunk is relevant

  6. Write the knowledge chunks out to the database
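
Put together, the ingestion pipeline can be summarized in a few lines of Python. Every helper below (convert_to_text, split_into_sections, summarize_document, generate_questions, embed, save_chunk) is a hypothetical stand-in for internal machinery, not the actual Prospera Labs API:

    # Hedged sketch of the six ingestion steps; all helper functions are hypothetical.
    def ingest_document(raw_document):
        text = convert_to_text(raw_document)           # 1. convert into raw text
        sections = split_into_sections(text)           # 2. semantic/header-based chunking
        qualifying_text = summarize_document(text)     # 5. document-level summary
        for section in sections:
            questions = generate_questions(section)    # 3. Jeopardy-style question generation
            vectors = [embed(q) for q in questions]    # 4. one embedding vector per question
            save_chunk(                                # 6. write the chunk to the database
                content=section,
                matching_texts=questions,
                matching_vectors=vectors,
                qualifying_text=qualifying_text,
            )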

Querying

[Diagram: Knowledge Base Query Processing]

When you query the knowledge base, the system goes through the following steps:

  1. The original raw query is transformed into a clean query. For the default knowledge base configuration, this means taking an ambiguous phrase, like “location of food”, and turning it into the format of a question, such as “Where is the food located?”, so that the text matches the format of the questions that were generated by the ingestion engine

  2. We look up the embedding vector for the transformed query

  3. We use the embedding vector to perform a query on the knowledge base and find the top K knowledge chunks whose matching text (i.e. the generated questions) is closest to the query text

  4. We use a reranking algorithm to rerank the matched knowledge chunks against the query

  5. We discard all except the top 5 matching knowledge chunks

  6. The matching knowledge chunks are returned. For your standard AI Agents, these matching knowledge chunks are then fed into the conversation history for the agent as a tool-result, which the agent can then use to formulate and write its response
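
In rough Python pseudocode, the query path looks like the sketch below. The helper names (transform_query, embed, vector_search, rerank_score) and the top-K value of 20 are assumptions for illustration; only the final cap of 5 returned chunks is fixed by the steps above:

    TOP_K = 20       # assumption: the actual candidate count is configurable
    MAX_RESULTS = 5  # the documented cap on returned chunks

    def query_knowledge_base(raw_query):
        question = transform_query(raw_query)            # 1. "location of food" ->
                                                         #    "Where is the food located?"
        query_vector = embed(question)                   # 2. embedding lookup
        candidates = vector_search(query_vector, TOP_K)  # 3. nearest matching texts
        reranked = sorted(                               # 4. rerank against the query
            candidates,
            key=lambda chunk: rerank_score(chunk, question),
            reverse=True,
        )
        return reranked[:MAX_RESULTS]                    # 5-6. keep and return the top 5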

Customizing

The knowledge base can be connected to many different sources of data, combining them into a homogeneous knowledge system. Sources of data include documents and web pages that you import through the user interface, but they may also include custom data schemas that you have created yourself, containing information that the bot has extracted from conversations (this allows the agent to have a ‘memory’ in-between multiple sessions). Eventually we may even allow third-party API services to become sources of data for the knowledge base.

Types of Smart Chains for Knowledge Base Customization

Each source of data can be customized independently. There are six different smart-chains that you can modify to change the behavior of the knowledge base. Those are:

  1. Chunker chain - The chunker chain is responsible for taking a large, long document and breaking it up into individual sections of content. The default version breaks a document into sections based on formatting cues, e.g. using the existing headers and paragraphs as section breaks

  2. Matching Text Chain - The matching text chain is responsible for taking a chunk of content and transforming it into bits of text that are embedded and matched against when querying. The default form that matching texts take is questions, e.g. “What kind of food is available in Alaska?”. However, this is not required, and some use-cases of the knowledge base require querying based on other formats of text.

  3. Qualifying Text Chain - The qualifying text chain is responsible for taking the chunk of content, along with the document as a whole, and creating a summary of the whole document together with how the text of this specific knowledge chunk fits into it. E.g. if the whole document is on the subject of “Arctic Cuisine”, then the qualifying text might be “This section describes Alaskan cuisine within the context of a larger document describing various arctic cuisines”. The qualifying text provides contextual information so that the LLM can correctly interpret the content of the knowledge chunk, which loses some meaning when separated from the larger document

  4. Query Transformation Chain - The query transformation chain is responsible for taking the query provided by the user, which might take a variety of different forms, and transforming it into the same format that is produced by the Matching Text Chain. By default the matching texts take the format of a question, so the default query transformation chain is designed to take whatever you type in and transform it into a question as well. E.g. it might take an abstract bit of text like “location of food” and turn it into the proper question format of “Where can the food be found?”

  5. Reranking Chain - The reranking chain is responsible for taking knowledge chunks that have been returned by the core Knowledge Base, and then using algorithms to assign a score between 0 and 1 as to how well the given knowledge chunk matches the query. By default, that means the reranking system is effectively responsible for determining if a given knowledge chunk actually answers the question provided by the user.

  6. Filtering Chain - The final filtering chain is responsible for removing any knowledge chunks that should not be returned. The filtering chain sets a cut-off for how good the match and rerank scores need to be in order for the knowledge chunks to be returned at all. In many situations, it’s better for the database to return nothing than to return knowledge chunks that do not match the user’s query closely enough, so the filtering chain is where you can customize this business logic. The default filtering chain sets a cutoff of 0.5 for both the match score and the rerank score (see the sketch just after this list)
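
As a concrete illustration of that default, here is a minimal Python sketch of the filtering logic: a chunk survives only if both its match score and its rerank score clear the 0.5 cutoff. The chunk field names are assumptions for illustration:

    MATCH_CUTOFF = 0.5
    RERANK_CUTOFF = 0.5

    def filter_chunks(chunks):
        # Keep a chunk only if BOTH scores clear their cutoffs.
        return [
            chunk for chunk in chunks
            if chunk["match_score"] >= MATCH_CUTOFF
            and chunk["rerank_score"] >= RERANK_CUTOFF
        ]

Note that an empty result is a valid outcome here: returning nothing is preferable to returning chunks that do not really answer the query.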

Pre-made Smart Chains for Knowledge Base

Identity Chains

For each of these smart chains, there is an “identity” version of the chain that does nothing. You can find them by searching for the word “identity” in the smart chains.

These identity chains just pass their input data straight through to their output data, as sketched after the list below.

  • Identity Chunker Chain - Keeps the entire content text as a single large block of text without breaking it apart

  • Identity Matching Text - Uses the entire content text as the matching text without transforming it

  • Identity Qualifying Text - Uses the entire content text as the qualifying text without transforming it

  • Identity Query Transformer - Passes through the user’s query verbatim without transforming it

  • Identity Reranker - Assigns the highest rerank score of 1.0 to every knowledge chunk regardless of its contents

  • Identity Filterer - Does not actually filter anything and just passes all of the knowledge chunks through to its output
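
Conceptually, the identity chains behave like the following Python pass-throughs (the signatures and data shapes are hypothetical, shown only to make the “does nothing” behavior concrete):

    def identity_chunker(content):
        return [content]                  # one chunk: the whole document

    def identity_matching_text(chunk):
        return [chunk]                    # the chunk itself is its matching text

    def identity_qualifying_text(chunk):
        return chunk                      # the chunk doubles as its qualifying text

    def identity_query_transformer(query):
        return query                      # the query is used verbatim

    def identity_reranker(chunks, query):
        return [(chunk, 1.0) for chunk in chunks]  # every chunk scores 1.0

    def identity_filterer(chunks):
        return chunks                     # nothing is removed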

Pre-made Document Processing Chains

These are pre-made smart chains designed for processing long-form documents in a default Q&A style database. They can be found by searching for smart-chains with the prefix knowledge_base_document_ in the table view.

  • Document Chunker - the default document chunker takes the input text and breaks it apart into sections based on lines and on semantic and formatting information. The goal of the default chunker is to break the document into small sections roughly analogous to what the original author might have viewed the sections of the document to be, e.g. wherever the original author put headers and sections (see the sketch after this list)

IMPORTANT! The input document must contain newline characters. TODO: Fix this by automatically wrapping input text in the default chunker.

  • Document Qualifying Text - the default document qualifying text smart chain produces a summary of the entire document to use as the qualifying text for all the separate knowledge chunks which resulted from the document.

  • Document Matching Text - the default matching text smart chain is designed to take a chunk of content and come up with the questions that said content is answering. It works almost like Jeopardy, taking an answer and going backwards to the questions, and it is designed to generate as many questions as it can for the given knowledge chunk. Additionally, it takes into account the contextual information provided by the qualifying text.
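
To make the chunker’s format-based splitting concrete, here is a minimal Python sketch that splits text on Markdown-style headers. The real chain also uses semantic cues, so this shows only the formatting half; note how it depends on newline characters, matching the warning above:

    import re

    def chunk_by_headers(text):
        # Start a new section at each Markdown-style header line.
        sections, current = [], []
        for line in text.splitlines():
            if re.match(r"^#{1,6}\s", line) and current:
                sections.append("\n".join(current).strip())
                current = []
            current.append(line)
        if current:
            sections.append("\n".join(current).strip())
        return [s for s in sections if s]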
