
The Prospera Labs Flexi-Match Knowledge Base is a flexible system that gives your agent access to a large database of knowledge. The Knowledge Base supports many different use cases, from answering questions about products and services through to analyzing documents or situations against best practices that have been uploaded.

How It Works

Here we will explain the basic process of how the knowledge base works in the default Q&A setup.

Document Ingestion

[Diagram: Knowledge Base Ingestion]

When any type of document is taken into the system, we go through the following process (a code sketch follows the list):

  1. Convert the document into raw text

  2. Break the text into small sections based on the semantics and headers of the document

  3. For each section, generate a list of questions that the section answers. Much like Jeopardy, we work backwards from the answer (contained in the document) to the questions

  4. Fetch the embedding vector for each of the questions

  5. Create a summary of the entire document, referred to internally as the “Qualifying Text” because it helps qualify whether a particular knowledge chunk is relevant

  6. Write the knowledge chunks out to the database
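
The sketch below shows how these steps could fit together in code. It is a minimal illustration only: the KnowledgeChunk structure, the llm, embedder, and db objects, and their methods (summarize, generate_questions, embed, insert) are assumed names for this example rather than the actual Prospera Labs implementation, and the blank-line chunker stands in for the real semantic/header-based splitter.

```python
from dataclasses import dataclass


@dataclass
class KnowledgeChunk:
    section_text: str                       # the "answer" text taken from the document
    questions: list[str]                    # generated questions this section answers
    question_embeddings: list[list[float]]  # one embedding vector per question
    qualifying_text: str                    # summary of the whole source document


def split_into_sections(text: str) -> list[str]:
    # Placeholder chunker: split on blank lines. The real system splits on the
    # semantics and headers of the document.
    return [s.strip() for s in text.split("\n\n") if s.strip()]


def ingest_document(text: str, llm, embedder, db) -> None:
    """Steps 2-6 of the ingestion flow (step 1, conversion to raw text, happens upstream)."""
    sections = split_into_sections(text)     # step 2: break into small sections
    qualifying_text = llm.summarize(text)    # step 5: document-level "Qualifying Text"

    for section in sections:
        questions = llm.generate_questions(section)       # step 3: Jeopardy-style questions
        vectors = [embedder.embed(q) for q in questions]  # step 4: embed each question
        db.insert(KnowledgeChunk(section, questions, vectors, qualifying_text))  # step 6
```

Note that each knowledge chunk carries the document-level Qualifying Text alongside its own section, which is what later lets the system judge whether the chunk is relevant to a query.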

Querying

[Diagram: Knowledge Base Query Processing]

When you query the knowledge base, the system goes through the following steps (a code sketch follows the list):

  1. The original raw query is transformed into a clean query. For the default knowledge base configuration, this means taking an ambiguous phrase, like “location of food”, and turning it into the format of a question, such as “Where is the food located?”, so that the text matches the format of the questions generated by the ingestion engine

  2. We look up the embedding vector for the transformed query

  3. We use the embedding vector to query the knowledge base and find the top K knowledge chunks whose matching text (i.e. the generated question) is closest to the query text

  4. We use a reranking algorithm to rerank the matched knowledge chunks against the query

  5. We discard all except the top 5 matching knowledge chunks

  6. The matching knowledge chunks are returned. For standard AI Agents, these chunks are fed into the agent’s conversation history as a tool result, which the agent can then use to formulate and write its response
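
As with the ingestion sketch above, the code below is only an illustration of how these steps could be wired together. The llm, embedder, db, and reranker objects and their methods are assumed names rather than the real API, the default K of 50 is an example value (the keep-5 cutoff matches step 5 above), and the return type reuses the KnowledgeChunk class from the ingestion sketch.

```python
def query_knowledge_base(raw_query: str, llm, embedder, db, reranker,
                         top_k: int = 50, keep: int = 5) -> list[KnowledgeChunk]:
    # Step 1: rewrite the raw query into question form,
    # e.g. "location of food" -> "Where is the food located?"
    clean_query = llm.rewrite_as_question(raw_query)

    # Step 2: look up the embedding vector for the transformed query.
    query_vector = embedder.embed(clean_query)

    # Step 3: vector search for the top K chunks whose generated-question
    # embeddings are closest to the query vector.
    candidates = db.vector_search(query_vector, limit=top_k)

    # Step 4: rerank the matched chunks against the query text.
    reranked = reranker.rerank(clean_query, candidates)

    # Step 5: discard all except the top 5 matches.
    top_chunks = reranked[:keep]

    # Step 6: return the matches; an AI Agent receives these as a tool result.
    return top_chunks
```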

Loading In Knowledge

There are many different ways to load knowledge into your agent, and the right approach will depend a lot on your agent's use cases.

Uploading Documents

One of the simplest ways to load in information is to import documents prepared in conventional office software, such as PDF files or Microsoft Word documents.


You can upload documents by pressing the Import Document button in the top right.


This will bring you to the bulk import page where you can then upload documents either by dragging and dropping, or by pressing the upload button and selecting the files.


Uploading Web Pages

You can similarly import information from web pages. This is done in the Imported Webpage menu.


Then press the “Import Web Page URL” button.


That will bring you to this page. It's a bit terse, but you can paste in the URL here, for example:

https://prosperalabs.ai/

