
Improving Employee Onboarding with an AI Agent: An Example Using LlamaIndex and GraphDB

Brikesh Kumar

Updated: Jun 18, 2024

In our previous blog post (Creating an AI agent with LlamaIndex), we saw how to build a basic AI agent using LlamaIndex. We covered the basics of setting up the environment, integrating data sources, and building a functional AI agent capable of handling various tasks. Today, we’ll take this journey a step further by making our AI agent even smarter and more connected.


Alex's Onboarding Journey: Overcoming Information Overload

New employee onboarding challenges

Meet Alex, a software engineer who just joined TechSphere Innovations. On his first day, he walked into the office with excitement and anticipation. He received a welcome packet and was directed to the company’s intranet for onboarding information.


Alex began exploring the intranet and quickly realized that some wiki pages were outdated with broken links, and critical documents were buried deep within SharePoint's complex folder structures, making it difficult to find the information he needed.


By lunchtime, Alex had more questions than answers. He wanted to know his team members and the project details he would be working on. Over the next few days, Alex's inbox filled up with welcome messages and onboarding instructions, containing useful information. However, he still had questions and felt overwhelmed by the amount of information to process.


Alex also had several HR-related questions about the remote work policy, health insurance enrollment, and the procedure for requesting time off. These answers were often buried in long policy documents.


In the afternoons, Alex tried to dive into the engineering project documentation, but it was scattered across various platforms, making understanding the codebase challenging. He had to reach out to previous engineers to understand the system architecture and decisions made during the project.


Alex wanted to get involved in the company culture and meet new people but was unsure about upcoming events or how to join employee resource groups. He missed out on several events simply because he didn’t know they were happening.


Introduction: Building Intelligent AI Agents with GraphDB, LlamaIndex, and API Integration

Imagine an AI agent that not only comprehends your queries but also seamlessly connects with various data sources, retrieves pertinent information, and performs tasks on your behalf. Constructing such an intelligent agent can vary in complexity, but the fundamental requirement is the ability to access dispersed information across silos, interpret it effectively, and take appropriate actions.


The efficiency of an AI agent heavily relies on how well the data is organized and how effectively the agent can leverage it. In this example, we will utilize a combination of Graph databases, LlamaIndex, and strategic API calls to demonstrate the process. This approach ensures that the AI agent can navigate complex data structures, understand relationships between different data points, and interact with various systems to provide comprehensive and actionable insights.


Introduction to Graph Database

Graph structures offer a powerful way to connect pieces of information and derive new insights. The basic structure of a graph consists of entities represented as nodes and the relationships between them depicted as edges. Each node and edge can carry properties as key-value pairs describing the entity or the relationship.

Basic Graph Structure

The picture above represents a basic graph structure where each node and the edges connecting them have labels. When combined, they represent a fact within the graph.


Several operations can be performed with graph data, but the most fundamental action is traversal. Traversal involves starting from a given node and systematically visiting connected nodes by following edges, allowing exploration of the entire network.
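To make traversal concrete, here is a minimal sketch in plain Python over an in-memory adjacency list. The node names and relationship labels are illustrative, not taken from a real database:

```python
from collections import deque

# A tiny in-memory graph: each node maps to a list of
# (relationship, target) pairs. All names here are made up.
graph = {
    "Alex": [("WORKS_ON", "ProjectX"), ("MEMBER_OF", "TeamAlpha")],
    "ProjectX": [("OWNED_BY", "TeamAlpha")],
    "TeamAlpha": [("PART_OF", "Engineering")],
    "Engineering": [],
}

def traverse(start: str) -> list[str]:
    """Breadth-first traversal: visit every node reachable from `start`."""
    visited = []
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        visited.append(node)
        for _rel, target in graph.get(node, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return visited

print(traverse("Alex"))  # ['Alex', 'ProjectX', 'TeamAlpha', 'Engineering']
```

Graph databases implement the same idea at scale, with indexes and query planners on top.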


For those interested in other applications of graph data, here are some commonly used graph algorithms that provide significant insights:


  • Statistical Analysis: Involves analyzing nodes, types, labels, etc.

  • Network Propagation: Examines how information spreads through the network.

  • Link Prediction: Predicts new connections based on existing data.

  • Collaborative Filtering: Commonly used in recommendation systems.

  • Community Detection: Identifies clusters or groups within the network.

  • Finding Similarity and Centrality: Measures the importance and similarity of nodes.

  • Topological Machine Learning: Uses the structure of the graph for machine learning tasks.

  • Graph Feature Engineering: Creates features for machine learning models based on graph data.
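As a taste of the list above, degree centrality, the simplest centrality measure, just counts each node's connections. The names in this sketch are illustrative:

```python
from collections import Counter

# Edge list for a tiny hypothetical collaboration network.
edges = [
    ("Alex", "TeamAlpha"), ("Priya", "TeamAlpha"),
    ("TeamAlpha", "ProjectX"), ("Priya", "ProjectX"),
]

# Degree centrality: count how many edges touch each node.
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

print(degree.most_common(1))  # [('TeamAlpha', 3)]
```

Production graph databases ship optimized versions of these algorithms (for example, the Neo4j Graph Data Science library), but the underlying ideas are this simple.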


Knowledge Graph

A knowledge graph is a structured arrangement of data in which information is contextualized, making insights readily accessible. It is created by layering more abstract information, such as a taxonomy and an ontology, on top of the underlying data.


Taxonomy: Focuses on classification and supports the "is_a" relationship.

Ontology: Raises the level of abstraction further and defines richer relationships such as part_of and depends_on. The diagram below illustrates the idea.


Illustration of graph

While we can't cover all aspects of knowledge graphs in this article, we'll create a simple graph to demonstrate its benefits to an AI agent. By organizing data in this way, AI agents can navigate complex relationships and provide more accurate and insightful responses.
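A quick sketch of why the "is_a" taxonomy relationship is useful: it can be followed transitively to infer facts that are never stated directly. The triples below are entirely hypothetical:

```python
# Hypothetical triples mixing a taxonomy ("is_a") with ontology
# relationships ("part_of", "depends_on"); all names are made up.
triples = [
    ("VPNGuide", "is_a", "Document"),
    ("Document", "is_a", "Resource"),
    ("VPNGuide", "part_of", "ITHandbook"),
    ("Onboarding", "depends_on", "VPNGuide"),
]

def is_a_closure(entity: str) -> set[str]:
    """Follow 'is_a' edges transitively: a VPNGuide is also a Resource."""
    found: set[str] = set()
    frontier = {entity}
    while frontier:
        frontier = {o for s, p, o in triples
                    if p == "is_a" and s in frontier} - found
        found |= frontier
    return found

print(sorted(is_a_closure("VPNGuide")))  # ['Document', 'Resource']
```

An agent backed by such a graph can answer "show me all Resources" even though nothing is labeled "Resource" directly.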


Implementing the Intelligent AI agent

With the foundational knowledge of graphs in place, let's look into creating an intelligent AI agent to streamline the onboarding process for new employees. This requires a fair amount of work up front, but once the foundational elements are in place, such as structured data and the tools/connectors the agent relies on, they can be reused to build more powerful AI agents. Here's a concise guide to the key steps involved:


Organize the data

Data Model: We start by structuring the data in a graph format. The diagram below illustrates a sample schema where nodes represent entities within a company, and edges denote the relationships between these entities. Documents are segmented into chunks and stored with their embeddings. This schema allows us to build a rich information system, with nodes and edges annotated with relevant properties. For instance, graph link predictions, such as the WORKS_WITH relationship, can be established between employees. We can model the data in many ways; the key idea is that the entities and their relationships should reveal useful insights from the data. For example, looking at the graph below, we can answer questions like:


Can you find a co-worker who might have worked on feature 'Y' and provide some context?

Search for any document with a title similar to 'Y', find the team that owns the document and the person(s) who contributed to it, and create a response with a summary of the document, its owner, etc.


What are the HR policies for adding a dependent to the insurance plan?

Search for documents/wikis owned by HR that are semantically similar and find the chunks of the document that contain the most relevant information. Prepare the response with a link to the source document.


I am having an issue setting up the VPN; can you help?

Search for documents owned by the IT department that are semantically similar and find the chunks of the document that contain the most relevant information.


Illustrative Graph Schema

Loading the Graph Database with Data: For this blog, we are using Neo4j, which supports Cypher as a query language, known for its intuitive syntax. Neo4j's bulk import tool simplifies the process of uploading data into the graph database. Document processing follows the standard RAG procedure: chunking, creating embeddings, and saving the chunks with their embeddings. Nodes and edges can be annotated with additional properties.
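The chunking step of that RAG procedure can be sketched in a few lines. This is a minimal overlapping character-window splitter; the embedding call is deliberately left out since it requires an API key, and the chunk sizes are illustrative defaults:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows, the simplest
    chunking strategy applied before computing embeddings."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# Each chunk would then be embedded (e.g. with an OpenAI embedding model)
# and stored as a Chunk node in Neo4j alongside its vector.
chunks = chunk_text("a" * 500, chunk_size=200, overlap=50)
print(len(chunks))  # 4
```

In practice you would usually split on sentence or token boundaries rather than raw characters, but the store-chunks-with-embeddings pattern is the same.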


Integrating Microsoft Graph API: If your organization leverages Microsoft products, the Microsoft Graph API can be a valuable resource. It offers a variety of APIs useful for enhancing the AI agent's functionality, such as accessing OneNote, searching content on OneDrive and SharePoint, and interacting with Teams, including searching Teams messages. You can seamlessly integrate data from the Microsoft Graph API with information stored in the graph database. Check out the Microsoft Graph documentation to discover how these APIs can complement your graph database.

Create the tools required for the AI agent

Next, we need to create the tools that the agent will use to answer user queries. As explained in the previous blog post, tools are the functions that the agent calls when appropriate. These functions encapsulate the type of information we want to extract from the database and any action that we would want the agent to take when appropriate. Since the language model will decide which tool to invoke and how to construct the input parameters for calling the tool, this step is crucial in terms of naming conventions and providing good documentation for each function.


Here are some guidelines provided by OpenAI:

  1. Clear Naming Conventions: Ensure that function names clearly describe their purpose and the type of information they retrieve.

  2. Comprehensive Documentation: Provide detailed documentation for each function, explaining its parameters, expected inputs, and outputs.

  3. Parameter Construction: Ensure that the functions are designed to accept parameters in a way that the language model can easily construct them based on user queries.

  4. Function Modularity: Design functions to be modular and reusable across different types of queries and use cases.


Refer to OpenAI’s API documentation for more detailed guidelines and best practices.
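The guidelines above become concrete in the tool specification the model actually sees. Here is what a definition following OpenAI's function-calling schema looks like for the database tool defined later in this post; the description text is illustrative:

```python
# A tool definition in OpenAI's function-calling schema. The name,
# description, and parameter docs are all the model has when deciding
# which tool to invoke and how to fill in its arguments.
database_search_spec = {
    "type": "function",
    "function": {
        "name": "get_answers_from_database",
        "description": (
            "Answer questions about employees, teams, projects, and "
            "company documents by querying the knowledge graph."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "question": {
                    "type": "string",
                    "description": "The user's question, in natural language.",
                },
            },
            "required": ["question"],
        },
    },
}
```

Frameworks like LlamaIndex generate this schema for you from the function signature and docstring, which is why naming and documentation matter so much.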

Here are some code snippets showing how to create a tool that queries the graph DB to resolve a user question. The tool generates Cypher statements from text using few-shot inference. You may need to provide additional Cypher query examples in the prompt, depending on the schema and query complexity.

# Prompt for generating Cypher. Literal braces in the example are
# doubled ({{ }}) so PromptTemplate does not treat them as variables.
CYPHER_GENERATION_TEMPLATE = """Task: Generate a Cypher statement to
query a graph database.
Instructions:
Use only the provided relationship types and properties in the
schema. Do not use any other relationship types or properties that
are not provided.
Schema:
{schema}
Note: Do not include any explanations or apologies in your responses.
Do not respond to any questions that might ask anything else than
for you to construct a Cypher statement.
Do not include any text except the generated Cypher statement.
Examples: Here are a few examples of generated Cypher
statements for particular questions:

# Which benefits documents does HR own, and what chunks do they contain?
MATCH (org:Organization {{name:'HR'}})-[:OWNS]->(doc:Docs {{topic: 'Benefits'}})-[:PART_OF]->(chunk:Chunks)
RETURN doc, chunk
The question is:
{question}"""

Creating Settings for Connecting to the Database

import os
from dotenv import load_dotenv

# Load from environment
load_dotenv('.env', override=True)
NEO4J_URI = os.getenv('NEO4J_URI')
NEO4J_USERNAME = os.getenv('NEO4J_USERNAME')
NEO4J_PASSWORD = os.getenv('NEO4J_PASSWORD')
NEO4J_DATABASE = os.getenv('NEO4J_DATABASE') or 'neo4j'
OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')

# Optional: only needed if your environment routes embedding calls
# through a custom base URL; remove it otherwise.
OPENAI_ENDPOINT = os.getenv('OPENAI_BASE_URL') + '/embeddings'

Create a Neo4j Connection

kg = Neo4jGraph(
    url=NEO4J_URI,
    username=NEO4J_USERNAME,
    password=NEO4J_PASSWORD,
    database=NEO4J_DATABASE,
)

Define the function that generates a Cypher query from the given text using the prompt template and runs it against the database. This uses the GraphCypherQAChain class from LangChain.

from langchain_community.graphs import Neo4jGraph
from langchain_openai import ChatOpenAI
from langchain.prompts.prompt import PromptTemplate
from langchain.chains import GraphCypherQAChain

def get_answers_from_database(question: str) -> str:
    """
    Generate Cypher queries and retrieve answers from a Neo4j graph database using  
    Langchain and OpenAI.

    Parameters:
    question (str): The user's question that needs to be answered by querying the 
    Neo4j graph database.

    Returns:
    str: The response retrieved from the Neo4j graph database.
    """
    
    # Define the prompt template for Cypher query generation
    CYPHER_GENERATION_PROMPT = PromptTemplate(
        input_variables=["schema", "question"], 
        template=CYPHER_GENERATION_TEMPLATE
    )

    # Create the Cypher QA chain using the provided LLM and the knowledge graph
    cypherChain = GraphCypherQAChain.from_llm(
        ChatOpenAI(temperature=0),
        graph=kg,
        verbose=True,
        cypher_prompt=CYPHER_GENERATION_PROMPT,
    )

    # Run the Cypher QA chain to get the response
    response = cypherChain.run(question)    
    return response

Similarly, you can create other functions, for example to search via the Microsoft Graph API or to make a call to Git. Here is another example, which searches OneDrive and SharePoint:

import requests
def search_onedrive_sharepoint(query: str, access_token: str) -> dict:
    """
    Search Microsoft OneDrive and SharePoint using Microsoft Graph API.

    Parameters:
    query (str): The search query string.
    access_token (str): The OAuth 2.0 access token for Microsoft Graph API.

    Returns:
    dict: The search results from OneDrive and SharePoint.
    """
    # Endpoint for Microsoft Graph API search
    url = "https://graph.microsoft.com/v1.0/search/query"

    # Headers for the request
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json"
    }

    # The request body for the search query
    body = {
        "requests": [
            {
                "entityTypes": ["driveItem"],
                "query": {
                    "queryString": query
                }
            }
        ]
    }
    # Make the HTTP request to the Microsoft Graph API
    response = requests.post(url, headers=headers, json=body)
    # Return the JSON response
    return response.json()

Create the AI Agent Pipeline

Once the functions are ready and tested, we can register them as tools using FunctionTool so the LlamaIndex library can use them. ObjectIndex is another way to manage tools when there are many of them, letting the agent retrieve only the relevant tools at query time.


The next step is to create an agent. With LlamaIndex, you have two options: you can either create the agent using the low-level AgentRunner and AgentWorker classes for more customization, or you can use one of the built-in agent classes with planning capabilities, such as StructuredPlannerAgent or ReActAgent.


from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI
from llama_index.core.agent import ReActAgent

database_search_tool = FunctionTool.from_defaults(fn=get_answers_from_database)
onedrive_search_tool = FunctionTool.from_defaults(fn=search_onedrive_sharepoint)
# initialize llm
llm = OpenAI(model="gpt-4o")
# initialize ReAct agent
agent = ReActAgent.from_tools([database_search_tool, onedrive_search_tool], llm=llm, verbose=True)
result = agent.chat("Give some information on leave policy")

Expose an Endpoint to Interact

After creating the agent pipeline, the next step is to expose an endpoint that allows interaction with the agent. This can be done using a web framework like FastAPI.


Conclusion

Creating an intelligent AI agent is an iterative process, but once the foundation is set, many useful AI agents can be developed quickly. Teams can leverage a library of connectors and tools to create their own agents, enabling efficient access to data and automation of tasks. For more in-depth discussions and updates on the latest AI and ML trends, don’t forget to follow us on LinkedIn and X (Twitter). Stay tuned for more informational blogs and innovations in the world of AI!

