This guide is your quickstart toolkit for integrating LlamaIndex with Activeloop Deep Lake. You'll learn how to effortlessly index GitHub repositories into Deep Lake and interact with your code through natural language queries.
What are LlamaIndex and Activeloop Deep Lake?
LlamaIndex: Your Data Framework for LLMs
LlamaIndex is a bridge between your data and Large Language Models (LLMs). Whether your data resides in APIs, SQL databases, or PDFs, LlamaIndex ingests and structures it into a format easily consumable by LLMs. It offers features like data connectors for various data sources, indexing capabilities for quick retrieval, and natural language query engines that make your data not just accessible but also interactive.
It also offers Llama Hub, a platform aggregating custom plugins for all data types.
Activeloop Deep Lake: Optimized Data Lake for ML
Activeloop Deep Lake is a data lake solution specifically designed for machine learning workflows. Unlike traditional data lakes, it's optimized for quick data retrieval and manipulation, making it an ideal choice for machine learning projects that require efficient data access. It supports various data types and formats, from images and videos to more complex data structures, while maintaining high performance. You can create local vector stores or use the managed serverless service.
The Synergy
When LlamaIndex and Activeloop Deep Lake are combined, they offer a robust, efficient, and interactive data management solution. LlamaIndex takes care of ingesting and structuring your data, while Activeloop Deep Lake provides optimized storage and retrieval capabilities.
In this guide, we'll see how to store and interact with your code repositories through natural language queries, offering a unique and powerful way to manage and understand your codebase.
Requirements
Before getting started, make sure you have the following:
- Python - Version 3.8 or newer is required.
- An active account on OpenAI, along with an OpenAI API key.
- A Deep Lake account, complete with a Deep Lake API key.
- A 'classic' personal access token from GitHub.
Getting Started
Creating a new Python virtual environment for this project is strongly advised. It helps maintain a tidy workspace by keeping dependencies in one place.
- Create a Python virtual environment with:
python3 -m venv repo-ai
Then activate it with:
source repo-ai/bin/activate
Install the required packages
This project needs several Python packages, including LlamaIndex and Deep Lake. Run the following pip command:
pip install llama-index deeplake openai python-dotenv
Let's understand what we are installing and why.
1. llama-index
What is it?
LlamaIndex is a data framework that works with Large Language Models (LLMs). It helps ingest, structure, and make data accessible through natural language queries.
Key Features:
- Data Connectors: Ingest data from various sources like APIs, SQL databases, and PDFs.
- Data Indexing: Structure the ingested data for quick and efficient retrieval.
- Query Engines: Enable natural language queries to interact with your data.
Use-Case in the guide:
In the context of this guide, LlamaIndex will be used to index GitHub repositories and make them queryable through natural language interfaces.
2. deeplake
What is it?
Activeloop Deep Lake is a specialized data lake optimized for machine learning workflows. It allows for efficient storage and retrieval of various data types.
Key Features:
- Optimized Storage: Designed for quick data retrieval, ideal for machine learning applications.
- Data Type Support: Handles multiple data types like images, videos, and complex data structures.
Use-Case in the guide:
Deep Lake is the storage layer where the GitHub repositories indexed by LlamaIndex will be stored.
3. openai
What is it?
The OpenAI Python package provides an interface to OpenAI's GPT models and other services. It allows you to make API calls to interact with these models.
Key Features:
- API Integration: Easy integration with OpenAI's GPT models.
- Text Generation: Generate text based on the model's training data and capabilities.
Use-Case in the guide:
LlamaIndex uses this package to interact with OpenAI's models.
4. python-dotenv
What is it?
Python-dotenv is a library that allows you to specify environment variables in a .env file, making it easier to manage configurations.
Key Features:
- Environment Variable Management: Store configuration variables in a .env file.
- Easy Import: Automatically load variables from .env into your Python environment.
Use-Case in the guide:
This package manages the API keys.
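For illustration, the typical python-dotenv pattern looks like this (a minimal sketch; the guide's actual .env file and main.py are defined in the next sections):
import os
from dotenv import load_dotenv

# Read key=value pairs from a .env file in the working directory into the environment
load_dotenv()

# Fetch a variable; returns None if it is not defined
openai_api_key = os.getenv("OPENAI_API_KEY")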
How does LlamaIndex work?
In the context of leveraging LlamaIndex for data-driven applications, the underlying logic and workflow are pretty simple. Here's a breakdown:
- Load Documents: The first step involves loading your raw data into the system. You can do this manually, directly inputting the data, or through a data loader that automates the process. LlamaIndex offers specialized data loaders that can ingest data from various sources, transforming them into Document objects, and you can find many plugins on Llama Hub. This is a crucial step as it sets the stage for the subsequent data manipulation and querying functionalities.
- Parse the Documents into Nodes: Once the documents are loaded, they are parsed into Nodes, essentially structured data units. These Nodes contain chunks of the original documents and carry valuable metadata and relationship information. This parsing process is vital as it organizes the raw data into a structured format, making it easier and more efficient for the system to handle.
- Construct an Index from Nodes or Documents: After the Nodes are prepared, an index is constructed to make the data searchable and queryable. Depending on your needs, this index can be built directly from the original documents or from the parsed Nodes. The index is often stored in structures like VectorStoreIndex, optimized for quick data retrieval. This step is the system's heart, turning your structured data into a robust, queryable database.
- Query the Index: With the index in place, the final step is to query it. A query engine is initialized, allowing you to make natural language queries against the indexed data. This is where the magic happens: you can conversationally ask the system questions, and it will sift through the indexed data to provide accurate and relevant answers. (A minimal sketch of this four-step workflow follows below.)
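To make these steps concrete, here is a minimal sketch of the workflow using LlamaIndex's built-in SimpleDirectoryReader. It assumes a local data/ folder and an OPENAI_API_KEY in your environment, and it only illustrates the flow rather than this guide's final code:
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from llama_index.node_parser import SimpleNodeParser

# 1. Load documents (any LlamaIndex or Llama Hub loader works here)
documents = SimpleDirectoryReader("data").load_data()

# 2. Parse the documents into Nodes (chunks with metadata)
nodes = SimpleNodeParser.from_defaults().get_nodes_from_documents(documents)

# 3. Construct an index from the Nodes (or build it directly from the documents)
index = VectorStoreIndex(nodes)

# 4. Query the index in natural language
query_engine = index.as_query_engine()
print(query_engine.query("What are these documents about?"))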
Let's code
Now that the environment is ready and all the explanations are out of the way, let's start with some code. In your project's directory, create a new file named .env, paste the following, and add your API keys:
GITHUB_TOKEN="YOUR_GH_CLASSIC_TOKEN"
OPENAI_API_KEY="YOUR_OPENAI_KEY"
ACTIVELOOP_TOKEN="YOUR_ACTIVELOOP_TOKEN"
DATASET_PATH="hub://YOUR_ORG/repository_vector_store"
Remember to edit the line DATASET_PATH="hub://YOUR_ORG/repository_vector_store", replacing YOUR_ORG with your Activeloop organization ID.
After this, create a new file named main.py and paste the following code:
import os
import textwrap
from dotenv import load_dotenv
from llama_index import download_loader
from llama_hub.github_repo import GithubRepositoryReader, GithubClient
from llama_index import VectorStoreIndex
from llama_index.vector_stores import DeepLakeVectorStore
from llama_index.storage.storage_context import StorageContext
import re

# Load environment variables
load_dotenv()

# Fetch and set API keys
openai_api_key = os.getenv("OPENAI_API_KEY")
active_loop_token = os.getenv("ACTIVELOOP_TOKEN")
dataset_path = os.getenv("DATASET_PATH")


def parse_github_url(url):
    pattern = r"https://github\.com/([^/]+)/([^/]+)"
    match = re.match(pattern, url)
    return match.groups() if match else (None, None)


def validate_owner_repo(owner, repo):
    return bool(owner) and bool(repo)


def initialize_github_client():
    github_token = os.getenv("GITHUB_TOKEN")
    return GithubClient(github_token)


def main():
    # Check for OpenAI API key
    openai_api_key = os.getenv("OPENAI_API_KEY")
    if not openai_api_key:
        raise EnvironmentError("OpenAI API key not found in environment variables")

    # Check for GitHub Token
    github_token = os.getenv("GITHUB_TOKEN")
    if not github_token:
        raise EnvironmentError("GitHub token not found in environment variables")

    # Check for Activeloop Token
    active_loop_token = os.getenv("ACTIVELOOP_TOKEN")
    if not active_loop_token:
        raise EnvironmentError("Activeloop token not found in environment variables")

    github_client = initialize_github_client()
    download_loader("GithubRepositoryReader")

    github_url = input("Please enter the GitHub repository URL: ")
    owner, repo = parse_github_url(github_url)

    while True:
        owner, repo = parse_github_url(github_url)
        if validate_owner_repo(owner, repo):
            loader = GithubRepositoryReader(
                github_client,
                owner=owner,
                repo=repo,
                filter_file_extensions=(
                    [".py", ".js", ".ts", ".md"],
                    GithubRepositoryReader.FilterType.INCLUDE,
                ),
                verbose=False,
                concurrent_requests=5,
            )
            print(f"Loading {repo} repository by {owner}")
            docs = loader.load_data(branch="main")
            print("Documents uploaded:")
            for doc in docs:
                print(doc.metadata)
            break  # Exit the loop once the valid URL is processed
        else:
            print("Invalid GitHub URL. Please try again.")
            github_url = input("Please enter the GitHub repository URL: ")

    print("Uploading to vector store...")

    # ====== Create vector store and upload data ======
    vector_store = DeepLakeVectorStore(
        dataset_path=dataset_path,
        overwrite=True,
        runtime={"tensor_db": True},
    )

    storage_context = StorageContext.from_defaults(vector_store=vector_store)
    index = VectorStoreIndex.from_documents(docs, storage_context=storage_context)
    query_engine = index.as_query_engine()

    # Include a simple question to test.
    intro_question = "What is the repository about?"
    print(f"Test question: {intro_question}")
    print("=" * 50)
    answer = query_engine.query(intro_question)
    print(f"Answer: {textwrap.fill(str(answer), 100)} \n")

    while True:
        user_question = input("Please enter your question (or type 'exit' to quit): ")
        if user_question.lower() == "exit":
            print("Exiting, thanks for chatting!")
            break

        print(f"Your question: {user_question}")
        print("=" * 50)
        answer = query_engine.query(user_question)
        print(f"Answer: {textwrap.fill(str(answer), 100)} \n")


if __name__ == "__main__":
    main()
Understanding the code
At first glance, a lot is happening here; let's review it. Below is a step-by-step breakdown:
Initialization and Environment Setup
- Import Required Libraries: The script starts by importing all the necessary modules and packages.
- Load Environment Variables: Using dotenv, it loads the environment variables stored in the .env file. This is where API keys and tokens are stored securely.
Helper Functions
- parse_github_url(url): This function takes a GitHub URL and extracts the repository owner and name using regular expressions.
- validate_owner_repo(owner, repo): Validates that both the repository owner and name are present.
- initialize_github_client(): Initializes the GitHub client using the token fetched from the environment variables.
Main Function (main())
- API Key Checks: Before proceeding, the script checks for the presence of the OpenAI API key, GitHub token, and Activeloop token, raising an error if any are missing.
- Initialize GitHub Client: Calls initialize_github_client() to get a GitHub client instance.
- User Input for GitHub URL: Asks the user to input a GitHub repository URL.
- URL Parsing and Validation: Parses the URL to get the repository owner and name and validates them.
- Data Loading: If the URL is valid, it uses GithubRepositoryReader from llama_index to load the repository data, specifically Python, JavaScript, TypeScript, and Markdown files.
- Indexing: The loaded data is then indexed using VectorStoreIndex and stored in a Deep Lake vector store. This makes the data queryable.
- Query Engine Initialization: Initializes a query engine based on the indexed data.
- Test Query: Performs a test query to demonstrate the system's operation.
- User Queries: Enters a loop where the user can input natural language queries to interact with the indexed GitHub repository. The loop continues until the user types 'exit'.
Execution Entry Point
- The script uses the standard if __name__ == "__main__": Python idiom to ensure that main() is called when the script is executed directly.
A Closer Look at the GithubRepositoryReader loader
For this project, we used the github_repo data loader from LlamaIndex; you can find its documentation on Llama Hub. This is the part of the code taking care of that:
loader = GithubRepositoryReader(
    github_client,
    owner=owner,
    repo=repo,
    filter_file_extensions=([".py", ".js", ".ts", ".md"], GithubRepositoryReader.FilterType.INCLUDE),
    verbose=False,
    concurrent_requests=5,
)
When it comes to loading data from a GitHub repository, the star of the show is the GithubRepositoryReader class. This class is a powerhouse designed to fetch, filter, and format repository data for indexing. Let's break down its key components:
- GitHub Client: You'll first notice that the initialized GitHub client is passed into GithubRepositoryReader. This client provides authenticated access to GitHub repositories.
- Repository Details: Next, the repository owner and name are specified. These are extracted from the URL you input, ensuring the data is fetched from the correct source; they are also used to print a friendly status message in the console.
- File Type Filters: One of the most flexible features here is the ability to specify which file types to load. In this example, we're focusing on Python, JavaScript, TypeScript, and Markdown files. The inclusion of Markdown files is so the reader will also pull in README files, offering valuable context for the language model's understanding.
- Verbose Logging: If you're the kind of person who likes to see every detail, you can enable verbose logging. This will print out detailed logs of the data loading process.
- Concurrent Requests: This is where you can speed things up. The number of concurrent requests specifies how many data-fetching operations will happen simultaneously. A word of caution, though: cranking this number up could make you hit GitHub's rate limits, so tread carefully.
Once all these parameters are set, the load_data() method swings into action. It fetches the repository data and neatly packages it into a list of Document objects, ready for the next stage: indexing.
Let's get into the intricacies of the indexing part of the code, which is arguably one of the most crucial steps in the entire process.
Indexing: Transforming Data into Queryable Intelligence
# ====== Create vector store and upload data ======
vector_store = DeepLakeVectorStore(
    dataset_path=dataset_path,
    overwrite=True,
    runtime={"tensor_db": True},
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(docs, storage_context=storage_context)
query_engine = index.as_query_engine()
After the data is loaded, the next task is to make this raw information searchable and retrievable. This is where the VectorStoreIndex and DeepLakeVectorStore come into play. Let's dissect how they work:
- Vector Store: The first thing that happens is the creation of a DeepLakeVectorStore. Think of this as a specialized database designed to hold vectors; it's both a storage unit and an enabler for high-speed queries. The dataset_path parameter specifies where this vector store will reside, and the overwrite flag allows you to control whether existing data should be replaced. The parameter runtime={"tensor_db": True} specifies that the vector store will use the Managed Tensor Database for storage and query execution on the Deep Lake infrastructure.
The default here is to create a cloud vector store. You can instead create a local vector store by changing dataset_path to the name of the directory you want to use as the database, for example:
dataset_path = "repository_db"
The vector store will then create a directory with the name specified in dataset_path.
- Storage Context: Next up is the StorageContext, which essentially acts as a manager for the vector store. It ensures the vector store is accessible and manageable throughout the indexing process.
- From Documents to Index: The VectorStoreIndex.from_documents() method is the workhorse here. It takes the list of Document objects you got from the GitHub repository and transforms them into a searchable index. This index is stored in the previously initialized DeepLakeVectorStore. (See the note below for reusing an existing dataset instead of re-indexing.)
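As a side note, once the dataset exists you don't have to re-index the repository on every run. A sketch of reconnecting to an already-populated Deep Lake dataset, assuming it was created earlier at dataset_path, could look like this:
# Reuse the existing dataset: overwrite=False keeps the previously uploaded vectors
vector_store = DeepLakeVectorStore(dataset_path=dataset_path, overwrite=False)

# Build the index on top of the existing vector store instead of re-embedding the documents
index = VectorStoreIndex.from_vector_store(vector_store)
query_engine = index.as_query_engine()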
The Interactive Finale: Querying and User Engagement
After the meticulous data loading and indexing process, the stage is set for the user to interact with the system. This is where the query engine comes into play.
- Query Engine: Once the index is built, a query engine is initialized using index.as_query_engine(). You'll interact with this engine when you want to search through the GitHub repository. It's optimized for speed and accuracy, ensuring your natural language queries return the most relevant results.
- Introductory Question: The code starts with an introductory question: "What is the repository about?" This serves multiple purposes. It acts as a litmus test to ensure the system is operational and gives the user an immediate sense of what questions can be asked.
- Formatting and Display: The answer is then formatted to fit within a 100-character width for better readability, thanks to Python's textwrap library.
- User Input: The code enters an infinite loop, inviting the user to ask questions. The user can type any query related to the GitHub repository, and the system will attempt to provide a relevant answer.
- Exit Strategy: The loop continues indefinitely until the user types 'exit', providing a simple yet effective way to end the interaction.
- Query Execution: Each time the user asks a question, the query_engine.query() method is called. This method consults the index built earlier and retrieves the most relevant information.
- Answer Presentation: Like the introductory question, the answer to the user's query is formatted and displayed, ensuring that regardless of its complexity or length, it's presented in a readable manner.
Running the code
Now, you're fully equipped to index and query GitHub repositories. To tailor the system to your specific needs, pay attention to the filter_file_extensions parameter. This is where you specify which types of files you'd like to include in your index. The current setting, [".py", ".js", ".ts", ".md"], focuses on Python, JavaScript, TypeScript, and Markdown files.
Consider storing these extensions in your .env file for a more dynamic setup, as shown in the sketch below. You can easily switch between different configurations without modifying the codebase. It's a best practice that adds a layer of flexibility to your system.
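If you go that route, a minimal sketch could look like the following; FILE_EXTENSIONS is a made-up variable name for this example, not something the loader expects:
# In .env (hypothetical entry):
# FILE_EXTENSIONS=.py,.js,.ts,.md

extensions = os.getenv("FILE_EXTENSIONS", ".py,.js,.ts,.md").split(",")

loader = GithubRepositoryReader(
    github_client,
    owner=owner,
    repo=repo,
    filter_file_extensions=(extensions, GithubRepositoryReader.FilterType.INCLUDE),
    verbose=False,
    concurrent_requests=5,
)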
Run the command to start:
python3 main.py
The console will ask for a repository URL. To test, I used my repository explaining how to build a ChatGPT plugin using Express.js, so the source files all have the .js extension.
This is how the console will look during the interaction:
Please enter the GitHub repository URL: https://github.com/soos3d/chatgpt-plugin-development-quickstart-express
Loading chatgpt-plugin-development-quickstart-express repository by soos3d
Documents uploaded:
{'file_path': 'README.md', 'file_name': 'README.md'}
{'file_path': 'index.js', 'file_name': 'index.js'}
{'file_path': 'src/app.js', 'file_name': 'app.js'}
Uploading to vector store...
Your Deep Lake dataset has been successfully created!
Dataset(path='hub://YOUR_ORG/repository_db', tensors=['embedding', 'id', 'metadata', 'text'])
tensor      htype      shape      dtype    compression
-------     -------    -------    -------  -------
embedding   embedding  (5, 1536)  float32  None
id          text       (5, 1)     str      None
metadata    json       (5, 1)     str      None
text        text       (5, 1)     str      None
Test question: What is the repository about?
==================================================
Answer: The repository is a ChatGPT Plugin Quickstart with Express.js. It provides a foundation for
developing custom ChatGPT plugins using JavaScript and Express.js. The sample plugin in the
repository showcases how ChatGPT can integrate with external APIs, specifically API-Ninja's API, to
enhance its capabilities. The plugin fetches airport data based on a city name provided by the user.
Please enter your question (or type 'exit' to quit): how does the server work?
Your question: how does the server work?
==================================================
Answer: The server in this context works by setting up an Express.js server with various endpoints to serve
a ChatGPT plugin. It initializes the server, configures it to parse JSON in the body of incoming
requests, and sets up routes for serving the plugin manifest, OpenAPI schema, logo image, and
handling API requests. It also defines a catch-all route to handle any other requests. Finally, the
server starts and listens for requests on the specified port.
Please enter your question (or type 'exit' to quit): exit
Exiting, thanks for chatting!
Diving Deeper: LlamaIndex's Low-Level API for customizations
While the high-level API of LlamaIndex offers a seamless experience for most use cases, there might be situations where you need more granular control over the query logic. This is where the low-level API shines, offering customizations to fine-tune your interactions with the indexed data.
Building the Index
To start, you'll need to build an index from your documents, the same as we have done so far.
# Create an index of the documents
try:
    vector_store = DeepLakeVectorStore(
        dataset_path=dataset_path,
        overwrite=True,
        runtime={"tensor_db": True},
    )
except Exception as e:
    print(f"An unexpected error occurred while creating or fetching the vector store: {str(e)}")

storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(docs, storage_context=storage_context)
Configuring the Retriever
The retriever is responsible for fetching relevant nodes from the index. LlamaIndex supports various retrieval modes, allowing you to choose the one that best fits your needs:
from llama_index.retrievers import VectorIndexRetriever
retriever = VectorIndexRetriever(index=index, similarity_top_k=4)
Let's break this down:
In modern retrieval systems, documents and queries are represented as vectors, often generated by machine learning models. When a query is made, its vector is compared to document vectors in the index using metrics like cosine similarity. The documents are then ranked based on their similarity scores to the query.
Top-k Retrieval: Instead of returning all the documents sorted by their similarity, often only the top k most similar documents are of interest. This is where similarity_top_k comes into play. If similarity_top_k=4, the system will retrieve the top 4 most similar documents to the given query.
In the context of the code:
retriever = VectorIndexRetriever(index=index, similarity_top_k=4)
The VectorIndexRetriever is configured to retrieve the top 4 documents most similar to any given query.
Benefits of using similarity_top_k:
- Efficiency: Retrieving only the top-k results can be faster and more memory-efficient than retrieving all results, especially when dealing with large datasets.
- Relevance: Users are primarily interested in the most relevant results in many applications. By focusing on the top-k results, the system can provide the most pertinent information without overwhelming the user with less relevant results.
However, choosing an appropriate value for k is essential. Some relevant results might be missed if k is too small. If k is too large, the system might return more results than necessary, which could be less efficient and potentially less helpful to the user.
While creating this guide, I noticed that using a value above '4' for the parameter led the LLM to produce off-context responses.
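One way to sanity-check your choice of k is to call the retriever directly and look at the similarity scores it returns. This sketch assumes the retriever configured above and an example question; the file_name field comes from the metadata the GitHub loader attaches to each document:
# Inspect what the retriever returns for a sample query
results = retriever.retrieve("How does the server handle API requests?")

for node_with_score in results:
    # Each result carries the node's text, metadata, and similarity score
    print(node_with_score.score, node_with_score.node.metadata.get("file_name"))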
Customize the query engine
In LlamaIndex, you can pass extra parameters and customizations to the query engine.
from llama_index import get_response_synthesizer
# Note: import paths may vary slightly across llama_index versions
from llama_index.query_engine import RetrieverQueryEngine
from llama_index.indices.postprocessor import SimilarityPostprocessor

response_synthesizer = get_response_synthesizer()

query_engine = RetrieverQueryEngine.from_args(
    retriever=retriever,
    response_mode='default',
    response_synthesizer=response_synthesizer,
    node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.7)],
)
Let's break down this customization:
Getting the Response Synthesizer:
response_synthesizer = get_response_synthesizer()
Here, the get_response_synthesizer function is called to get an instance of the response synthesizer. The query engine will use this synthesizer to combine and refine the information retrieved by the retriever.
Configuring the Query Engine:
query_engine = RetrieverQueryEngine.from_args(
    retriever=retriever,
    response_mode='default',
    response_synthesizer=response_synthesizer,
    node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.7)],
)
This section configures and initializes the query engine with the following components and settings:
- retriever: This component fetches relevant nodes (or documents) based on the query. It's passed as an argument; we set it up in the previous step.
- response_mode='default': This sets the mode in which the response will be synthesized. The 'default' mode means the system will "create and refine" an answer by sequentially going through each retrieved node, making a separate LLM call for each node. This mode is suitable for generating more detailed explanations.
- response_synthesizer=response_synthesizer: The previously obtained response synthesizer is passed to the query engine. This component will generate the final response using the specified response_mode.
- node_postprocessors: This is a list of postprocessors that can be applied to the nodes retrieved by the retriever. A SimilarityPostprocessor with a similarity_cutoff of 0.7 is used in this case. This postprocessor filters out nodes based on their similarity scores, ensuring only nodes with a score above 0.7 are considered.
All of these are optional except for the retriever; I recommend testing the various modes and values to find what works best for your use case.
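Once configured, the customized engine is used exactly like the default one. As a quick sketch (the question text is just an example), the response object also exposes the source nodes that survived the postprocessor, which is handy for checking the cutoff:
response = query_engine.query("How is the Express server configured?")
print(response)

# Nodes that passed the similarity_cutoff and were sent to the LLM
for source in response.source_nodes:
    print(source.score, source.node.metadata.get("file_name"))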
Exploring Different Response Modes
The RetrieverQueryEngine offers various response modes to tailor the synthesis of responses based on the retrieved nodes.
Here's a breakdown of some of the available modes:
- Default Mode:
query_engine = RetrieverQueryEngine.from_args(retriever, response_mode='default')
In the default mode, the system processes each retrieved node sequentially, making a separate LLM call for each one. This mode is suitable for generating detailed answers.
- Compact Mode:
query_engine = RetrieverQueryEngine.from_args(retriever, response_mode='compact')
The compact mode fits as many node text chunks as possible within the maximum prompt size during each LLM call. If there are too many chunks, it refines the answer by processing multiple prompts.
- Tree Summarize Mode:
query_engine = RetrieverQueryEngine.from_args(retriever, response_mode='tree_summarize')
This mode constructs a tree from the set of retrieved nodes and returns the root node as the response. It's beneficial for summarization tasks.
- No Text Mode:
query_engine = RetrieverQueryEngine.from_args(retriever, response_mode='no_text')
In the no-text mode, the retriever fetches the nodes that would have been sent to the LLM but doesn't send them. This mode allows you to inspect the retrieved nodes without generating a synthesized response.
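As a usage sketch, the tree_summarize mode pairs naturally with broad, whole-repository questions (the question text here is arbitrary):
summary_engine = RetrieverQueryEngine.from_args(retriever, response_mode='tree_summarize')
summary = summary_engine.query("Give a high-level summary of what this repository does.")
print(summary)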
How does LlamaIndex compare to LangChain?
Now that you understand how LlamaIndex works, it's a good time to compare it with the other big name in the space: LangChain.
LlamaIndex and LangChain are both frameworks/libraries designed to enhance the capabilities of large language models by allowing them to interact with external data. While they serve similar overarching goals, their approaches and features differ.
Key Features:
LlamaIndex:
- Data Connectors: Offers a variety of connectors to ingest data, making it versatile for different data sources. Llama Hub is an excellent source of community-made tools.
- Data Structuring: Allows users to structure data using various index types, such as list index, tree index, and even the ability to compose indices.
- Query Abstraction: Provides layers of abstraction, enabling both simple and complex data querying.
LangChain:
- Modular Design: Comes with modules for tools, agents, and chains, each serving a distinct purpose.
- Chains: A series of steps or actions the language model takes, ideal for tasks requiring multiple interactions.
- Agents: Autonomous entities that can decide the next steps in a process, adding a layer of decision-making to the model's interactions.
- Tools: Utilities that agents use to perform specific tasks, such as searching or querying an index.
Use Cases:
- LlamaIndex is best suited for applications that require complex data structures and querying. Its strength lies in handling and querying structured data.
- LangChain, on the other hand, excels in scenarios that require multiple interactions with a language model, especially when those interactions involve decision-making or a series of steps.
Of course, you can combine the two, and the potential of doing so is promising. By integrating LlamaIndex's structured data querying capabilities with LangChain's multi-step interaction and decision-making features, developers can create robust and versatile applications that offer the best of both worlds.
So which one should you use? Pick the tool that makes your use case easier to handle; that is usually the deciding factor.
Conclusion
In this comprehensive guide, we've journeyed through the intricacies of integrating LlamaIndex with Activeloop Deep Lake to create a conversational interface for GitHub repositories. We've seen how these powerful tools can transform a static codebase into an interactive, queryable entity. The synergy between LlamaIndex's data structuring and Deep Lake's optimized storage offers a robust solution for managing and understanding your code repositories.
The code we've explored indexes GitHub repositories and makes them accessible through natural language queries. This opens up a plethora of possibilities. Imagine a future where you don't just browse through code; you have conversations with it.
This guide gave you the basics of using LlamaIndex and Deep Lake; try improving and customizing this app as practice.