Putting Together an OpenAI Agent With LlamaIndex 

Thanks to the new OpenAI API that supports function calling, creating your own agent has never been easier!

In this tutorial notebook, we’ll demonstrate how to build an OpenAI agent in 50 lines of code or less. Despite its brevity, our agent is fully featured and capable of carrying on conversations while using various tools. Using LlamaIndex, we will build an agent that gets the current price of a stock via the Yahoo Finance API.

Setting Up

The main things we need are:

  1. the OpenAI API (using our own llama_index LLM class)
  2. a place to keep conversation history
  3. a definition for tools that our agent can use.

If you’re opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.

%pip install llama-index-agent-openai
%pip install llama-index-llms-openai
!pip install llama-index
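
You will also need an OpenAI API key for the examples below. A minimal way to provide it (assuming you keep the key in the standard OPENAI_API_KEY environment variable, which the OpenAI LLM class reads by default):

import os

# Make the key available to the OpenAI client; replace the placeholder
# with your own key, or export it in your shell instead.
os.environ["OPENAI_API_KEY"] = "sk-..."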

Next, we need to set up the foundation for working with LlamaIndex agents, tools, and OpenAI’s LLMs, while also ensuring proper asynchronous execution.

import json
from typing import Sequence, List

from llama_index.llms.openai import OpenAI
from llama_index.core.llms import ChatMessage
from llama_index.core.tools import BaseTool, FunctionTool

import nest_asyncio

nest_asyncio.apply()

Here we create a function that uses the Yahoo Finance API (via the yfinance package) to get the current price of a stock.

import yfinance as yf
from functools import cache
from typing import Union

@cache
def get_stock_price(ticker: str) -> Union[float, None]:
  """
  Retrieves the current price of a stock using yfinance, with caching for performance.

  Args:
    ticker: The ticker symbol of the stock.

  Returns:
    The current price of the stock, or None if an error occurs.
  """
  try:
    stock = yf.Ticker(ticker)
    return stock.info["regularMarketPrice"]
  except Exception as e:  # network errors or a missing "regularMarketPrice" key
    print(f"Error retrieving price for {ticker}: {e}")
    return None

stock_tool = FunctionTool.from_defaults(fn=get_stock_price)
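
Before handing the tool to an agent, it is worth a quick sanity check that the underlying function works on its own. A minimal check (the printed price will vary with the market, and "AAPL" is just an example ticker):

# Quick manual check of the function and the wrapped tool.
print(get_stock_price("AAPL"))
print(stock_tool.metadata.name)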

Agent Definition

Now, we define our agent that’s capable of holding a conversation and calling tools in under 50 lines of code.

The meat of the agent logic is in the chat method. At a high level, there are three steps:

  1. Call OpenAI to decide which tool (if any) to call and with what arguments.
  2. Call the tool with the arguments to obtain an output.
  3. Call OpenAI to synthesize a response from the conversation context and the tool output.

The reset method simply resets the conversation context, so we can start another conversation.

class YourOpenAIAgent:
    def __init__(
        self,
        tools: Sequence[BaseTool] = [],
        llm: OpenAI = OpenAI(temperature=0, model="gpt-3.5-turbo-0613"),
        chat_history: List[ChatMessage] = [],
    ) -> None:
        self._llm = llm
        self._tools = {tool.metadata.name: tool for tool in tools}
        self._chat_history = chat_history

    def reset(self) -> None:
        self._chat_history = []

    def chat(self, message: str) -> str:
        chat_history = self._chat_history
        chat_history.append(ChatMessage(role="user", content=message))
        tools = [
            tool.metadata.to_openai_tool() for _, tool in self._tools.items()
        ]

        ai_message = self._llm.chat(chat_history, tools=tools).message
        chat_history.append(ai_message)

        tool_calls = ai_message.additional_kwargs.get("tool_calls", None)
        # parallel function calling is now supported
        if tool_calls is not None:
            # run every requested tool first, so each tool_call_id gets a response
            for tool_call in tool_calls:
                function_message = self._call_function(tool_call)
                chat_history.append(function_message)
            # then ask the LLM to synthesize a final answer from the tool outputs
            ai_message = self._llm.chat(chat_history).message
            chat_history.append(ai_message)

        return ai_message.content

    def _call_function(self, tool_call: dict) -> ChatMessage:
        id_ = tool_call["id"]
        function_call = tool_call["function"]
        tool = self._tools[function_call["name"]]
        output = tool(**json.loads(function_call["arguments"]))
        return ChatMessage(
            name=function_call["name"],
            content=str(output),
            role="tool",
            additional_kwargs={
                "tool_call_id": id_,
                "name": function_call["name"],
            },
        )

The agent serves as a bridge between the user and the LLM, managing conversation flow and tool integration. Tools extend the agent’s capabilities with custom functions. The agent maintains a chat history for context. It handles tool calls requested by the LLM, enabling dynamic interactions.

agent = YourOpenAIAgent(tools=[stock_tool])
agent.chat("Hi")
'Hello! How can I assist you today?'
agent.chat("What is the stock price of appl")

LlamaIndex ships several agent implementations of its own, some more fully featured than our minimal version. For example, it provides an OpenAIAgent.

OpenAIAgent 

This agent implementation adheres to the BaseChatEngine and BaseQueryEngine interfaces, making it seamlessly compatible with the rest of the LlamaIndex framework. It also offers several advanced features: support for multiple function calls per conversation turn, streaming, async endpoints, and callback and tracing functionality.

from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo-0613")
agent = OpenAIAgent.from_tools(
    [stock_tool], llm=llm, verbose=True
)
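
With verbose=True, the agent logs the function calls it decides to make. A simple synchronous call looks the same as with our hand-rolled agent, and the async endpoint (achat) can be awaited from an async context; a brief sketch:

# Synchronous chat; the agent decides whether to call the stock tool.
response = agent.chat("What is the current stock price of AAPL?")
print(response)

# Async endpoint (await this inside an async context or a notebook cell):
# response = await agent.achat("What is the current stock price of MSFT?")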

Streaming Chat

One key advantage is the ability to receive responses in a streaming fashion, allowing for incremental, real-time interaction with the model. This is particularly useful for applications where immediate feedback or step-by-step processing is required, such as conversational interfaces, real-time translation, or content generation. Streaming chat also comes with async endpoints, callback and tracing support, and an async streaming variant, providing flexibility and efficiency in handling conversations and responses.

response = agent.stream_chat(
    "What is the current stock price of AAPL? Once you have the answer, use"
    " it to write a short story about a group of traders."
)

response_gen = response.response_gen

for token in response_gen:
    print(token, end="")
