Getting Started with DSPy for Beginners

If you’re new to the world of language models and prompt engineering, getting started with DSPy might seem daunting at first. However, DSPy offers a beginner-friendly tutorial that can help you get up to speed quickly. While DSPy may not be the most efficient tool for simple language model tasks, it really shines when it comes to more complex tasks such as knowledge database lookups, chain of thought reasoning, and multi-hop lookups.

One of the biggest advantages of DSPy is its clean, class-based representation of the workflow, which makes it easier to search for the best prompt structure for a given problem. DSPy also promises to eliminate tedious prompt engineering by training prompts on a set of examples. By simulating the code on the inputs and making one or more simple zero-shot calls that respect the declarative signature, DSPy provides a highly constrained search process that can automate and optimize prompt generation.

So, while DSPy may not be suitable for all tasks, it can offer significant advantages for more complex tasks by automating and optimizing the prompt generation process. Whether you’re a seasoned language model expert or just getting started, DSPy is definitely worth checking out.

Installation

Getting started with DSPy is relatively straightforward, thanks to the comprehensive documentation and beginner-friendly Colab notebook provided by the DSPy team. The notebook introduces the DSPy framework for programming with foundation models, including language models (LMs) and retrieval models (RMs).

One of the key features of DSPy is its emphasis on programming over prompting. Instead of relying solely on prompt engineering, DSPy provides a minimalistic set of Pythonic operations that compose and learn, allowing you to express complex tasks in a familiar syntax.

DSPy provides composable and declarative modules for instructing LMs, making it easy to define the steps of your program in a clear and concise way. On top of that, DSPy includes an automatic compiler that teaches LMs how to conduct the declarative steps in your program. The compiler will internally trace your program and then craft high-quality prompts for large LMs or train automatic finetunes for small LMs to teach them the steps of your task.

To get started with DSPy, simply follow the installation instructions provided in the documentation. Once you have DSPy installed, you can open the beginner-friendly Colab notebook and start exploring the framework’s features and capabilities. The notebook includes a series of examples and exercises that will help you get up to speed quickly and start building your own programs with DSPy.

This code prepares your environment to use DSPy. It checks if you have the necessary libraries installed and sets up a cache for faster data access. Finally, it makes the DSPy library available for you to use.

%load_ext autoreload
%autoreload 2

import sys
import os

try:  # When on Google Colab, clone the repo so we can download the cache.
    import google.colab
    repo_path = 'dspy'
    !git -C $repo_path pull origin || git clone https://github.com/stanfordnlp/dspy $repo_path
except ImportError:  # Not on Colab, so assume we're running locally.
    repo_path = '.'

if repo_path not in sys.path:
    sys.path.append(repo_path)

# Set up the cache for this notebook
os.environ["DSP_NOTEBOOK_CACHEDIR"] = os.path.join(repo_path, 'cache')

import pkg_resources  # Install the package if it's not installed.
if "dspy-ai" not in {pkg.key for pkg in pkg_resources.working_set}:
    !pip install -U pip
    !pip install dspy-ai
    !pip install openai~=0.28.1
    # !pip install -e $repo_path

import dspy

Getting Started

This code sets up DSPy to work with two different language models: a text generator (GPT-3.5-turbo) and a knowledge retriever that can access information from Wikipedia (ColBERTv2). This combination allows DSPy to generate text while also incorporating knowledge from a vast database.

turbo = dspy.OpenAI(model='gpt-3.5-turbo')
colbertv2_wiki17_abstracts = dspy.ColBERTv2(url='http://20.102.90.50:2017/wiki17_abstracts')

dspy.settings.configure(lm=turbo, rm=colbertv2_wiki17_abstracts)

Building a Q&A

The code loads a tiny sample from a dataset called HotPotQA, which contains questions and answers.

from dspy.datasets import HotPotQA

# Load the dataset.
dataset = HotPotQA(train_seed=1, train_size=20, eval_seed=2023, dev_size=50, test_size=0)

# Tell DSPy that the 'question' field is the input. Any other fields are labels and/or metadata.
trainset = [x.with_inputs('question') for x in dataset.train]
devset = [x.with_inputs('question') for x in dataset.dev]

len(trainset), len(devset)
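If everything loaded correctly, this should show (20, 50): 20 training examples and 50 development examples, matching the sizes requested above.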

DSPy requires minimal labeling: you only need labels for the initial question and final answer, and it figures out the rest.

train_example = trainset[0]
print(f"Question: {train_example.question}")
print(f"Answer: {train_example.answer}")

While this example uses an existing dataset, you can also define your own data format using dspy.Example.
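As a minimal sketch, building an example by hand looks something like this (the question and answer below are made up purely for illustration):

# Construct an example by hand; any keyword argument becomes a field.
qa_pair = dspy.Example(question="What is the capital of France?", answer="Paris")

# Mark which fields are inputs; the rest are treated as labels/metadata.
qa_pair = qa_pair.with_inputs("question")

print(qa_pair.question, "->", qa_pair.answer)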

How DSPy Works Behind the Scenes with LLMs

Key points:

  • Clean Separation: You focus on designing the information flow of your program (like steps needed to answer a question), while DSPy handles how to use the LLM effectively for each step.
  • Automatic Optimization: DSPy figures out the best way to “talk” to the LLM (e.g., what prompts to use) to achieve your desired outcome.
  • Comparison to PyTorch: If you know PyTorch (a framework for machine learning), think of DSPy as a similar tool but specifically for working with LLMs.

class BasicQA(dspy.Signature):
    """Answer questions with short factoid answers."""

    question = dspy.InputField()
    answer = dspy.OutputField(desc="often between 1 and 5 words")

Signatures:

  • Think of it as a recipe for giving instructions to the LLM.
  • It tells the LLM:
    • What kind of work it needs to do (e.g., answer a question).
    • What information it will receive (e.g., the question itself).
    • What kind of answer it should give (e.g., the answer to the question).
  • Each piece of information (question, answer) is called a “field.”
  • You can customize it for different tasks, like giving the LLM a long text and asking it to summarize it.
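For example, a summarization signature might look like the following sketch (the Summarize class and its field descriptions are just an illustration, not part of the tutorial):

class Summarize(dspy.Signature):
    """Summarize the given text in two to three sentences."""

    text = dspy.InputField(desc="a long passage to condense")
    summary = dspy.OutputField(desc="a short summary of the passage")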

Predictors:

  • Once you have a signature, you create a “predictor.”
  • Think of it as a skilled chef who follows the recipe (signature) and uses the LLM (ingredients) to cook the dish (answer).
  • Importantly, this chef can learn and adapt! As you use the predictor with different examples, it gets better at using the LLM to achieve the desired outcome.

# Define the predictor.
generate_answer = dspy.Predict(BasicQA)

# Pick an example from the dev set to query the predictor with.
dev_example = devset[18]

# Call the predictor on a particular input.
pred = generate_answer(question=dev_example.question)

# Print the input and the prediction.
print(f"Question: {dev_example.question}")
print(f"Predicted Answer: {pred.answer}")

Building the RAG

This example shows how to create a program in DSPy that answers questions using relevant information from Wikipedia. The program retrieves the top 3 relevant passages from Wikipedia based on the question. Then it uses those passages as context to generate an answer using an LLM.

class GenerateAnswer(dspy.Signature):
    """Answer questions with short factoid answers."""

    context = dspy.InputField(desc="may contain relevant facts")
    question = dspy.InputField()
    answer = dspy.OutputField(desc="often between 1 and 5 words")
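Before assembling the full program, it can help to see what dspy.ChainOfThought does with a signature on its own. Unlike dspy.Predict, it asks the LM to generate step-by-step reasoning before the final answer. A quick sketch, reusing BasicQA and dev_example from above:

# ChainOfThought adds a reasoning step ("rationale") before the answer field.
generate_answer_with_cot = dspy.ChainOfThought(BasicQA)
pred = generate_answer_with_cot(question=dev_example.question)

print(f"Rationale: {pred.rationale}")
print(f"Predicted Answer: {pred.answer}")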

Putting it Together

Here’s a simplified explanation of how the pieces of basic Retrieval-Augmented Generation (RAG) fit together:

  • First, we define a “signature” called GenerateAnswer which specifies the task:
    • Input: context (relevant facts) and question.
    • Output: answer (short factoid).
  • Next, we create a program called RAG that inherits from dspy.Module.
    • It has two sub-modules:
      • dspy.Retrieve: finds relevant passages.
      • dspy.ChainOfThought: generates an answer using the retrieved context and the question.
    • The forward method defines the main steps:
      1. Retrieve relevant passages using self.retrieve.
      2. Generate an answer using self.generate_answer with the retrieved context and the question.
      3. Return the answer along with the retrieved context.

class RAG(dspy.Module):
    def __init__(self, num_passages=3):
        super().__init__()

        self.retrieve = dspy.Retrieve(k=num_passages)
        self.generate_answer = dspy.ChainOfThought(GenerateAnswer)
    
    def forward(self, question):
        context = self.retrieve(question).passages
        prediction = self.generate_answer(context=context, question=question)
        return dspy.Prediction(context=context, answer=prediction.answer)
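Even before compiling, you can sanity-check the pipeline by calling the module directly; dspy.Module makes the instance callable and routes the call to forward(). The question below is just an example:

# Run the uncompiled (zero-shot) RAG pipeline end to end.
uncompiled_rag = RAG()
pred = uncompiled_rag(question="When was the first FIFA World Cup held?")
print(f"Predicted Answer: {pred.answer}")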

Compiling

Lastly, we need to compile the RAG program. Compiling fine-tunes the program using examples and a metric. Teleprompters are like AI chefs who improve the program’s instructions to the LLM. This is similar to training a neural network, but it uses prompts instead of direct parameter updates.

from dspy.teleprompt import BootstrapFewShot

# Validation logic: check that the predicted answer is correct.
# Also check that the retrieved context does actually contain that answer.
def validate_context_and_answer(example, pred, trace=None):
    answer_EM = dspy.evaluate.answer_exact_match(example, pred)
    answer_PM = dspy.evaluate.answer_passage_match(example, pred)
    return answer_EM and answer_PM

# Set up a basic teleprompter, which will compile our RAG program.
teleprompter = BootstrapFewShot(metric=validate_context_and_answer)

# Compile!
compiled_rag = teleprompter.compile(RAG(), trainset=trainset)

Now let’s try out the compiled RAG program:

# Ask any question you like to this simple RAG program.
my_question = "What castle did David Gregory inherit?"

# Get the prediction. This contains `pred.context` and `pred.answer`.
pred = compiled_rag(my_question)

# Print the contexts and the answer.
print(f"Question: {my_question}")
print(f"Predicted Answer: {pred.answer}")
print(f"Retrieved Contexts (truncated): {[c[:200] + '...' for c in pred.context]}")
Output:

Question: What castle did David Gregory inherit?
Predicted Answer: Kinnairdy Castle
Retrieved Contexts (truncated): ['David Gregory (physician) | David Gregory (20 December 1625 – 1720) was a Scottish physician and inventor. His surname is sometimes spelt as Gregorie, the original Scottish spelling. He inherited Kinn...', 'Gregory Tarchaneiotes | Gregory Tarchaneiotes (Greek: Γρηγόριος Ταρχανειώτης , Italian: "Gregorio Tracanioto" or "Tracamoto" ) was a "protospatharius" and the long-reigning catepan of Italy from 998 t...', 'David Gregory (mathematician) | David Gregory (originally spelt Gregorie) FRS (? 1659 – 10 October 1708) was a Scottish mathematician and astronomer. He was professor of mathematics at the University ...']

Conclusion

The key point is that DSPy makes it easier to build programs that use LLMs by automating some of the complex steps involved. For beginners, though, DSPy is potentially challenging. It assumes some understanding of large language models, machine learning concepts, and Python programming, and the documentation and examples use technical terms that require familiarity with these areas. It’s definitely not going to be as plug-and-play as other tools for, say, building agents, and the learning curve is fairly steep. While DSPy simplifies some aspects of working with LLMs, understanding its core concepts and building programs may require significant effort for someone new to these fields. DSPy is not inherently “simple,” but it aims to offer a more manageable way to work with LLMs for those who already have the necessary background.
