Building AI Agent Applications with FastAPI: A Comprehensive Guide to the LangGraph Agent Template

May 16, 2025

Introduction

The FastAPI LangGraph Agent Template is a production-ready framework designed for developers looking to build AI agent applications efficiently. This template integrates LangGraph for AI workflows, providing a solid foundation for scalable, secure, and maintainable services.

Key Features of the FastAPI LangGraph Agent Template

  • Production-Ready Architecture: Built on FastAPI for high-performance async API endpoints.
  • LangGraph Integration: Seamlessly integrates with LangGraph for AI agent workflows.
  • Monitoring Tools: Utilizes Langfuse for observability and monitoring of LLMs.
  • Structured Logging: Environment-specific logging for better debugging.
  • Rate Limiting: Configurable rules to protect your API.
  • Data Persistence: Uses PostgreSQL for reliable data storage.
  • Containerization: Supports Docker and Docker Compose for easy deployment.
  • Metrics and Dashboards: Integrates Prometheus and Grafana for real-time monitoring.
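The rate limiting listed above is a good example of what these features do under the hood. The template's own implementation isn't shown here; as an illustration only, configurable rate limiting is commonly built on a token bucket, which can be sketched with the standard library alone:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: `rate` tokens per second, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens accrued since the last check, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=2)  # 5 requests/second, burst of 2
results = [bucket.allow() for _ in range(3)]
print(results)  # the burst of 2 passes, the third request is throttled
```

In a real API, a bucket like this would typically be kept per client (keyed by IP or user ID) and checked in middleware before the request reaches a handler.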

Security Features

Security is paramount in any application. This template includes:

  • JWT-based Authentication: Secure user sessions with JSON Web Tokens.
  • Session Management: Efficiently manage user sessions.
  • Input Sanitization: Protect against common vulnerabilities.
  • CORS Configuration: Control resource sharing across domains.
  • Rate Limiting Protection: Prevent abuse of your API.

Developer Experience

The template is designed with developers in mind, featuring:

  • Environment-Specific Configuration: Easily manage settings for different environments.
  • Comprehensive Logging System: Keep track of application behavior.
  • Clear Project Structure: Navigate the codebase with ease.
  • Type Hints: Improve code readability and maintainability.
  • Easy Local Development Setup: Get started quickly with minimal configuration.
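Environment-specific configuration usually boils down to selecting a settings profile from an environment variable. The variable name `APP_ENV` and the profile values below are hypothetical (the template drives its settings from the `.env` files described later); this is just a sketch of the pattern:

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    environment: str
    debug: bool
    log_level: str

# Illustrative per-environment defaults, not the template's actual values.
PROFILES = {
    "development": Settings("development", debug=True, log_level="DEBUG"),
    "staging": Settings("staging", debug=False, log_level="INFO"),
    "production": Settings("production", debug=False, log_level="WARNING"),
}

def load_settings() -> Settings:
    env = os.environ.get("APP_ENV", "development")  # hypothetical variable name
    return PROFILES.get(env, PROFILES["development"])

settings = load_settings()
print(settings.log_level)
```

Freezing the dataclass keeps settings immutable at runtime, so a misbehaving handler cannot silently flip `debug` on in production.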

Model Evaluation Framework

This template includes a robust framework for evaluating AI models:

  • Automated Metric-Based Evaluation: Score model outputs against defined criteria without manual review.
  • Integration with Langfuse: Fetch traces for detailed analysis.
  • Interactive CLI: User-friendly interface for running evaluations.
  • Customizable Metrics: Define your own evaluation criteria.

Quick Start Guide

Prerequisites

Before you begin, ensure you have the following installed:

  • Python 3.13+
  • PostgreSQL: For data persistence.
  • Docker and Docker Compose: Optional, but recommended for deployment.

Environment Setup

Follow these steps to set up your environment:

  1. Clone the repository:
    git clone https://github.com/wassim249/fastapi-langgraph-agent-production-ready-template
    cd fastapi-langgraph-agent-production-ready-template
  2. Create a virtual environment and install dependencies:
    uv sync
  3. Copy the example environment file:
    cp .env.example .env.development
  4. Update the `.env.development` file with your configuration.

Database Setup

Set up your PostgreSQL database:

  1. Create a PostgreSQL database (e.g., Supabase or local PostgreSQL).
  2. Update the database connection string in your `.env` file:
    POSTGRES_URL="postgresql://POSTGRES_USER:POSTGRES_PASSWORD@POSTGRES_HOST:POSTGRES_PORT/POSTGRES_DB"

The ORM will handle table creation automatically. If you encounter issues, run the schemas.sql file to create tables manually.
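A common source of "issues" at this step is a malformed connection URL. One way to sanity-check yours before starting the app is to break it into its components with the standard library (the credentials below are placeholders for illustration):

```python
from urllib.parse import urlparse

def parse_postgres_url(url: str) -> dict:
    """Split a PostgreSQL connection URL into its components for validation."""
    parts = urlparse(url)
    assert parts.scheme == "postgresql", "expected a postgresql:// URL"
    return {
        "user": parts.username,
        "password": parts.password,
        "host": parts.hostname,
        "port": parts.port,
        "database": parts.path.lstrip("/"),
    }

# Placeholder credentials, not real ones.
cfg = parse_postgres_url("postgresql://app_user:s3cret@localhost:5432/agent_db")
print(cfg["host"], cfg["port"], cfg["database"])  # localhost 5432 agent_db
```

If any component comes back `None`, the URL is missing a piece (an absent username or port is a frequent culprit) and the ORM will fail to connect with a less obvious error.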

Running the Application

Local Development

  1. Install dependencies:
    uv sync
  2. Run the application:
    make dev
  3. Access Swagger UI: http://localhost:8000/docs

Using Docker

  1. Build and run with Docker Compose:
    make docker-build-env ENV=development
    make docker-run-env ENV=development
  2. Access the monitoring stack:

    Default credentials for Grafana:

    • Username: admin
    • Password: admin

Model Evaluation

The project includes a robust evaluation framework for measuring and tracking model performance over time. You can run evaluations with different options using the provided Makefile commands:

make eval [ENV=development|staging|production]

For quick evaluations, use:

make eval-quick [ENV=development|staging|production]

To run evaluations without report generation:

make eval-no-report [ENV=development|staging|production]

Evaluation Features

  • Interactive CLI: User-friendly interface with colored output and progress bars.
  • Flexible Configuration: Set default values or customize at runtime.
  • Detailed Reports: JSON reports with comprehensive metrics including overall success rate and timing information.

Customizing Metrics

Evaluation metrics can be defined in evals/metrics/prompts/ as markdown files. To create a new metric:

  1. Create a new markdown file (e.g., my_metric.md) in the prompts directory.
  2. Define the evaluation criteria and scoring logic.
  3. The evaluator will automatically discover and apply your new metric.
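The exact prompt format the evaluator expects is not documented here, so the file below is purely illustrative of what a metric definition might contain:

```markdown
# Conciseness

Evaluate whether the response answers the user's question without
unnecessary padding or repetition.

Scoring:
- 1.0 — direct, complete answer with no filler
- 0.5 — correct but verbose or repetitive
- 0.0 — rambling or off-topic
```

Because metrics are plain markdown files, adding or tuning one is a text edit rather than a code change, which keeps evaluation criteria reviewable alongside the rest of the repository.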

Viewing Reports

Reports are generated in the evals/reports/ directory with timestamps in the filename:

evals/reports/evaluation_report_YYYYMMDD_HHMMSS.json

Each report includes high-level statistics and detailed trace-level information for debugging.
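Since the filenames embed a `YYYYMMDD_HHMMSS` timestamp, they sort chronologically as plain strings, which makes "load the latest report" a one-liner. The snippet below demonstrates this against synthetic files in a temporary directory; the `success_rate` key is illustrative, as the real report schema may differ:

```python
import json
import tempfile
from pathlib import Path

def latest_report(directory: Path) -> dict:
    """Load the most recent report; timestamped names sort chronologically as strings."""
    newest = max(directory.glob("evaluation_report_*.json"), key=lambda p: p.name)
    return json.loads(newest.read_text())

# Demo with synthetic reports; the real files live under evals/reports/.
with tempfile.TemporaryDirectory() as tmp:
    d = Path(tmp)
    (d / "evaluation_report_20250101_120000.json").write_text(json.dumps({"success_rate": 0.8}))
    (d / "evaluation_report_20250102_120000.json").write_text(json.dumps({"success_rate": 0.9}))
    report = latest_report(d)
    print(report["success_rate"])  # 0.9 — the newer report wins
```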

Conclusion and Resources

The FastAPI LangGraph Agent Template is an excellent choice for developers looking to build AI agent applications with a focus on performance, security, and maintainability. With its comprehensive features and easy setup, you can quickly get started on your next project.

For more information, visit the official repository:

FastAPI LangGraph Agent Template on GitHub

FAQ

What is FastAPI?

FastAPI is a modern, high-performance web framework for building APIs with Python, based on standard Python type hints. It is designed to be easy to use while delivering excellent performance.

How does LangGraph integrate with FastAPI?

LangGraph provides a seamless integration for building AI agent workflows within FastAPI applications, allowing developers to leverage advanced AI capabilities easily.

What are the benefits of using this template?

This template offers a production-ready architecture, built-in security features, and a robust model evaluation framework, making it easier for developers to create scalable AI applications.

Can I use Docker with this template?

Yes, the template supports Docker and Docker Compose, allowing for easy deployment and management of your application in containerized environments.