Introduction
The FastAPI LangGraph Agent Template is a production-ready framework designed for developers looking to build AI agent applications efficiently. This template integrates LangGraph for AI workflows, providing a solid foundation for scalable, secure, and maintainable services.
Key Features of the FastAPI LangGraph Agent Template
- Production-Ready Architecture: Built on FastAPI for high-performance async API endpoints.
- LangGraph Integration: Seamlessly integrates with LangGraph for AI agent workflows (see the sketch after this list).
- Monitoring Tools: Utilizes Langfuse for observability and monitoring of LLMs.
- Structured Logging: Environment-specific logging for better debugging.
- Rate Limiting: Configurable rules to protect your API.
- Data Persistence: Uses PostgreSQL for reliable data storage.
- Containerization: Supports Docker and Docker Compose for easy deployment.
- Metrics and Dashboards: Integrates Prometheus and Grafana for real-time monitoring.
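
To make the architecture concrete, here is a minimal sketch of how a LangGraph workflow can be served from a FastAPI async endpoint. The state shape, node, and `/chat` route are illustrative assumptions, not the template's actual code.

```python
# Minimal sketch: a LangGraph workflow exposed via a FastAPI endpoint.
# The state shape, node, and route below are illustrative, not the template's code.
from typing import TypedDict

from fastapi import FastAPI
from langgraph.graph import StateGraph, START, END


class AgentState(TypedDict):
    question: str
    answer: str


def respond(state: AgentState) -> dict:
    # A real node would call an LLM here; this one just echoes the question.
    return {"answer": f"You asked: {state['question']}"}


# Build and compile the graph once at startup.
builder = StateGraph(AgentState)
builder.add_node("respond", respond)
builder.add_edge(START, "respond")
builder.add_edge("respond", END)
agent = builder.compile()

app = FastAPI()


@app.post("/chat")
async def chat(payload: dict) -> dict:
    # Invoke the compiled graph asynchronously from the async endpoint.
    result = await agent.ainvoke({"question": payload.get("question", "")})
    return {"answer": result["answer"]}
```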
 
Security Features
Security is paramount in any application. This template includes:
- JWT-based Authentication: Secure user sessions with JSON Web Tokens (see the sketch after this list).
- Session Management: Efficiently manage user sessions.
- Input Sanitization: Protect against common vulnerabilities.
- CORS Configuration: Control resource sharing across domains.
- Rate Limiting Protection: Prevent abuse of your API.
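
As a rough illustration of the JWT-based authentication pattern, the sketch below issues and verifies tokens with PyJWT behind a FastAPI dependency. The secret, expiry, and claim names are placeholder assumptions rather than the template's actual implementation.

```python
# Illustrative JWT helpers using PyJWT; secret, expiry, and claims are placeholders.
from datetime import datetime, timedelta, timezone

import jwt  # PyJWT
from fastapi import Depends, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

SECRET_KEY = "change-me"  # in practice, load this from the environment
ALGORITHM = "HS256"
bearer_scheme = HTTPBearer()


def create_access_token(user_id: str, expires_minutes: int = 30) -> str:
    payload = {
        "sub": user_id,
        "exp": datetime.now(timezone.utc) + timedelta(minutes=expires_minutes),
    }
    return jwt.encode(payload, SECRET_KEY, algorithm=ALGORITHM)


def get_current_user(
    credentials: HTTPAuthorizationCredentials = Depends(bearer_scheme),
) -> str:
    try:
        payload = jwt.decode(credentials.credentials, SECRET_KEY, algorithms=[ALGORITHM])
    except jwt.PyJWTError:
        raise HTTPException(status_code=401, detail="Invalid or expired token")
    return payload["sub"]
```

Any protected route can then declare `user_id: str = Depends(get_current_user)` to require a valid bearer token.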
 
Developer Experience
The template is designed with developers in mind, featuring:
- Environment-Specific Configuration: Easily manage settings for different environments (see the sketch after this list).
- Comprehensive Logging System: Keep track of application behavior.
- Clear Project Structure: Navigate the codebase with ease.
- Type Hints: Improve code readability and maintainability.
- Easy Local Development Setup: Get started quickly with minimal configuration.
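
One common way to implement environment-specific configuration, shown here only as an assumed approach rather than the template's exact code, is a pydantic-settings class whose env file is selected by an `APP_ENV` variable:

```python
# Sketch of environment-specific settings with pydantic-settings.
# Variable names and file layout are assumptions for illustration.
import os

from pydantic_settings import BaseSettings, SettingsConfigDict

APP_ENV = os.getenv("APP_ENV", "development")


class Settings(BaseSettings):
    # Pick .env.development, .env.staging, or .env.production at import time.
    model_config = SettingsConfigDict(env_file=f".env.{APP_ENV}", extra="ignore")

    postgres_url: str = "postgresql://localhost:5432/app"
    log_level: str = "DEBUG" if APP_ENV == "development" else "INFO"
    rate_limit: str = "60/minute"


settings = Settings()
```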
 
Model Evaluation Framework
This template includes a robust framework for evaluating AI models:
- Automated Metric-Based Evaluation: Automatically assess model outputs (see the sketch after this list).
- Integration with Langfuse: Fetch traces for detailed analysis.
- Interactive CLI: User-friendly interface for running evaluations.
- Customizable Metrics: Define your own evaluation criteria.
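
Conceptually, automated metric-based evaluation applies metric functions to a set of model outputs and aggregates a pass rate. The sketch below is a deliberately generic toy version of that idea; the template's evaluator works against Langfuse traces and markdown-defined metrics rather than these hard-coded checks.

```python
# Generic sketch of metric-based evaluation; metric functions and data are illustrative.
from statistics import mean


def non_empty(output: str) -> bool:
    return bool(output.strip())


def mentions_source(output: str) -> bool:
    return "source:" in output.lower()


METRICS = {"non_empty": non_empty, "mentions_source": mentions_source}


def evaluate(outputs: list[str]) -> dict[str, float]:
    # Score every output on every metric and report the pass rate per metric.
    return {
        name: mean(1.0 if metric(o) else 0.0 for o in outputs)
        for name, metric in METRICS.items()
    }


if __name__ == "__main__":
    sample = ["The answer is 42. Source: docs", ""]
    print(evaluate(sample))  # {'non_empty': 0.5, 'mentions_source': 0.5}
```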
 
Quick Start Guide
Prerequisites
Before you begin, ensure you have the following installed:
- Python 3.13+
- PostgreSQL: For data persistence.
- Docker and Docker Compose: Optional, but recommended for deployment.
 
Environment Setup
Follow these steps to set up your environment:
- Clone the repository:
  `git clone https://github.com/wassim249/fastapi-langgraph-agent-production-ready-template`
  `cd fastapi-langgraph-agent-production-ready-template`
- Create and activate a virtual environment:
  `uv sync`
- Copy the example environment file:
  `cp .env.example .env.development`
- Update the `.env.development` file with your configuration.
 
Database Setup
Set up your PostgreSQL database:
- Create a PostgreSQL database (e.g., Supabase or local PostgreSQL).
- Update the database connection string in your `.env.development` file, replacing the placeholders with your own credentials, host, port, and database name:
  `POSTGRES_URL="postgresql://:your-db-password@POSTGRES_HOST:POSTGRES_PORT/POSTGRES_DB"`

The ORM will handle table creation automatically. If you encounter issues, run the `schemas.sql` file to create the tables manually.
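
To illustrate what automatic table creation looks like, here is a minimal sketch assuming SQLModel as the ORM; the template's actual models and engine setup may differ.

```python
# Sketch of ORM-driven table creation, assuming SQLModel; the template's
# actual models and engine wiring may differ.
import os

from sqlmodel import Field, SQLModel, create_engine


class ChatSession(SQLModel, table=True):
    id: int | None = Field(default=None, primary_key=True)
    user_id: str
    name: str = ""


# POSTGRES_URL matches the variable configured in the environment file above.
engine = create_engine(os.environ["POSTGRES_URL"])

# Creates any tables declared on SQLModel metadata that do not exist yet.
SQLModel.metadata.create_all(engine)
```
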
Running the Application
Local Development
- Install dependencies:
  `uv sync`
- Run the application:
  `make dev`
- Access the Swagger UI at http://localhost:8000/docs
 
Using Docker
- Build and run with Docker Compose:
  `make docker-build-env ENV=development`
  `make docker-run-env ENV=development`
- Access the monitoring stack. Default credentials for Grafana:
  - Username: admin
  - Password: admin
 
 
Model Evaluation
The project includes a robust evaluation framework for measuring and tracking model performance over time. You can run evaluations with different options using the provided Makefile commands:
- Run a full evaluation: `make eval [ENV=development|staging|production]`
- For a quick evaluation: `make eval-quick [ENV=development|staging|production]`
- To run an evaluation without report generation: `make eval-no-report [ENV=development|staging|production]`

Evaluation Features
- Interactive CLI: User-friendly interface with colored output and progress bars.
- Flexible Configuration: Set default values or customize at runtime.
- Detailed Reports: JSON reports with comprehensive metrics including overall success rate and timing information.
 
Customizing Metrics
Evaluation metrics are defined as markdown files in `evals/metrics/prompts/`. To create a new metric:
- Create a new markdown file (e.g., `my_metric.md`) in the prompts directory.
- Define the evaluation criteria and scoring logic.
- The evaluator will automatically discover and apply your new metric (see the sketch after this list).
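
The discovery step can be pictured as globbing the prompts directory and turning each markdown file into a named metric. The loader below is an illustrative assumption, not the template's exact code:

```python
# Sketch of how markdown metric prompts could be discovered; the directory comes
# from the docs above, but the loading logic itself is illustrative.
from pathlib import Path

PROMPTS_DIR = Path("evals/metrics/prompts")


def load_metric_prompts() -> dict[str, str]:
    # Each *.md file becomes a metric named after the file stem.
    return {
        path.stem: path.read_text(encoding="utf-8")
        for path in sorted(PROMPTS_DIR.glob("*.md"))
    }


if __name__ == "__main__":
    for name, prompt in load_metric_prompts().items():
        print(f"{name}: {len(prompt)} characters of evaluation criteria")
```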
 
Viewing Reports
Reports are generated in the `evals/reports/` directory with timestamps in the filename:
`evals/reports/evaluation_report_YYYYMMDD_HHMMSS.json`
Each report includes high-level statistics and detailed trace-level information for debugging.
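
Because reports are plain JSON, they are easy to post-process. The snippet below loads the most recent report; the `success_rate` field is a hypothetical example of the high-level statistics a report might contain.

```python
# Load the most recent evaluation report; "success_rate" is a hypothetical key
# standing in for the report's high-level statistics.
import json
from pathlib import Path

reports = sorted(Path("evals/reports").glob("evaluation_report_*.json"))
if reports:
    latest = reports[-1]  # timestamped names sort chronologically
    data = json.loads(latest.read_text(encoding="utf-8"))
    print(f"{latest.name}: overall success rate = {data.get('success_rate', 'n/a')}")
else:
    print("No reports found; run `make eval` first.")
```
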
Conclusion and Resources
The FastAPI LangGraph Agent Template is an excellent choice for developers looking to build AI agent applications with a focus on performance, security, and maintainability. With its comprehensive features and easy setup, you can quickly get started on your next project.
For more information, visit the FastAPI LangGraph Agent Template repository on GitHub: https://github.com/wassim249/fastapi-langgraph-agent-production-ready-template

FAQ
What is FastAPI?
FastAPI is a modern, high-performance web framework for building APIs with Python, based on standard Python type hints. It is designed to be easy to use while delivering strong performance.
How does LangGraph integrate with FastAPI?
LangGraph defines the agent workflow as a graph, and the template exposes that workflow through FastAPI endpoints, allowing developers to leverage advanced AI capabilities with minimal glue code.
What are the benefits of using this template?
This template offers a production-ready architecture, built-in security features, and a robust model evaluation framework, making it easier for developers to create scalable AI applications.
Can I use Docker with this template?
Yes, the template supports Docker and Docker Compose, allowing for easy deployment and management of your application in containerized environments.
