
Deploy Your Complete AI Stack

Powered by Prayog.io - Your Ultimate AI Development Platform

Get your complete AI development environment running in minutes! This guide focuses on the fastest way to deploy your AI stack with OpenWebUI, N8N workflow automation, Qdrant vector database, PostgreSQL, and comprehensive monitoring.

What You Get Instantly

Your complete AI development environment includes:

| App | URL | Purpose |
|---|---|---|
| Open WebUI | http://localhost:3000 | AI Chat Interface |
| Grafana Monitoring | http://localhost:4000 | Infrastructure & System Monitoring |
| Langfuse | http://localhost:3001 | LLM Observability & Analytics |
| OpenWebUI Pipelines | http://localhost:9099 | AI Pipeline Processing |
| N8N with OpenTelemetry | http://localhost:5678 | Workflow Automation + Observability |
| Qdrant | http://localhost:6333 | Vector Database |
| PostgreSQL | localhost:5433 | Relational Database |
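As a point of reference, the port mappings above correspond to a Docker Compose layout along these lines. This is a hedged sketch only: the service names, images, and internal ports shown here are assumptions, not copied from the repository's actual docker-compose.yml.

```yaml
# Illustrative fragment only -- service names, images, and internal
# container ports are assumptions, not the repository's actual file.
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"   # AI chat interface on host port 3000
  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"   # vector database REST API
  postgres:
    image: postgres:16
    ports:
      - "5433:5432"   # host 5433 avoids clashing with a local PostgreSQL
```

Note the PostgreSQL mapping: exposing container port 5432 on host port 5433 is why the table above lists 5433, and it leaves a locally installed PostgreSQL untouched.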

Prerequisites

Before you begin, ensure you have:

  1. Docker and Docker Compose installed - Download from docker.com
  2. Git installed - For cloning the prayog-io/prayog-ai-stack repository
  3. 8GB+ RAM recommended - For optimal performance
  4. Available ports - Ensure ports 3000, 3001, 4000, 5433, 5678, 6333, and 9099 are free
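Before starting the stack, you can verify that the required ports are actually free. The snippet below is a small, hypothetical helper (not a script shipped with the repository) that probes each port using bash's built-in /dev/tcp redirection:

```sh
#!/usr/bin/env bash
# Illustrative helper (not part of the repo): report whether each
# port required by the stack is free on localhost.
check_port() {
  # The probe runs in a subshell, so the test socket closes automatically.
  local port="$1"
  if (exec 3<>"/dev/tcp/127.0.0.1/${port}") 2>/dev/null; then
    echo "port ${port}: IN USE"
  else
    echo "port ${port}: free"
  fi
}

for port in 3000 3001 4000 5433 5678 6333 9099; do
  check_port "${port}"
done
```

If any port reports IN USE, stop the conflicting process or remap the port before running the quick-start script.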

Super Quick Start (One Command!)

The fastest way to get your entire AI stack running:

```sh
git clone https://github.com/prayog-io/prayog-ai-stack.git
cd prayog-ai-stack
./quick-start.sh
```

That’s it! These commands clone the repository and launch the entire stack, getting you running in minutes.

Quick Access

Once the stack is running, access your services:

| Service | URL | Login |
|---|---|---|
| OpenWebUI | http://localhost:3000 | Sign up on first visit |
| Grafana | http://localhost:4000 | admin / admin123 |
| Langfuse | http://localhost:3001 | Create account |
| N8N | http://localhost:5678 | Setup on first visit |

Management Commands

| Command | Description |
|---|---|
| `./quick-start.sh` | Start everything |
| `./status.sh` | Check service health |
| `./logs.sh` | View service logs |
| `./stop.sh` | Stop all services |
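For a sense of what a health check like the status command might do, here is a hedged sketch. The service list and script internals below are assumptions for illustration, not the repository's actual script; it simply probes each web endpoint with curl:

```sh
#!/usr/bin/env bash
# Illustrative health probe (not the repository's actual script):
# report UP or DOWN for each HTTP service in the stack.
probe() {
  local name="$1" url="$2"
  if curl -fsS --max-time 2 -o /dev/null "$url" 2>/dev/null; then
    echo "${name}: UP (${url})"
  else
    echo "${name}: DOWN (${url})"
  fi
}

probe "OpenWebUI" "http://localhost:3000"
probe "Grafana"   "http://localhost:4000"
probe "Langfuse"  "http://localhost:3001"
probe "N8N"       "http://localhost:5678"
probe "Qdrant"    "http://localhost:6333"
```

A service showing DOWN right after startup is often still initializing; re-run the probe after a minute before digging into logs.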

Next Steps

Once your AI stack is running:

  1. Open OpenWebUI at http://localhost:3000 and create your account
  2. Start chatting with AI models immediately
  3. Monitor everything through Grafana at http://localhost:4000
  4. Track AI interactions with Langfuse at http://localhost:3001
  5. Create workflows using N8N at http://localhost:5678
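To confirm the vector database as well, you can hit Qdrant's REST API directly. GET /collections is a standard Qdrant endpoint that lists existing collections; the fallback message below is just for illustration:

```sh
# List Qdrant collections; print a notice if the stack is not running.
curl -s http://localhost:6333/collections || echo "Qdrant is not reachable on port 6333"
```

A fresh deployment returns an empty collections list, which still confirms the database is up and answering requests.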

Your AI development environment is now ready for immediate use!

Need More Control?

For advanced configuration, custom deployment options, security settings, scaling, troubleshooting, and production setup, see our comprehensive Custom Deployment Guide.