Deploy Your Complete AI Stack
Powered by Prayog.io - Your Ultimate AI Development Platform
Get your complete AI development environment running in minutes! This guide focuses on the fastest way to deploy your AI stack with OpenWebUI, N8N workflow automation, Qdrant vector database, PostgreSQL, and comprehensive monitoring.
What You Get Instantly
Your complete AI development environment includes:
| App | URL | Purpose |
|---|---|---|
| Open WebUI | http://localhost:3000 | AI Chat Interface |
| Grafana Monitoring | http://localhost:4000 | Infrastructure & System Monitoring |
| Langfuse | http://localhost:3001 | LLM Observability & Analytics |
| OpenWebUI Pipelines | http://localhost:9099 | AI Pipeline Processing |
| N8N with OpenTelemetry | http://localhost:5678 | Workflow Automation + Observability |
| Qdrant | http://localhost:6333 | Vector Database |
| PostgreSQL | localhost:5433 | Relational Database |
Prerequisites
Before you begin, ensure you have:
- Docker and Docker Compose installed - Download from docker.com
- Git installed - For cloning the prayog-io/prayog-ai-stack repository
- 8GB+ RAM recommended - For optimal performance
- Available ports - Ensure ports 3000, 3001, 4000, 5433, 5678, 6333, and 9099 are free
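Before running the stack, you can sanity-check the prerequisites from a terminal. This is a minimal sketch, not a script shipped with the repo: the `have` helper and the port list are illustrative, and `lsof` availability varies by OS.

```bash
# preflight sketch: confirm required tools are on PATH and the stack's ports are free
have() { command -v "$1" >/dev/null 2>&1 && echo "ok" || echo "missing"; }

for tool in docker git; do
  printf '%s: %s\n' "$tool" "$(have "$tool")"
done

# ports used by the stack (from the table above)
for p in 3000 3001 4000 5433 5678 6333 9099; do
  if lsof -i ":$p" >/dev/null 2>&1; then
    echo "port $p is in use"
  else
    echo "port $p is free"
  fi
done
```

If any tool reports `missing` or a port is in use, resolve that first; otherwise the quick-start below should come up cleanly.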
Super Quick Start (One Script!)
The fastest way to get your entire AI stack running:
```bash
git clone https://github.com/prayog-io/prayog-ai-stack.git
cd prayog-ai-stack
./quick-start.sh
```

That’s it! These three commands set up everything and get you running in minutes.
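Once quick-start.sh finishes, you can confirm the containers actually came up. The commands below assume the stack is orchestrated with Docker Compose from the repo root and that a service is named `n8n` — both are assumptions, so adjust to match the repo's compose file.

```bash
# sketch: verify the stack after quick-start.sh (assumes a Docker Compose
# project in the repo root; service names may differ)
cd prayog-ai-stack
docker compose ps            # list all services and their status
docker compose logs -f n8n   # tail one service's logs ("n8n" is an assumed service name)
```

Every service should show a `running` (or `healthy`) state before you open the URLs below.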
Quick Access
Once the stack is running, access your services:
| Service | URL | Login |
|---|---|---|
| OpenWebUI | http://localhost:3000 | Sign up on first visit |
| Grafana | http://localhost:4000 | admin / admin123 |
| Langfuse | http://localhost:3001 | Create account |
| N8N | http://localhost:5678 | Setup on first visit |
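To smoke-test the HTTP endpoints from the table above without opening a browser, a small curl loop works. This is a hedged sketch: it assumes the default ports, and `/collections` is Qdrant's standard REST path for listing collections.

```bash
# report "up" if a URL answers with a 2xx/3xx status, "down" otherwise
check() {
  if curl -fsS -o /dev/null --max-time 2 "$1"; then echo "up"; else echo "down"; fi
}

for url in http://localhost:3000 http://localhost:3001 http://localhost:4000 \
           http://localhost:5678 http://localhost:6333/collections; do
  printf '%-40s %s\n' "$url" "$(check "$url")"
done
```

Any `down` line means that service needs a look in the logs before you rely on it.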
Management Commands
| Command | Description |
|---|---|
| `./quick-start.sh` | Start everything |
| `./status.sh` | Check service health |
| `./logs.sh` | View service logs |
| `./stop.sh` | Stop all services |
Next Steps
Once your AI stack is running:
- Open OpenWebUI at http://localhost:3000 and create your account
- Start chatting with AI models immediately
- Monitor infrastructure and system metrics through Grafana at http://localhost:4000
- Track AI interactions with Langfuse at http://localhost:3001
- Create workflows using N8N at http://localhost:5678
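Beyond the web UIs, the two databases can be reached directly. The snippet below is a sketch: the PostgreSQL user, password, and database name are placeholders (check the repo's .env or compose file for the real credentials), while `/collections` is Qdrant's standard REST endpoint for listing collections.

```bash
# PostgreSQL on localhost:5433 — credentials here are assumed placeholders
psql "postgresql://postgres:postgres@localhost:5433/postgres" -c "SELECT version();"

# Qdrant's REST API on localhost:6333 — list existing collections
curl -s http://localhost:6333/collections
```

These are handy when wiring N8N workflows or custom pipelines to the stack's storage layer.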
Your AI development environment is now ready for immediate use!
Need More Control?
For advanced configuration, security settings, troubleshooting, scaling options, and production deployment, see our comprehensive Custom Deployment Guide.