
Self-hosting LobeChat the easy way
Yulei Chen

LobeChat is an open-source AI chat framework with a beautiful, modern UI. It supports multiple AI providers like OpenAI, Anthropic Claude, Google Gemini, and many more. Instead of paying for multiple AI subscriptions, you can bring your own API keys and get a unified chat interface for all your models.
Sliplane makes deploying LobeChat effortless. With one click, you get a running instance with SSL, no server setup, and no reverse proxy headaches. Your API keys stay on your own server, and you control who gets access.
Prerequisites
Before deploying, ensure you have a Sliplane account (free trial available).
You'll also need at least one AI provider API key (e.g. an OpenAI API key) to start chatting.
Quick start
Sliplane provides one-click deployment with presets.
- Click the deploy button above
- Select a project
- Select a server (if you just signed up, you get a 48-hour free trial server)
- Click Deploy!
About the preset
The one-click deploy above uses Sliplane's LobeChat preset. Here's what's included:
- Official `lobehub/lobe-chat` Docker image with a specific version tag for stability
- Port 3210 exposed for the web interface
- `ACCESS_CODE` pre-configured with a random password to protect your instance
- `OPENAI_API_KEY` placeholder ready for your API key
- `OPENAI_PROXY_URL` set to the default OpenAI endpoint, customizable for other providers
LobeChat's lite mode stores conversations in your browser's local storage, so no database or persistent volumes are needed.
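For reference, the preset boils down to configuration you could reproduce anywhere Docker runs. A minimal sketch of the equivalent `docker run` invocation; the access code and API key values below are placeholders you would replace with your own:

```shell
# Rough local equivalent of the Sliplane preset (values are placeholders)
docker run -d --name lobe-chat \
  -p 3210:3210 \
  -e ACCESS_CODE="replace-with-a-strong-code" \
  -e OPENAI_API_KEY="sk-your-key-here" \
  -e OPENAI_PROXY_URL="https://api.openai.com/v1" \
  lobehub/lobe-chat
```

On Sliplane you don't run this yourself; the preset sets the same image, port, and environment variables for you.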
Next steps
Once LobeChat is deployed, open the domain Sliplane assigned (e.g. lobe-chat-xxxx.sliplane.app).
Access code
Your instance is protected by an access code. You can find it in the ACCESS_CODE environment variable in your Sliplane service settings. Enter this code when prompted to unlock the chat interface.
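If you'd rather rotate the access code yourself, any sufficiently random string works. One way to generate a 32-character hex code, assuming a Unix-like system with the standard `od` and `tr` utilities:

```shell
# Generate a 32-character hex string suitable for ACCESS_CODE
head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n'
```

Paste the result into the `ACCESS_CODE` environment variable and redeploy to change the code.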
Adding AI providers
LobeChat supports a wide range of AI providers out of the box. After logging in, go to Settings > Language Model to configure your providers:
- OpenAI: Already pre-configured via the `OPENAI_API_KEY` environment variable. Replace the placeholder value in your Sliplane service settings with your actual key.
- Anthropic Claude: Add `ANTHROPIC_API_KEY` as an environment variable.
- Google Gemini: Add `GOOGLE_API_KEY` as an environment variable.
- Local models: Point to a self-hosted Ollama instance using `OPENAI_PROXY_URL`.
You can also configure providers directly in the LobeChat settings UI without environment variables.
Environment variables
Here are some useful environment variables you can set in your Sliplane service settings:
| Variable | Description | Default |
|---|---|---|
| `OPENAI_API_KEY` | Your OpenAI API key | (none) |
| `OPENAI_PROXY_URL` | Custom OpenAI-compatible endpoint | `https://api.openai.com/v1` |
| `ACCESS_CODE` | Password to protect your instance | (auto-generated) |
| `ANTHROPIC_API_KEY` | Your Anthropic API key | (none) |
| `GOOGLE_API_KEY` | Your Google Gemini API key | (none) |
Check the LobeChat environment variables docs for the full list of supported options.
Logging
LobeChat logs to STDOUT by default, which works perfectly with Sliplane's built-in log viewer. For general Docker log tips, check out our post on how to use Docker logs.
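If you're running the image yourself rather than on Sliplane, the same STDOUT stream is available via `docker logs`. A sketch, assuming you named the container `lobe-chat` when you started it:

```shell
# Follow LobeChat's log stream, starting from the last 100 lines
docker logs -f --tail 100 lobe-chat
```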
Cost comparison
You can also self-host LobeChat with other cloud providers. Here is a pricing comparison for the most common ones:
| Provider | vCPU | RAM | Disk | Monthly Cost | Note |
|---|---|---|---|---|---|
| Sliplane | 2 | 2 GB | 40 GB | €9 (~$10.65) | Flat rate, 1 TB bandwidth, SSL included |
| Fly.io | 2 | 2 GB | 40 GB | ~$18 | Disk and bandwidth billed separately |
| Render | 1 | 2 GB | 40 GB | ~$35 | 100 GB bandwidth, Disk billed separately |
| Railway | 2 | 2 GB | 40 GB | ~$67 + $20 plan | Pro plan floor, usage-based, bandwidth billed separately |
Here's how these numbers were calculated, assuming an always-on instance running 730 hrs/month:
- Sliplane: flat €9/month for the Base server. Unlimited services on the same server, 1 TB egress and SSL included.
- Fly.io: `shared-cpu-2x` (2 GB) = $11.83/mo + 40 GB volume × $0.15/GB = $6 -> ~$17.83/mo. Egress billed separately ($0.02/GB in EU).
- Render: closest match is Standard ($25, 1 vCPU / 2 GB) plus 40 GB disk × $0.25/GB = $10 -> ~$35/mo. Stepping up to Pro (2 vCPU / 4 GB) costs $85/mo + disk.
- Railway (Pro plan): CPU 2 × $0.00000772/s × 2,628,000 s = $40.57; RAM 2 × $0.00000386/s × 2,628,000 s = $20.29; volume 40 × $0.00000006/s × 2,628,000 s = $6.31 -> ~$67/mo compute, plus the $20/mo Pro plan floor and $0.05/GB egress.
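The Railway line items above can be reproduced with a quick calculation using the per-second rates just quoted (the CPU figure rounds to $40.58 rather than the truncated $40.57 in the list):

```shell
# Railway Pro usage estimate for 730 hrs (2,628,000 s) of always-on runtime
seconds=2628000
cpu=$(awk -v s="$seconds" 'BEGIN { printf "%.2f", 2 * 0.00000772 * s }')
ram=$(awk -v s="$seconds" 'BEGIN { printf "%.2f", 2 * 0.00000386 * s }')
vol=$(awk -v s="$seconds" 'BEGIN { printf "%.2f", 40 * 0.00000006 * s }')
echo "CPU \$$cpu + RAM \$$ram + volume \$$vol per month"
```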
Bandwidth costs can add up fast on usage-based providers. Use our bandwidth cost comparison tool to see what your egress would cost on each platform.
FAQ
What AI models does LobeChat support?
LobeChat supports a wide range of AI providers including OpenAI (GPT-4o, o1, o3), Anthropic (Claude), Google (Gemini), Mistral, Groq, Ollama for local models, and many more. You can configure multiple providers at the same time and switch between them in the chat.
Can I use LobeChat with local models?
Yes. You can deploy an Ollama instance on the same Sliplane server and point LobeChat to it using the OPENAI_PROXY_URL environment variable. Use the internal service URL (e.g. http://ollama.internal:11434/v1) so traffic stays on the private network.
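As a sketch, running Ollama as a sibling service with plain Docker could look like this (the model name is just an example, and the internal hostname follows the `ollama.internal` example above; use whatever internal URL Sliplane assigns your service):

```shell
# Run Ollama as a second service and pull a model to serve
docker run -d --name ollama -p 11434:11434 ollama/ollama
docker exec ollama ollama pull llama3
# Then, in LobeChat's environment variables:
#   OPENAI_PROXY_URL=http://ollama.internal:11434/v1
```

Ollama exposes an OpenAI-compatible API under `/v1`, which is why pointing `OPENAI_PROXY_URL` at it works.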
How do I update LobeChat?
Change the image tag in your Sliplane service settings and redeploy. Check Docker Hub for the latest stable version.
Are there alternatives to LobeChat?
Yes, popular alternatives include Open WebUI (great for Ollama integration), AnythingLLM (all-in-one RAG and chat), and LibreChat (another multi-provider chat UI). Check out our post on 5 awesome Open WebUI alternatives for more options.
Where are my conversations stored?
In LobeChat's lite mode (which this preset uses), all conversations are stored in your browser's local storage. This means your chat history stays entirely on your device. If you need server-side storage and multi-device sync, you can upgrade to LobeChat's database mode by adding a PostgreSQL service and configuring the database connection.