Getting Started
Prerequisites
What you need before running Binexia.
Required
- Linux machine (Ubuntu/Debian) or macOS on the same network as your laptop
- Docker with Docker Compose v2 (see Docker's official install guide)
- Git
- GitHub personal access token — needed to pull private fimula-base packages during build. Generate one at https://github.com/settings/tokens (needs the `repo:read` scope). Containers will not build without it.
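As an illustration, the token could live in the env file referenced later in this guide (the variable name `GITHUB_TOKEN` is an assumption — check the project's compose file for the exact name it expects):

```
# .env.testip.local (illustrative) — token used to pull private
# fimula-base packages during `docker compose build`.
GITHUB_TOKEN=ghp_your_token_here
```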
LLM Provider Keys (optional)
The platform works in mock mode without any LLM keys: dashboard data is hardcoded and AI queries return demo responses. Add keys when you're ready for live AI features.
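The mock-mode fallback can be sketched as a simple key check (the function name `llm_mode` and the exact keys consulted are illustrative, not the platform's actual logic):

```shell
# Illustrative only: report whether live LLM features would be available,
# mirroring the rule that the platform runs in mock mode without keys.
llm_mode() {
  if [ -n "${OPENAI_API_KEY:-}" ] || [ -n "${ANTHROPIC_API_KEY:-}" ]; then
    echo "live"
  else
    echo "mock"
  fi
}

unset OPENAI_API_KEY ANTHROPIC_API_KEY
echo "Mode with no keys: $(llm_mode)"   # prints "mock"
OPENAI_API_KEY=sk-demo
echo "Mode with a key:   $(llm_mode)"   # prints "live"
```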
| Key | Where to get it | Approx. cost | Default for |
|---|---|---|---|
| `OPENAI_API_KEY` | https://platform.openai.com/api-keys | ~$0.15 / 1M tokens | Orchestrator, Knowledge, Context, Document Extraction |
| `ANTHROPIC_API_KEY` | https://console.anthropic.com/settings/keys | ~$3 / 1M tokens | Analytics, Behavioral Scoring, Anomaly Detection, Forecast |
Additional providers
| Key | Where to get it | Notes |
|---|---|---|
| `OPENROUTER_API_KEY` | https://openrouter.ai/keys | Unified gateway — access 100+ models with one key |
| `GLM_API_KEY` | https://open.bigmodel.cn | Zhipu AI. Also works as Anthropic-compatible via custom api_base |
| `DEEPSEEK_API_KEY` | https://platform.deepseek.com | Cheap reasoning models |
| `MISTRAL_API_KEY` | https://console.mistral.ai | European provider |
| `GROQ_API_KEY` | https://console.groq.com | Fast inference (LPU hardware) |
What works without LLM keys
- Full frontend UI (dashboards, admin, knowledge library)
- Login and authentication
- Dashboard widgets with mock data
- All CRUD operations (semantic model, widgets, goals)
- Document upload and storage (MinIO)
What requires LLM keys
- AI query answers (text-to-SQL, RAG)
- Document extraction (structured field synthesis)
- Behavioral scoring explanations
- Anomaly detection analysis
- Revenue forecasting
Hardware
The full stack runs 12 Docker containers simultaneously. Recommended minimums:
- RAM: 8 GB (16 GB preferred)
- Disk: 20 GB free (Docker images + data volumes)
- CPU: 4 cores
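A quick way to sanity-check a host against these minimums, as a sketch using standard POSIX tools (the RAM check reads /proc/meminfo, so it is guarded to run on Linux only):

```shell
# Compare the host against the recommended minimums above.
cores=$(getconf _NPROCESSORS_ONLN)
disk_free_gb=$(df -Pk . | awk 'NR==2 { printf "%d", $4 / 1048576 }')
echo "CPU cores : $cores (recommended: 4+)"
echo "Free disk : ${disk_free_gb} GB (recommended: 20+)"

# /proc/meminfo exists on Linux only; on macOS use `sysctl hw.memsize` instead.
if [ -r /proc/meminfo ]; then
  ram_gb=$(awk '/MemTotal/ { printf "%d", $2 / 1048576 }' /proc/meminfo)
  echo "Total RAM : ${ram_gb} GB (recommended: 8+, 16 preferred)"
fi
```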
Info
Adding LLM keys later is seamless: edit `.env.testip.local` and restart the containers. No rebuild is needed.
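A later switch to live keys might look like this in the env file (key values are placeholders; the restart command in the comment assumes a standard `docker compose` setup):

```
# .env.testip.local — add live provider keys, then restart (no rebuild):
#   docker compose --env-file .env.testip.local up -d --force-recreate
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```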