Everything you need to ship
Social Inference Engine ships with comprehensive documentation covering every aspect of deployment, configuration, and customisation. All docs live in the GitHub repository.
Quick Start
git clone https://github.com/Shengboj0324/Inference-Engine.git && cd Inference-Engine
cp .env.example .env && nano .env # add your LLM provider API key
docker compose up
curl http://localhost:8000/health
Reference Documentation
Getting Started
Deploy Social Inference Engine locally in under 5 minutes with Docker Compose. No cloud account required.
Architecture
System design and component relationships — Bloom filter deduplication, reservoir sampling, LLM routing, and pgvector retrieval.
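The blurb above names reservoir sampling without detail. For reference, the classic Algorithm R keeps a uniform random sample of k items from a stream of unknown length in O(k) memory; this is a generic sketch, not the engine's actual implementation (function and parameter names are illustrative):

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Uniform k-item sample of a stream of unknown length (Algorithm R)."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            reservoir.append(item)
        else:
            # Item i+1 replaces a random slot with probability k / (i + 1).
            j = rng.randrange(i + 1)
            if j < k:
                reservoir[j] = item
    return reservoir
```

Each item in the stream ends up in the final sample with equal probability k/n, which is what makes it suitable for sampling from an unbounded social feed.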
Deployment Guide
Step-by-step instructions for Docker Compose, bare-metal (macOS/Ubuntu), and Windows WSL2 installations.
API Reference
Full OpenAPI 3.1 reference for all endpoints. Browse interactively at http://localhost:8000/docs when the server is running.
LLM Configuration
Configure OpenAI, Anthropic, Ollama, or vLLM providers. Set up two-tier routing for cost efficiency.
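The actual routing heuristics live in the repo's configuration; as a minimal sketch of the two-tier idea, assume a cheap local model handles simple requests and an expensive hosted model handles the rest. The threshold, provider names, and model names below are placeholders, not the engine's real defaults:

```python
from dataclasses import dataclass

@dataclass
class Route:
    provider: str
    model: str

# Hypothetical tiers for illustration only.
CHEAP = Route("ollama", "llama3.1:8b")    # tier 1: local, low cost
STRONG = Route("openai", "gpt-4o")        # tier 2: hosted, higher quality

def route(prompt: str, *, complexity_threshold: int = 500) -> Route:
    """Send short/simple prompts to tier 1; escalate the rest to tier 2."""
    return CHEAP if len(prompt) < complexity_threshold else STRONG
```

A real router would score complexity with something richer than prompt length, but the cost-efficiency argument is the same: most traffic never reaches the expensive tier.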
Training Guide
Calibration training and the expected dataset format: how to update temperature scalars with your own analyst feedback data.
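The guide's details are in the repo; in the standard sense, temperature scaling fits a single scalar T that rescales logits to minimise negative log-likelihood on held-out feedback data. A grid-search sketch under that assumption (function names are illustrative, not the repo's API):

```python
import math

def nll(logits, labels, T):
    """Mean negative log-likelihood of softmax(logits / T)."""
    total = 0.0
    for row, y in zip(logits, labels):
        scaled = [z / T for z in row]
        m = max(scaled)  # subtract the max for numerical stability
        logsumexp = m + math.log(sum(math.exp(z - m) for z in scaled))
        total += logsumexp - scaled[y]
    return total / len(logits)

def fit_temperature(logits, labels, grid=None):
    """Pick the temperature that minimises NLL on held-out feedback."""
    grid = grid or [0.05 * i for i in range(1, 101)]  # 0.05 .. 5.0
    return min(grid, key=lambda T: nll(logits, labels, T))
```

An overconfident model (confident logits, some of them wrong) fits T > 1, softening its probabilities without changing which class it predicts.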
All documentation lives in the repository
Docs are maintained alongside the source code. Open a pull request to improve them.
View on GitHub