Compar:IA is a tool for blindly comparing different conversational AI models to raise awareness about the challenges of generative AI (bias, environmental impact) and to build up French-language preference datasets.
🌐 comparia.beta.gouv.fr · 📚 About · 🚀 Description of the state startup
We rely heavily on OpenRouter, so if you want to test with real providers, set the OPENROUTER_API_KEY environment variable; the configured models are listed in utils/models/generated_models.json.
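As a minimal sketch of how startup could fail fast on a missing key, the helper below checks OPENROUTER_API_KEY and reads the configured model ids. The `id` field and the list layout of generated_models.json are assumptions, not the file's documented schema:

```python
# Hypothetical helper: refuse to start without OPENROUTER_API_KEY and
# return the model ids configured in generated_models.json.
# ASSUMPTION: the file is a JSON list of objects with an "id" key.
import json
import os
from pathlib import Path


def load_model_ids(path="utils/models/generated_models.json"):
    if not os.environ.get("OPENROUTER_API_KEY"):
        raise RuntimeError("OPENROUTER_API_KEY is not set")
    data = json.loads(Path(path).read_text())
    return [m["id"] for m in data]
```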
docker compose -f docker/docker-compose.yml up backend frontend
The easiest way to run Languia is using the provided Makefile:
# Install all dependencies (backend + frontend)
make install
# Run both backend and frontend in development mode
make dev
This will start:
Backend:
Install uv: curl -LsSf https://astral.sh/uv/install.sh | sh
uv sync
uv run uvicorn main:app --reload --timeout-graceful-shutdown 1
Frontend:
cd frontend/
yarn install
npx vite dev
Controller (optional dashboard):
uv run uvicorn controller:app --reload --port 21001
make help # Display all available commands
make install # Install all dependencies
make install-backend # Install backend dependencies only
make install-frontend # Install frontend dependencies only
make dev # Run backend + frontend (parallel)
make dev-backend # Run backend only
make dev-frontend # Run frontend only
make dev-controller # Run the dashboard controller
make build-frontend # Build frontend for production
make test-backend # Run backend tests
make test-frontend # Run frontend tests
make clean # Clean generated files
make db-schema-init # Initialize the database schema
make db-migrate # Apply migrations
make models-build # Generate model files from JSON sources
make models-maintenance # Launch the model maintenance script
make dataset-export # Export datasets to HuggingFace
Prerequisites: DATABASE_URI environment variable configured
# Initialize database schema
psql $DATABASE_URI -f utils/schemas/conversations.sql
psql $DATABASE_URI -f utils/schemas/votes.sql
psql $DATABASE_URI -f utils/schemas/reactions.sql
psql $DATABASE_URI -f utils/schemas/logs.sql
# Apply database migrations
psql $DATABASE_URI -f utils/schemas/migrations/conversations_13102025.sql
psql $DATABASE_URI -f utils/schemas/migrations/reactions_13102025.sql
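Migration files carry a date suffix (e.g. conversations_13102025.sql). Assuming that suffix is DDMMYYYY, a small helper can order them chronologically before applying them; this is an illustrative sketch, not part of the project's tooling:

```python
# Hypothetical helper: sort migration files like "conversations_13102025.sql"
# by their date suffix, ASSUMED to be DDMMYYYY.
from datetime import datetime


def migration_date(filename):
    # "conversations_13102025.sql" -> datetime(2025, 10, 13)
    stem = filename.rsplit(".", 1)[0]
    suffix = stem.rsplit("_", 1)[1]
    return datetime.strptime(suffix, "%d%m%Y")


def ordered(migrations):
    return sorted(migrations, key=migration_date)
```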
These commands generate utils/models/generated_models.json and update the translations in frontend/locales/messages/fr.json.
# Generate model files from JSON sources
uv run python utils/models/build_models.py
# Run the models maintenance script
uv run python utils/models/maintenance.py
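To illustrate the kind of step a build script like this performs, here is a sketch that merges per-model JSON source files into one generated file. The directory layout, file naming, and merge logic are assumptions, not build_models.py's actual behavior:

```python
# Illustrative sketch: concatenate JSON model sources into one generated file.
# ASSUMPTION: each source file holds a JSON list of model entries, and the
# sources directory contains only source files (not the generated output).
import json
from pathlib import Path


def build(sources_dir, out_file):
    models = []
    for src in sorted(Path(sources_dir).glob("*.json")):
        models.extend(json.loads(src.read_text()))
    Path(out_file).write_text(json.dumps(models, ensure_ascii=False, indent=2))
    return len(models)
```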
Prerequisites: DATABASE_URI and HF_PUSH_DATASET_KEY environment variables configured
# Export datasets to HuggingFace
uv run python utils/export_dataset.py
# Install ranking_methods project dependencies (via uv)
cd utils/ranking_methods && uv pip install -e .
For more details, consult utils/ranking_methods/README.md and the notebooks in utils/ranking_methods/notebooks/.
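To give a flavor of the ranking methods involved, here is a minimal Elo update over a single pairwise preference vote. This is a textbook sketch, not the project's implementation (initial rating and K-factor are arbitrary choices here):

```python
# Minimal Elo update for one pairwise vote: the winner gains rating
# proportional to how unexpected the win was, the loser loses the same amount.
def elo_update(ratings, winner, loser, k=32.0):
    ra = ratings.get(winner, 1000.0)
    rb = ratings.get(loser, 1000.0)
    # expected score of the winner under the logistic Elo model
    expected_win = 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))
    ratings[winner] = ra + k * (1.0 - expected_win)
    ratings[loser] = rb - k * (1.0 - expected_win)
    return ratings
```

Applied over a stream of votes, this yields a leaderboard; the maximum-likelihood methods in utils/ranking_methods fit all votes jointly instead of sequentially.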
frontend/: main frontend code.
The frontend is built with SvelteKit and runs on port 5173 in dev, which is Vite's default.
main.py: the Python file for the main FastAPI app
languia: backend code.
Most of the Gradio code is split between languia/block_arena.py and languia/listeners.py with languia/config.py for config.
The backend is a gradio.Blocks app mounted within a FastAPI app; it runs on port 8000 by default.
demo.py: the Python file for Gradio's gr.Blocks configuration
docker/: Docker config
utils/: utilities for models generation and maintenance, ranking methods (Elo, maximum likelihood), database schemas, and dataset export to HuggingFace
controller.py: a simplistic dashboard
You can run it with FastAPI: uv run uvicorn controller:app --reload --port 21001
templates/: Jinja2 templates for the dashboard
pyproject.toml: Python requirements
sonar-project.properties: SonarQube configuration
We want to get rid of that Gradio code by replacing it with async FastAPI code and Redis-based session handling.
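To make the target concrete, here is one possible shape for Redis-backed session state. Everything here (class name, key prefix, stored fields) is a hypothetical sketch of the planned refactor, not existing project code; `client` can be a redis.Redis instance with decode_responses=True, or any object with the same get/set interface:

```python
# Hypothetical sketch of Redis-backed session handling for the planned
# FastAPI refactor. Sessions are JSON blobs keyed by a random id with a TTL.
import json
import uuid


class SessionStore:
    def __init__(self, client, prefix="session:", ttl=3600):
        # `client` needs get(key) and set(key, value, ex=seconds),
        # matching redis-py's Redis API.
        self.client = client
        self.prefix = prefix
        self.ttl = ttl

    def create(self, data):
        sid = uuid.uuid4().hex
        self.client.set(self.prefix + sid, json.dumps(data), ex=self.ttl)
        return sid

    def load(self, sid):
        raw = self.client.get(self.prefix + sid)
        return json.loads(raw) if raw is not None else None
```

In a FastAPI app, a dependency could construct the store once from a shared Redis client and inject it into route handlers, replacing the per-session state Gradio currently keeps in memory.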