SportSQL now supports multiple LLM providers: Gemini remains the default, and OpenAI GPT is available as an alternative.
# No changes needed - works as before
python app.py --server local
python evaluate_pipeline.py
# Add --llm openai to any command
python app.py --server local --llm openai
python evaluate_pipeline.py --llm openai
python test_evaluation.py --llm openai
# .env file (Gemini)
API_KEY=your_gemini_api_key
GEMINI_MODEL=gemini-2.0-flash
# .env file (OpenAI)
OPENAI_API_KEY=your_openai_api_key
GPT_MODEL=gpt-4o-mini # or gpt-4o
# Force a specific provider globally
LLM_PROVIDER=openai # or gemini
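To make the precedence between these settings concrete, here is a minimal sketch of how provider selection could work. It is an illustration, not the actual `llm_wrapper.py` code: `resolve_provider` and `resolve_model` are hypothetical helpers, and the assumed order is the CLI `--llm` value first, then `LLM_PROVIDER`, then the Gemini default.

```python
# Sketch only: how a wrapper COULD resolve the provider from these settings.
# resolve_provider / resolve_model are hypothetical names, not the real
# llm_wrapper.py internals; the env var names match the configuration above.
import os


def resolve_provider(cli_choice=None):
    """CLI --llm value wins, then LLM_PROVIDER, then the Gemini default."""
    if cli_choice:
        return 'openai' if cli_choice in ('openai', 'gpt') else 'gemini'
    env_choice = os.getenv('LLM_PROVIDER', '').lower()
    return env_choice if env_choice in ('openai', 'gemini') else 'gemini'


def resolve_model(provider):
    """Read the provider-specific model name, falling back to the documented defaults."""
    if provider == 'openai':
        return os.getenv('GPT_MODEL', 'gpt-4o-mini')
    return os.getenv('GEMINI_MODEL', 'gemini-2.0-flash')


print(resolve_provider())  # 'gemini' unless LLM_PROVIDER=openai is set
```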
# Use Gemini (default)
python evaluate_pipeline.py
# Use OpenAI GPT
python evaluate_pipeline.py --llm openai
python evaluate_pipeline.py --llm gpt # same as openai
# Test specific models
python llm_wrapper.py --llm openai --model gpt-4o
python llm_wrapper.py --llm gemini --model gemini-1.5-pro
from llm_wrapper import LLMWrapper, get_global_llm
# Use global instance (respects CLI args)
llm = get_global_llm()
response = llm.generate_content("Your prompt here")
# Create specific provider
llm_gpt = LLMWrapper(provider='openai', model='gpt-4o')
response = llm_gpt.generate_content("Your prompt here")
# Quick generation
from llm_wrapper import generate_with_llm
response = generate_with_llm("Your prompt", provider='openai')
| Feature | Gemini | OpenAI GPT |
|---|---|---|
| Default | ✅ Yes | ❌ No |
| Cost | 💰 Lower | 💰💰 Higher |
| Rate Limits | ⚠️ Stricter | ✅ More generous |
| SQL Quality | ✅ Good | ✅ Excellent |
| Reliability | ⚠️ Occasional issues | ✅ Very reliable |
| Speed | ✅ Fast | ✅ Fast |
pip install "openai>=1.0.0"
pip install -r requirements.txt
python app.py --server local --llm openai
python evaluate_pipeline.py --llm openai
python test_evaluation.py --llm openai
python update_gt_sql.py --llm openai
python llm_wrapper.py --llm openai --prompt "Test prompt"
# Error: "API key not found"
# Solution: Set the appropriate environment variable
export OPENAI_API_KEY=your_key_here
# or
export API_KEY=your_gemini_key_here
# Gemini rate limits hit
python your_script.py --llm openai # Switch to OpenAI
# OpenAI rate limits hit
python your_script.py --llm gemini # Switch to Gemini
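The same switch can be automated in code. The sketch below builds on the `generate_with_llm` helper shown earlier; the `generate_with_fallback` function and its catch-all exception handling are illustrative assumptions, not part of the shipped wrapper.

```python
# Sketch of a programmatic fallback between providers. generate_with_llm is the
# documented helper; generate_with_fallback and the broad except clause are
# assumptions for illustration (the wrapper's real error types aren't documented).
from llm_wrapper import generate_with_llm


def generate_with_fallback(prompt, primary='gemini', secondary='openai'):
    try:
        return generate_with_llm(prompt, provider=primary)
    except Exception as exc:  # e.g. a rate-limit or quota error from the primary provider
        print(f"{primary} failed ({exc}); retrying with {secondary}")
        return generate_with_llm(prompt, provider=secondary)


sql = generate_with_fallback("How many goals has Haaland scored?")
print(sql)
```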
# Missing google-generativeai
pip install google-generativeai
# Missing openai
pip install "openai>=1.0.0"
python evaluate_pipeline.py # Uses Gemini by default
LLM_PROVIDER=openai python app.py --server remote
# If Gemini hits limits, switch to OpenAI
python evaluate_pipeline.py --llm openai
# Use cheaper models
export GPT_MODEL=gpt-4o-mini
export GEMINI_MODEL=gemini-1.5-flash
The wrapper maintains full backward compatibility: existing commands and imports keep working unchanged and default to Gemini.
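As a minimal sketch of what that means in practice, assuming `gemini_api.generate_sql` (used elsewhere in this guide) now delegates to the shared wrapper internally, old call sites run unmodified:

```python
# Existing call sites need no edits: the old gemini_api entry point keeps
# answering, and the provider is resolved by the wrapper (Gemini by default).
from gemini_api import generate_sql  # pre-existing SportSQL interface

sql = generate_sql("How many goals has Haaland scored?")  # no --llm, no new imports
print(sql)
```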
# Test Gemini
python llm_wrapper.py --llm gemini --prompt "SELECT COUNT(*) FROM players"
# Test OpenAI
python llm_wrapper.py --llm openai --prompt "SELECT COUNT(*) FROM players"
# Test with actual SportSQL pipeline
python -c "
import sys
sys.argv.extend(['--server', 'local', '--llm', 'openai'])
from gemini_api import generate_sql
print(generate_sql('How many goals has Haaland scored?'))
"
This unified approach gives you the flexibility to choose the best LLM for each task while maintaining full compatibility with existing code!