Troubleshooting Guide
This guide covers common issues and solutions for the Review Bot Automator, including LLM provider setup, privacy verification, and general usage problems.
GitHub Authentication Issues
“Authentication failed” Error
Problem: GitHub API authentication fails.
Symptoms:
Error: Authentication failed
HTTP 401: Unauthorized
Solutions:
Verify token is set:
```bash
echo $GITHUB_PERSONAL_ACCESS_TOKEN  # Should display your token (starts with ghp_)
```
Check token permissions:
Token must have the `repo` scope (full control of private repositories)
For organization repos, the token also needs the `read:org` scope
Regenerate token at: https://github.com/settings/tokens
Set token correctly:
```bash
export GITHUB_PERSONAL_ACCESS_TOKEN="ghp_your_token_here"
# Or add to ~/.bashrc or ~/.zshrc for persistence
```
Try alternative token environment variable:
```bash
export GITHUB_TOKEN="ghp_your_token_here"
# GITHUB_TOKEN is supported for backward compatibility
```
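If the token still fails, you can test it against the GitHub API directly, independent of this tool. A valid token returns HTTP 200 from the `/user` endpoint; a bad or expired one returns 401:

```bash
# Quick sanity check: 200 = valid token, 401 = bad/expired token
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer $GITHUB_PERSONAL_ACCESS_TOKEN" \
  https://api.github.com/user
```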
“Repository not found” Error
Problem: Cannot access repository.
Symptoms:
Error: Repository not found
HTTP 404: Not Found
Solutions:
Verify repository details:
```bash
# Check repository exists and is accessible
gh repo view OWNER/REPO
```
Check token scopes:
Private repos require the `repo` scope
Organization repos require the `read:org` scope
Verify owner and repo names:
Owner is case-sensitive
Repository name must match exactly
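Note that GitHub returns 404 (not 403) for private repositories the token cannot see, so a 404 here can also mean a missing `repo` scope. A direct API call makes this easy to check:

```bash
# 200 = accessible, 404 = missing repo or insufficient token scope
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer $GITHUB_PERSONAL_ACCESS_TOKEN" \
  https://api.github.com/repos/OWNER/REPO
```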
LLM Provider Issues
OpenAI API
Authentication Failures
Problem: OpenAI API key rejected.
Symptoms:
Error: Incorrect API key provided
AuthenticationError: Invalid API key
Solutions:
Verify API key:
```bash
echo $CR_LLM_API_KEY  # Should start with sk-
```
Generate new API key:
Visit: https://platform.openai.com/api-keys
Create new secret key
Copy and set immediately (can’t view later)
Set API key:
```bash
export CR_LLM_ENABLED="true"
export CR_LLM_PROVIDER="openai"
export CR_LLM_API_KEY="sk-..."
```
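To confirm the key works outside this tool, you can list models directly from the OpenAI API; a valid key returns a JSON model list, an invalid one returns a 401 error:

```bash
# Returns a JSON model list on success, an authentication error otherwise
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $CR_LLM_API_KEY" | head -n 20
```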
Rate Limiting
Problem: OpenAI API rate limits exceeded.
Symptoms:
Error: Rate limit exceeded
RateLimitError: You exceeded your current quota
Solutions:
Check usage limits:
Verify you have available credits at: https://platform.openai.com/usage
Reduce request rate:
The resolver has automatic retry with exponential backoff
Wait a few minutes and try again
Upgrade OpenAI account:
Add payment method if on free tier
Increase usage limits at: https://platform.openai.com/account/billing
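The built-in retry already backs off exponentially, but if you script around the tool, the same pattern is easy to reproduce. A minimal sketch (the delays and attempt count here are illustrative, not the resolver's actual values):

```bash
# Retry with exponential backoff: wait 2s, 4s, 8s, 16s, 32s between attempts
for attempt in 1 2 3 4 5; do
  pr-resolve apply 123 && break
  echo "Attempt $attempt failed; backing off $((2 ** attempt))s" >&2
  sleep $((2 ** attempt))
done
```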
Anthropic API
Authentication Failures
Problem: Anthropic API key rejected.
Symptoms:
Error: Invalid API key
AuthenticationError: x-api-key header is invalid
Solutions:
Verify API key:
```bash
echo $CR_LLM_API_KEY  # Should start with sk-ant-
```
Generate new API key:
Visit: https://console.anthropic.com/settings/keys
Create new key
Copy and set immediately
Set API key:
```bash
export CR_LLM_ENABLED="true"
export CR_LLM_PROVIDER="anthropic"
export CR_LLM_API_KEY="sk-ant-..."
```
Model Not Found
Problem: Specified model doesn’t exist or isn’t available.
Symptoms:
Error: model: claude-sonnet-4-5 does not exist
Solutions:
Use correct model name:
```bash
# Correct model names (as of Nov 2025):
export CR_LLM_MODEL="claude-sonnet-4-5"  # Recommended (alias for claude-sonnet-4-5-20250929)
export CR_LLM_MODEL="claude-haiku-4-5"   # Budget option
```
Check available models:
See Anthropic’s model documentation for current names and availability: https://docs.anthropic.com/en/docs/about-claude/models
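You can also query the models endpoint directly; it doubles as an authentication check and lists the exact model IDs your key can use:

```bash
# Lists model IDs available to your key (the anthropic-version header is required)
curl -s https://api.anthropic.com/v1/models \
  -H "x-api-key: $CR_LLM_API_KEY" \
  -H "anthropic-version: 2023-06-01" | head -n 20
```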
Claude CLI
Command Not Found
Problem: claude command not available.
Symptoms:
bash: claude: command not found
Solutions:
Install Claude CLI:
```bash
npm install -g @anthropic-ai/claude-code
```
Verify installation:
```bash
claude --version
```
Check PATH:
```bash
which claude  # Should show path to claude binary
```
Authentication Required
Problem: Claude CLI not authenticated.
Symptoms:
Error: Not authenticated. Please run 'claude auth login'
Solutions:
Authenticate:
```bash
claude auth login
# Follow interactive prompts
```
Verify authentication:
```bash
claude auth status
```
Set provider:
```bash
export CR_LLM_ENABLED="true"
export CR_LLM_PROVIDER="claude-cli"
# No API key needed - uses CLI authentication
```
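Putting the three checks together, a small pre-flight snippet can guard scripts that rely on the claude-cli provider. This is a sketch built from the commands above; it assumes `claude auth status` exits nonzero when unauthenticated:

```bash
# Fail early if the CLI is missing, and log in if not yet authenticated
if ! command -v claude >/dev/null 2>&1; then
  echo "claude CLI not installed: npm install -g @anthropic-ai/claude-code" >&2
  exit 1
fi
claude auth status >/dev/null 2>&1 || claude auth login
export CR_LLM_ENABLED="true"
export CR_LLM_PROVIDER="claude-cli"
```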
Codex CLI
Command Not Found
Problem: codex command not available.
Symptoms:
bash: codex: command not found
Solutions:
Install GitHub Copilot CLI (standalone):
```bash
# Requires Node.js and npm
npm install -g @github/copilot
```
Note: The old `gh-copilot` extension for GitHub CLI was deprecated and stopped working on October 25, 2025. Use the standalone npm package instead.
Verify installation:
```bash
github-copilot --version
```
Set provider:
```bash
export CR_LLM_ENABLED="true"
export CR_LLM_PROVIDER="codex-cli"
# Authenticate with: github-copilot auth
```
Copilot Subscription Required
Problem: GitHub Copilot subscription not active.
Symptoms:
Error: GitHub Copilot subscription required
Solutions:
Subscribe to GitHub Copilot:
Individual: https://github.com/settings/copilot
Organization: Contact your GitHub admin
Verify subscription:
```bash
github-copilot auth  # The deprecated `gh copilot` extension commands no longer work
```
Ollama
Connection Refused
Problem: Cannot connect to Ollama server.
Symptoms:
Error: Connection refused
Failed to connect to http://localhost:11434
Solutions:
Start Ollama service:
```bash
# macOS/Linux
ollama serve

# Or check if already running:
curl http://localhost:11434/api/tags
```
Verify Ollama installation:
```bash
ollama --version
```
Install Ollama if missing:
```bash
# Linux/macOS
curl -fsSL https://ollama.ai/install.sh | sh

# macOS (alternative)
brew install ollama

# Windows
# Download from https://ollama.ai/download
```
Check custom URL:
```bash
export OLLAMA_BASE_URL="http://localhost:11434"
```
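These checks combine into a one-liner you can drop into scripts; it respects a custom `OLLAMA_BASE_URL` if one is set, falling back to the default port otherwise:

```bash
# Reachability check: succeeds silently if Ollama answers, otherwise prints a hint
curl -fsS "${OLLAMA_BASE_URL:-http://localhost:11434}/api/tags" >/dev/null \
  && echo "Ollama is reachable" \
  || echo "Ollama is not responding; try 'ollama serve'" >&2
```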
Model Not Found
Problem: Requested model not downloaded.
Symptoms:
Error: model 'llama3.3:70b' not found
Solutions:
Pull model:
```bash
ollama pull llama3.3:70b
```
List available models:
```bash
ollama list
```
Use auto-download script:
```bash
./scripts/download_ollama_models.sh
# Interactive script to download recommended models
```
Recommended models:
```bash
# Best performance (requires 40GB+ RAM)
ollama pull llama3.3:70b

# Balanced (requires 8GB+ RAM)
ollama pull llama3.1:8b

# Lightweight (requires 4GB+ RAM)
ollama pull llama3.2:3b
```
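If you automate model setup, a pull-if-missing guard avoids re-downloading models that are already local (a sketch; set `MODEL` to whatever name you need):

```bash
# Pull the model only when `ollama list` does not already show it
MODEL="llama3.1:8b"
ollama list | grep -q "$MODEL" || ollama pull "$MODEL"
```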
GPU Not Detected
Problem: Ollama not using GPU acceleration.
Symptoms:
Warning: GPU not detected, using CPU inference
Inference is very slow
Solutions:
Verify GPU availability:
```bash
# NVIDIA
nvidia-smi

# AMD (Linux)
rocm-smi

# Apple Silicon
system_profiler SPDisplaysDataType | grep "Chipset Model"
```
Install GPU drivers:
NVIDIA: Install CUDA toolkit and drivers
AMD: Install ROCm
Apple: GPU support built-in (macOS 12+)
Restart Ollama:
```bash
# Stop Ollama
pkill ollama

# Start again (should detect GPU)
ollama serve
```
Check GPU detection script:
```bash
# Run GPU detection test
python -c "from review_bot_automator.llm.providers.gpu_detector import GPUDetector; print(GPUDetector.detect_gpu('http://localhost:11434'))"
```
Circuit Breaker Issues
CircuitBreakerOpen Error
Problem: Requests blocked by circuit breaker.
Symptoms:
CircuitBreakerOpen: Circuit breaker is open, retry in 45.2s
Causes:
5+ consecutive LLM API failures triggered the circuit
Provider experiencing outage or rate limiting
Network connectivity issues
Solutions:
Wait for cooldown:
```bash
# Default cooldown is 60 seconds
# Check remaining time in error message
```
Check provider status:
```bash
# OpenAI
curl https://status.openai.com/api/v2/status.json

# Anthropic
curl https://status.anthropic.com/api/v2/status.json
```
Review logs for root cause:
```bash
pr-resolve apply 123 --log-level DEBUG
# Look for "Circuit breaker opening after X consecutive failures"
```
Adjust threshold if too sensitive:
```bash
export CR_LLM_CIRCUIT_BREAKER_THRESHOLD=10   # Default: 5
export CR_LLM_CIRCUIT_BREAKER_COOLDOWN=30.0  # Default: 60.0
```
Disable circuit breaker (not recommended):
```bash
export CR_LLM_CIRCUIT_BREAKER_ENABLED=false
```
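For intuition about what these settings control, here is a conceptual bash sketch of the open/cooldown cycle; `make_llm_request` is a hypothetical stand-in for the resolver's internal LLM call, not a real command:

```bash
# Conceptual sketch only -- the real breaker lives inside the resolver
failures=0
threshold="${CR_LLM_CIRCUIT_BREAKER_THRESHOLD:-5}"
cooldown="${CR_LLM_CIRCUIT_BREAKER_COOLDOWN:-60}"

make_llm_request() { false; }  # hypothetical stand-in for the real LLM call

while true; do
  if make_llm_request; then
    failures=0                        # any success closes the circuit
  else
    failures=$((failures + 1))        # count consecutive failures
    if [ "$failures" -ge "$threshold" ]; then
      echo "Circuit open; waiting ${cooldown}s" >&2
      sleep "$cooldown"               # block requests during cooldown
      failures=0                      # half-open: allow a retry after cooldown
    fi
  fi
done
```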
Circuit Trips Too Often
Problem: Circuit breaker triggers on transient failures.
Solutions:
Increase threshold:
```yaml
llm:
  circuit_breaker_threshold: 10  # Allow more failures
```
Enable retry first:
```yaml
llm:
  retry_on_rate_limit: true
  retry_max_attempts: 5
```
Check network stability:
```bash
ping api.openai.com
ping api.anthropic.com
```
Cost and Budget Issues
Budget Exceeded Error
Problem: LLM requests blocked due to budget limit.
Symptoms:
ERROR: LLM cost budget exceeded: $5.23 of $5.00 used
Solutions:
Increase budget:
```bash
export CR_LLM_COST_BUDGET=10.0  # Increase from $5 to $10
```
Check current usage:
```bash
pr-resolve apply 123 --llm-enabled --show-metrics
# Review "Total cost" in output
```
Use cheaper model:
```bash
# Anthropic: Use Haiku instead of Sonnet
export CR_LLM_MODEL="claude-haiku-4-5"

# OpenAI: Use mini instead of full GPT-4o
export CR_LLM_MODEL="gpt-4o-mini"
```
Enable caching:
```yaml
llm:
  cache_enabled: true  # Reuse responses for identical prompts
```
Unexpected High Costs
Problem: LLM costs higher than expected.
Solutions:
Enable metrics to track:
```bash
pr-resolve apply 123 --llm-enabled --show-metrics --metrics-output costs.json
```
Review per-provider costs:
```bash
jq '.provider_stats' costs.json
```
Check cache hit rate (see the sketch after this list):
```bash
# Low cache hit rate = more API calls = higher cost
# Target: > 30% cache hit rate
```
Use free providers for development:
```bash
export CR_LLM_PROVIDER=ollama
export CR_LLM_MODEL=qwen2.5-coder:7b
```
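A quick way to read the hit rate out of the metrics file, assuming it exposes cache hit and miss counters (the field names here are hypothetical; inspect your actual costs.json keys first):

```bash
# Hypothetical field names -- check them with: jq 'keys' costs.json
jq '100 * .cache_hits / (.cache_hits + .cache_misses)' costs.json
```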
Budget Warning Not Appearing
Problem: No warning before budget exceeded.
Solutions:
Warning appears at 80% by default:
WARNING: LLM cost budget warning: $4.12 of $5.00 used (82.4%)
Check log level:
```bash
export CR_LOG_LEVEL=INFO  # Warning requires INFO or lower
```
Parallel Processing Issues
Race Conditions or Corrupted Output
Problem: Parallel processing produces inconsistent results.
Symptoms:
Different results on each run
Partial or corrupted output files
Mixed content from different files
Solutions:
Reduce worker count:
```bash
pr-resolve apply 123 --parallel --max-workers 2
```
Disable parallel processing:
```bash
pr-resolve apply 123  # Sequential by default

# Or explicitly:
export CR_PARALLEL=false
```
Check for file conflicts:
```bash
# If multiple comments target same file, conflicts may occur
pr-resolve analyze 123 --log-level DEBUG
```
Workers Exhausted or Hanging
Problem: Parallel workers timeout or hang.
Symptoms:
Warning: Worker timeout waiting for response
Processing stalled at X%
Solutions:
Reduce workers:
```bash
export CR_MAX_WORKERS=2  # Default: 4
```
Check LLM provider health:
```bash
# Test provider directly
curl http://localhost:11434/api/tags  # Ollama
```
Increase timeout:
```bash
# Not configurable via env var - contact support
```
Monitor system resources:
```bash
htop  # Check CPU/memory usage
```
Out of Memory with Parallel Processing
Problem: System runs out of memory.
Solutions:
Reduce workers:
```bash
export CR_MAX_WORKERS=2
```
Use smaller Ollama model:
```bash
export CR_LLM_MODEL=llama3.2:3b  # Instead of 70b
```
Process sequentially:
```bash
export CR_PARALLEL=false
```
Monitor memory usage:
```bash
watch -n 1 free -h
```
Privacy Verification Issues
Privacy Script Fails
Problem: verify_privacy.sh script fails or reports errors.
Symptoms:
Error: Ollama not running
Error: Test PR processing failed
Solutions:
Start Ollama first:
```bash
ollama serve
```
Pull required model:
```bash
ollama pull llama3.3:70b
```
Set environment variables:
```bash
export CR_LLM_ENABLED="true"
export CR_LLM_PROVIDER="ollama"
export CR_LLM_MODEL="llama3.3:70b"
export GITHUB_PERSONAL_ACCESS_TOKEN="ghp_..."
```
Run script with debug:
```bash
bash -x ./scripts/verify_privacy.sh
```
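Beyond the script, you can spot-check for yourself that nothing leaves the machine during a run. One approach (assumes `lsof` is installed and the resolver runs as a Python process) is to watch established connections while processing is in progress; with the Ollama provider, only localhost:11434 should appear:

```bash
# List established connections for python processes during a run;
# anything other than 127.0.0.1:11434 deserves investigation
lsof -i -P -n | grep -i python | grep ESTABLISHED
```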
Unexpected Network Connections
Problem: Privacy script detects connections to third-party LLM vendors.
Symptoms:
ERROR: Detected connection to api.openai.com
Privacy violation detected!
Solutions:
Verify Ollama provider is set:
```bash
echo $CR_LLM_PROVIDER  # Should be "ollama"
```
Check for conflicting environment variables:
```bash
env | grep CR_LLM

# Remove any OpenAI/Anthropic API keys
unset CR_LLM_API_KEY
```
Restart with clean environment:
```bash
# Clear all LLM-related vars
unset CR_LLM_API_KEY
unset CR_LLM_PROVIDER
unset CR_LLM_MODEL

# Set only Ollama vars
export CR_LLM_ENABLED="true"
export CR_LLM_PROVIDER="ollama"
export CR_LLM_MODEL="llama3.3:70b"
```
Performance Issues
Slow Processing
Problem: PR analysis takes too long.
Symptoms:
Processing takes 10+ minutes for small PRs
Timeout errors
Solutions:
Enable parallel processing:
```bash
pr-resolve apply --pr 123 --owner myorg --repo myrepo \
  --parallel --max-workers 8
```
Use faster model:
```bash
# Anthropic (fast with caching)
export CR_LLM_MODEL="claude-haiku-4-5"

# OpenAI (fast)
export CR_LLM_MODEL="gpt-4o-mini"

# Ollama (use smaller model)
export CR_LLM_MODEL="llama3.1:8b"
```
Check network connection:
```bash
# Test GitHub API speed
time gh api user
```
High Memory Usage
Problem: Tool uses too much memory.
Symptoms:
System slowdown
Out of memory errors
Solutions:
Use smaller Ollama model:
```bash
# Instead of llama3.3:70b (40GB RAM)
ollama pull llama3.1:8b  # 8GB RAM
ollama pull llama3.2:3b  # 4GB RAM
```
Reduce parallel workers:
```bash
pr-resolve apply --pr 123 --owner myorg --repo myrepo \
  --parallel --max-workers 2
```
Process in batches:
```bash
# Process specific files instead of whole PR
pr-resolve apply --pr 123 --owner myorg --repo myrepo \
  --mode non-conflicts-only  # Process only non-conflicting first
```
Installation Issues
Dependency Conflicts
Problem: pip reports dependency conflicts.
Symptoms:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed
Solutions:
Use fresh virtual environment:
```bash
python -m venv .venv
source .venv/bin/activate
pip install review-bot-automator
```
Upgrade pip:
```bash
pip install --upgrade pip setuptools wheel
```
Install from source:
```bash
git clone https://github.com/VirtualAgentics/review-bot-automator.git
cd review-bot-automator
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
```
Python Version Incompatible
Problem: Python version too old.
Symptoms:
ERROR: Package requires Python >=3.12
Solutions:
Check Python version:
```bash
python --version  # Should be 3.12 or higher
```
Install Python 3.12+:
```bash
# macOS
brew install python@3.12

# Ubuntu
sudo apt update
sudo apt install python3.12 python3.12-venv

# Or use pyenv
pyenv install 3.12.0
pyenv global 3.12.0
```
General Issues
“No conflicts detected” but comments exist
Problem: Analyzer reports no conflicts but PR has comments.
Solutions:
Check comment format:
Comments must be from CodeRabbit or supported format
Comments must contain change suggestions (not just reviews)
Verify LLM parsing:
```bash
# Enable LLM parsing for better coverage
export CR_LLM_ENABLED="true"
export CR_LLM_PROVIDER="ollama"  # or other provider
```
Check comment line numbers:
Comments must be on lines that exist in current file
Outdated comments may be ignored
Type Checking Errors
Problem: MyPy reports type errors during development.
Solutions:
Run MyPy manually:
```bash
source .venv/bin/activate
mypy src/ --strict
```
Check MyPy configuration:
```bash
grep -A 10 "\[tool.mypy\]" pyproject.toml
```
Update type stubs:
```bash
pip install --upgrade types-requests types-PyYAML
```
Tests Failing
Problem: pytest reports test failures.
Solutions:
Run tests with verbose output:
```bash
source .venv/bin/activate
pytest -v
```
Run specific test:
```bash
pytest tests/test_specific.py::test_function_name -v
```
Check test dependencies:
```bash
pip install -e ".[dev]"
```
Clear pytest cache:
```bash
rm -rf .pytest_cache
pytest --cache-clear
```
Getting Additional Help
If your issue isn’t covered here:
Check existing issues:
Visit: https://github.com/VirtualAgentics/review-bot-automator/issues
Search for similar problems
Create new issue:
Use issue templates
Include error messages, commands run, environment details
Provide minimal reproduction steps
Join discussions:
Visit: https://github.com/VirtualAgentics/review-bot-automator/discussions
Ask questions and share solutions
Review documentation:
See the project README and documentation in the repository
Debug Mode
For any issue, enable debug logging for detailed output:
```bash
# CLI flag
pr-resolve apply --pr 123 --owner myorg --repo myrepo --log-level DEBUG

# Environment variable
export CR_LOG_LEVEL="DEBUG"

# With log file
pr-resolve apply --pr 123 --owner myorg --repo myrepo \
  --log-level DEBUG --log-file debug.log
```
Then share the debug log when reporting issues.