Troubleshooting Guide

This guide covers common issues and solutions for the Review Bot Automator, including LLM provider setup, privacy verification, and general usage problems.

Table of Contents

  • GitHub Authentication Issues

  • LLM Provider Issues

  • Circuit Breaker Issues

  • Cost and Budget Issues

  • Parallel Processing Issues

  • Privacy Verification Issues

  • Performance Issues

  • Installation Issues

  • General Issues

  • Getting Additional Help

  • Debug Mode

GitHub Authentication Issues

“Authentication failed” Error

Problem: GitHub API authentication fails.

Symptoms:

Error: Authentication failed
HTTP 401: Unauthorized

Solutions:

  1. Verify token is set:

    echo $GITHUB_PERSONAL_ACCESS_TOKEN
    # Should display your token (starts with ghp_)
    
  2. Check token permissions:

    • Token must have repo scope (full control of private repositories)

    • For organization repos, token needs read:org scope

    • Regenerate token at: https://github.com/settings/tokens

  3. Set token correctly:

    export GITHUB_PERSONAL_ACCESS_TOKEN="ghp_your_token_here"
    # Or add to ~/.bashrc or ~/.zshrc for persistence
    
  4. Try alternative token environment variable:

    export GITHUB_TOKEN="ghp_your_token_here"
    # GITHUB_TOKEN is supported for backward compatibility
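
  5. Test the token directly (a quick sanity check against the GitHub API):

    curl -s -H "Authorization: Bearer $GITHUB_PERSONAL_ACCESS_TOKEN" \
      https://api.github.com/user
    # A JSON response containing your "login" means the token authenticates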
    

“Repository not found” Error

Problem: Cannot access repository.

Symptoms:

Error: Repository not found
HTTP 404: Not Found

Solutions:

  1. Verify repository details:

    # Check repository exists and is accessible
    gh repo view OWNER/REPO
    
  2. Check token scopes:

    • Private repos require repo scope

    • Organization repos require read:org scope

  3. Verify owner and repo names:

    • Owner is case-sensitive

    • Repository name must match exactly
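
  4. Inspect the token's scopes (classic tokens report them in a response header):

    curl -sI -H "Authorization: Bearer $GITHUB_PERSONAL_ACCESS_TOKEN" \
      https://api.github.com/repos/OWNER/REPO | grep -i x-oauth-scopes
    # Replace OWNER/REPO with your repository; the header should include "repo".
    # Note: fine-grained tokens do not return this header.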

LLM Provider Issues

OpenAI API

Authentication Failures

Problem: OpenAI API key rejected.

Symptoms:

Error: Incorrect API key provided
AuthenticationError: Invalid API key

Solutions:

  1. Verify API key:

    echo $CR_LLM_API_KEY
    # Should start with sk-
    
  2. Generate new API key:

    • Create a new key at: https://platform.openai.com/api-keys

  3. Set API key:

    export CR_LLM_ENABLED="true"
    export CR_LLM_PROVIDER="openai"
    export CR_LLM_API_KEY="sk-..."
    

Rate Limiting

Problem: OpenAI API rate limits exceeded.

Symptoms:

Error: Rate limit exceeded
RateLimitError: You exceeded your current quota

Solutions:

  1. Check usage limits:

    • Review usage and limits at: https://platform.openai.com/usage

  2. Reduce request rate:

    • The resolver has automatic retry with exponential backoff

    • Wait a few minutes and try again

  3. Upgrade OpenAI account:

    • Add a payment method or raise your usage tier in the OpenAI platform billing settings
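
  4. Script a retry loop if needed (a minimal sketch; adjust the PR number and flags to your run):

    for delay in 5 15 45; do
      pr-resolve apply 123 && break   # success: stop retrying
      echo "Possibly rate limited; retrying in ${delay}s..."
      sleep "$delay"
    done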

Anthropic API

Authentication Failures

Problem: Anthropic API key rejected.

Symptoms:

Error: Invalid API key
AuthenticationError: x-api-key header is invalid

Solutions:

  1. Verify API key:

    echo $CR_LLM_API_KEY
    # Should start with sk-ant-
    
  2. Generate new API key:

    • Create a new key at: https://console.anthropic.com/settings/keys

  3. Set API key:

    export CR_LLM_ENABLED="true"
    export CR_LLM_PROVIDER="anthropic"
    export CR_LLM_API_KEY="sk-ant-..."
    

Model Not Found

Problem: Specified model doesn’t exist or isn’t available.

Symptoms:

Error: model: claude-sonnet-4-5 does not exist

Solutions:

  1. Use correct model name:

    # Correct model names (as of Nov 2025):
    export CR_LLM_MODEL="claude-sonnet-4-5"      # Recommended (aliases claude-sonnet-4-20250514)
    export CR_LLM_MODEL="claude-haiku-4-5"       # Budget option
    
  2. Check available models:

    • See Anthropic's model overview: https://docs.anthropic.com/en/docs/about-claude/models

    • Or list the models your key can access via the API, as sketched below.
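
    A minimal curl call against Anthropic's Models endpoint lists the model IDs your key can use:

    curl -s https://api.anthropic.com/v1/models \
      -H "x-api-key: $CR_LLM_API_KEY" \
      -H "anthropic-version: 2023-06-01"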

Claude CLI

Command Not Found

Problem: claude command not available.

Symptoms:

bash: claude: command not found

Solutions:

  1. Install Claude CLI:

    npm install -g @anthropic-ai/claude-code
    
  2. Verify installation:

    claude --version
    
  3. Check PATH:

    which claude
    # Should show path to claude binary
    

Authentication Required

Problem: Claude CLI not authenticated.

Symptoms:

Error: Not authenticated. Please run 'claude auth login'

Solutions:

  1. Authenticate:

    claude auth login
    # Follow interactive prompts
    
  2. Verify authentication:

    claude auth status
    
  3. Set provider:

    export CR_LLM_ENABLED="true"
    export CR_LLM_PROVIDER="claude-cli"
    # No API key needed - uses CLI authentication
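
  4. Run an end-to-end check (assuming your installed CLI supports one-shot print mode via -p):

    claude -p "Reply with OK"
    # A response confirms the CLI is authenticated and reachable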
    

Codex CLI

Command Not Found

Problem: codex command not available.

Symptoms:

bash: codex: command not found

Solutions:

  1. Install GitHub Copilot CLI (standalone):

    # Requires Node.js and npm
    npm install -g @github/copilot
    

    Note: The old gh-copilot extension for GitHub CLI was deprecated and stopped working on October 25, 2025. Use the standalone npm package instead.

  2. Verify installation:

    copilot --version
    
  3. Set provider:

    export CR_LLM_ENABLED="true"
    export CR_LLM_PROVIDER="codex-cli"
    # Authenticate on first run: the CLI prompts for GitHub login
    

Copilot Subscription Required

Problem: GitHub Copilot subscription not active.

Symptoms:

Error: GitHub Copilot subscription required

Solutions:

  1. Subscribe to GitHub Copilot:

    • Plans and signup: https://github.com/features/copilot

  2. Verify subscription:

    • Check your subscription status at: https://github.com/settings/copilot
    

Ollama

Connection Refused

Problem: Cannot connect to Ollama server.

Symptoms:

Error: Connection refused
Failed to connect to http://localhost:11434

Solutions:

  1. Start Ollama service:

    # macOS/Linux
    ollama serve
    
    # Or check if already running:
    curl http://localhost:11434/api/tags
    
  2. Verify Ollama installation:

    ollama --version
    
  3. Install Ollama if missing:

    # Linux/macOS
    curl -fsSL https://ollama.ai/install.sh | sh
    
    # macOS (alternative)
    brew install ollama
    
    # Windows
    # Download from https://ollama.ai/download
    
  4. Check custom URL:

    export OLLAMA_BASE_URL="http://localhost:11434"
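
  5. Script a quick health check (a minimal sketch that starts Ollama if it is down):

    if ! curl -sf http://localhost:11434/api/tags > /dev/null; then
      echo "Ollama not responding; starting it..."
      ollama serve &   # run in the background
      sleep 3          # give the server a moment to come up
    fi
    curl -sf http://localhost:11434/api/tags > /dev/null && echo "Ollama is up"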
    

Model Not Found

Problem: Requested model not downloaded.

Symptoms:

Error: model 'llama3.3:70b' not found

Solutions:

  1. Pull model:

    ollama pull llama3.3:70b
    
  2. List available models:

    ollama list
    
  3. Use auto-download script:

    ./scripts/download_ollama_models.sh
    # Interactive script to download recommended models
    
  4. Recommended models:

    # Best performance (requires 40GB+ RAM)
    ollama pull llama3.3:70b
    
    # Balanced (requires 8GB+ RAM)
    ollama pull llama3.1:8b
    
    # Lightweight (requires 4GB+ RAM)
    ollama pull llama3.2:3b
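
  5. Pull only if missing (a small guard so scripts don't re-download models):

    MODEL="llama3.1:8b"
    ollama list | awk 'NR>1 {print $1}' | grep -qx "$MODEL" || ollama pull "$MODEL"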
    

GPU Not Detected

Problem: Ollama not using GPU acceleration.

Symptoms:

Warning: GPU not detected, using CPU inference
Inference is very slow

Solutions:

  1. Verify GPU availability:

    # NVIDIA
    nvidia-smi
    
    # AMD (Linux)
    rocm-smi
    
    # Apple Silicon
    system_profiler SPDisplaysDataType | grep "Chipset Model"
    
  2. Install GPU drivers:

    • NVIDIA: Install CUDA toolkit and drivers

    • AMD: Install ROCm

    • Apple: GPU support built-in (macOS 12+)

  3. Restart Ollama:

    # Stop Ollama
    pkill ollama
    
    # Start again (should detect GPU)
    ollama serve
    
  4. Check GPU detection script:

    # Run GPU detection test
    python -c "from review_bot_automator.llm.providers.gpu_detector import GPUDetector; print(GPUDetector.detect_gpu('http://localhost:11434'))"
    

Circuit Breaker Issues

CircuitBreakerOpen Error

Problem: Requests blocked by circuit breaker.

Symptoms:

CircuitBreakerOpen: Circuit breaker is open, retry in 45.2s

Causes:

  • 5+ consecutive LLM API failures triggered the circuit

  • Provider experiencing outage or rate limiting

  • Network connectivity issues

Solutions:

  1. Wait for cooldown:

    # Default cooldown is 60 seconds
    # Check remaining time in error message
    
  2. Check provider status:

    # OpenAI
    curl https://status.openai.com/api/v2/status.json
    
    # Anthropic
    curl https://status.anthropic.com/api/v2/status.json
    
  3. Review logs for root cause:

    pr-resolve apply 123 --log-level DEBUG
    # Look for "Circuit breaker opening after X consecutive failures"
    
  4. Adjust threshold if too sensitive:

    export CR_LLM_CIRCUIT_BREAKER_THRESHOLD=10  # Default: 5
    export CR_LLM_CIRCUIT_BREAKER_COOLDOWN=30.0  # Default: 60.0
    
  5. Disable circuit breaker (not recommended):

    export CR_LLM_CIRCUIT_BREAKER_ENABLED=false
    

Circuit Trips Too Often

Problem: Circuit breaker triggers on transient failures.

Solutions:

  1. Increase threshold:

    llm:
      circuit_breaker_threshold: 10  # Allow more failures
    
  2. Enable retry first:

    llm:
      retry_on_rate_limit: true
      retry_max_attempts: 5
    
  3. Check network stability:

    ping api.openai.com
    ping api.anthropic.com
    

Cost and Budget Issues

Budget Exceeded Error

Problem: LLM requests blocked due to budget limit.

Symptoms:

ERROR: LLM cost budget exceeded: $5.23 of $5.00 used

Solutions:

  1. Increase budget:

    export CR_LLM_COST_BUDGET=10.0  # Increase from $5 to $10
    
  2. Check current usage:

    pr-resolve apply 123 --llm-enabled --show-metrics
    # Review "Total cost" in output
    
  3. Use cheaper model:

    # Anthropic: Use Haiku instead of Sonnet
    export CR_LLM_MODEL="claude-haiku-4-20250514"
    
    # OpenAI: Use mini instead of full GPT-4o
    export CR_LLM_MODEL="gpt-4o-mini"
    
  4. Enable caching:

    llm:
      cache_enabled: true  # Reuse responses for identical prompts
    

Unexpected High Costs

Problem: LLM costs higher than expected.

Solutions:

  1. Enable metrics to track:

    pr-resolve apply 123 --llm-enabled --show-metrics --metrics-output costs.json
    
  2. Review per-provider costs:

    cat costs.json | jq '.provider_stats'
    
  3. Check cache hit rate:

    # Low cache hit rate = more API calls = higher cost
    # Target: > 30% cache hit rate
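    # Start by inspecting what the metrics file actually records:
    jq 'keys' costs.json
    # Then drill into cache-related fields (the field name below is
    # hypothetical -- check the real structure in your costs.json):
    jq '.cache_stats' costs.json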
    
  4. Use free providers for development:

    export CR_LLM_PROVIDER=ollama
    export CR_LLM_MODEL=qwen2.5-coder:7b
    

Budget Warning Not Appearing

Problem: No warning before budget exceeded.

Solutions:

  1. Warning appears at 80% by default:

    WARNING: LLM cost budget warning: $4.12 of $5.00 used (82.4%)
    
  2. Check log level:

    export CR_LOG_LEVEL=INFO  # Warning requires INFO or lower
    

Parallel Processing Issues

Race Conditions or Corrupted Output

Problem: Parallel processing produces inconsistent results.

Symptoms:

  • Different results on each run

  • Partial or corrupted output files

  • Mixed content from different files

Solutions:

  1. Reduce worker count:

    pr-resolve apply 123 --parallel --max-workers 2
    
  2. Disable parallel processing:

    pr-resolve apply 123  # Sequential by default
    # Or explicitly:
    export CR_PARALLEL=false
    
  3. Check for file conflicts:

    # If multiple comments target same file, conflicts may occur
    pr-resolve analyze 123 --log-level DEBUG
    

Workers Exhausted or Hanging

Problem: Parallel workers timeout or hang.

Symptoms:

Warning: Worker timeout waiting for response
Processing stalled at X%

Solutions:

  1. Reduce workers:

    export CR_MAX_WORKERS=2  # Default: 4
    
  2. Check LLM provider health:

    # Test provider directly
    curl http://localhost:11434/api/tags  # Ollama
    
  3. Increase timeout:

    # Not configurable via env var - contact support
    
  4. Monitor system resources:

    htop  # Check CPU/memory usage
    

Out of Memory with Parallel Processing

Problem: System runs out of memory.

Solutions:

  1. Reduce workers:

    export CR_MAX_WORKERS=2
    
  2. Use smaller Ollama model:

    export CR_LLM_MODEL=llama3.2:3b  # Instead of 70b
    
  3. Process sequentially:

    export CR_PARALLEL=false
    
  4. Monitor memory usage:

    watch -n 1 free -h
    

Privacy Verification Issues

Privacy Script Fails

Problem: verify_privacy.sh script fails or reports errors.

Symptoms:

Error: Ollama not running
Error: Test PR processing failed

Solutions:

  1. Start Ollama first:

    ollama serve
    
  2. Pull required model:

    ollama pull llama3.3:70b
    
  3. Set environment variables:

    export CR_LLM_ENABLED="true"
    export CR_LLM_PROVIDER="ollama"
    export CR_LLM_MODEL="llama3.3:70b"
    export GITHUB_PERSONAL_ACCESS_TOKEN="ghp_..."
    
  4. Run script with debug:

    bash -x ./scripts/verify_privacy.sh
    

Unexpected Network Connections

Problem: Privacy script detects connections to third-party LLM vendors.

Symptoms:

ERROR: Detected connection to api.openai.com
Privacy violation detected!

Solutions:

  1. Verify Ollama provider is set:

    echo $CR_LLM_PROVIDER
    # Should be "ollama"
    
  2. Check for conflicting environment variables:

    env | grep CR_LLM
    # Remove any OpenAI/Anthropic API keys
    unset CR_LLM_API_KEY
    
  3. Restart with clean environment:

    # Clear all LLM-related vars
    unset CR_LLM_API_KEY
    unset CR_LLM_PROVIDER
    unset CR_LLM_MODEL
    
    # Set only Ollama vars
    export CR_LLM_ENABLED="true"
    export CR_LLM_PROVIDER="ollama"
    export CR_LLM_MODEL="llama3.3:70b"
    

Performance Issues

Slow Processing

Problem: PR analysis takes too long.

Symptoms:

  • Processing takes 10+ minutes for small PRs

  • Timeout errors

Solutions:

  1. Enable parallel processing:

    pr-resolve apply --pr 123 --owner myorg --repo myrepo \
      --parallel --max-workers 8
    
  2. Use faster model:

    # Anthropic (fast with caching)
    export CR_LLM_MODEL="claude-haiku-4"
    
    # OpenAI (fast)
    export CR_LLM_MODEL="gpt-4o-mini"
    
    # Ollama (use smaller model)
    export CR_LLM_MODEL="llama3.1:8b"
    
  3. Check network connection:

    # Test GitHub API speed
    time gh api user
    

High Memory Usage

Problem: Tool uses too much memory.

Symptoms:

  • System slowdown

  • Out of memory errors

Solutions:

  1. Use smaller Ollama model:

    # Instead of llama3.3:70b (40GB RAM)
    ollama pull llama3.1:8b    # 8GB RAM
    ollama pull llama3.2:3b    # 4GB RAM
    
  2. Reduce parallel workers:

    pr-resolve apply --pr 123 --owner myorg --repo myrepo \
      --parallel --max-workers 2
    
  3. Process in batches:

    # Process specific files instead of whole PR
    pr-resolve apply --pr 123 --owner myorg --repo myrepo \
      --mode non-conflicts-only  # Apply only non-conflicting changes first
    

Installation Issues

Dependency Conflicts

Problem: pip reports dependency conflicts.

Symptoms:

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed

Solutions:

  1. Use fresh virtual environment:

    python -m venv .venv
    source .venv/bin/activate
    pip install review-bot-automator
    
  2. Upgrade pip:

    pip install --upgrade pip setuptools wheel
    
  3. Install from source:

    git clone https://github.com/VirtualAgentics/review-bot-automator.git
    cd review-bot-automator
    python -m venv .venv
    source .venv/bin/activate
    pip install -e ".[dev]"
    

Python Version Incompatible

Problem: Python version too old.

Symptoms:

ERROR: Package requires Python >=3.12

Solutions:

  1. Check Python version:

    python --version
    # Should be 3.12 or higher
    
  2. Install Python 3.12+:

    # macOS
    brew install python@3.12
    
    # Ubuntu
    sudo apt update
    sudo apt install python3.12 python3.12-venv
    
    # Or use pyenv
    pyenv install 3.12.0
    pyenv global 3.12.0
    

General Issues

“No conflicts detected” but comments exist

Problem: Analyzer reports no conflicts but PR has comments.

Solutions:

  1. Check comment format:

    • Comments must be from CodeRabbit or in another supported format

    • Comments must contain change suggestions (not just reviews)

  2. Verify LLM parsing:

    # Enable LLM parsing for better coverage
    export CR_LLM_ENABLED="true"
    export CR_LLM_PROVIDER="ollama"  # or other provider
    
  3. Check comment line numbers:

    • Comments must be on lines that exist in the current file

    • Outdated comments may be ignored

Type Checking Errors

Problem: MyPy reports type errors during development.

Solutions:

  1. Run MyPy manually:

    source .venv/bin/activate
    mypy src/ --strict
    
  2. Check MyPy configuration:

    cat pyproject.toml | grep -A 10 "\[tool.mypy\]"
    
  3. Update type stubs:

    pip install --upgrade types-requests types-PyYAML
    

Tests Failing

Problem: pytest reports test failures.

Solutions:

  1. Run tests with verbose output:

    source .venv/bin/activate
    pytest -v
    
  2. Run specific test:

    pytest tests/test_specific.py::test_function_name -v
    
  3. Check test dependencies:

    pip install -e ".[dev]"
    
  4. Clear pytest cache:

    rm -rf .pytest_cache
    pytest --cache-clear
    

Getting Additional Help

If your issue isn’t covered here:

  1. Check existing issues:

    • Search at: https://github.com/VirtualAgentics/review-bot-automator/issues

  2. Create new issue:

    • Use issue templates

    • Include error messages, commands run, environment details

    • Provide minimal reproduction steps

  3. Join discussions:

    • Ask questions in the repository's GitHub Discussions

  4. Review documentation:

    • See the repository README and docs for configuration and usage details

Debug Mode

For any issue, enable debug logging for detailed output:

# CLI flag
pr-resolve apply --pr 123 --owner myorg --repo myrepo --log-level DEBUG

# Environment variable
export CR_LOG_LEVEL="DEBUG"

# With log file
pr-resolve apply --pr 123 --owner myorg --repo myrepo \
  --log-level DEBUG --log-file debug.log

Then share the debug log when reporting issues.