Troubleshooting

Common issues and solutions for ChimeraLM users.

Installation Issues

Python Version Errors

ModuleNotFoundError or ImportError after installation

Symptom: ModuleNotFoundError: No module named 'chimeralm'

Cause: Wrong Python environment or installation failed

Solution:

# Check Python version (must be 3.10 or 3.11)
python --version

# Verify pip is using correct Python
which pip
python -m pip --version

# Reinstall in current environment
python -m pip install --force-reinstall chimeralm
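
To confirm the package is installed for the interpreter you are actually running, you can query the installed metadata directly (standard library only, nothing ChimeraLM-specific):

# Confirm chimeralm is installed for *this* interpreter
python -c "import importlib.metadata as md; print(md.version('chimeralm'))"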

Unsupported Python version: requires Python 3.10 or 3.11

Symptom: Installation fails with Python version error

Cause: ChimeraLM doesn't support Python 3.12 yet

Solution:

# Create environment with Python 3.11
conda create -n chimeralm python=3.11
conda activate chimeralm
pip install chimeralm

Dependency Conflicts

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed

Symptom: Pip reports dependency conflicts during installation

Solution:

# Install in a clean environment
python -m venv chimeralm_env
source chimeralm_env/bin/activate  # On Windows: chimeralm_env\Scripts\activate
pip install chimeralm
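
After installing, pip's built-in consistency checker can confirm no conflicts remain:

# Reports any broken requirements; prints a clean message if all is well
pip check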

Runtime Issues

CUDA / GPU Problems

CUDA out of memory

Symptom: RuntimeError: CUDA out of memory

Cause: Batch size too large for your GPU

Solution:

# Reduce batch size (default is 12)
chimeralm predict input.bam --gpus 1 --batch-size 8

# Or use CPU mode
chimeralm predict input.bam --gpus 0
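
To pick a batch size that fits, it can help to check how much memory the GPU actually has (device index 0 assumed):

# Total memory of GPU 0 in GiB
python -c "import torch; p = torch.cuda.get_device_properties(0); print(f'{p.total_memory / 1024**3:.1f} GiB')"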

GPU not detected despite having CUDA

Symptom: ChimeraLM runs on CPU even with --gpus 1

Cause: PyTorch not installed with CUDA support

Solution:

# Check CUDA availability
python -c "import torch; print(torch.cuda.is_available())"

# If False, reinstall PyTorch with CUDA
pip uninstall torch torchvision torchaudio
pip install torch==2.5.1 --index-url https://download.pytorch.org/whl/cu121
pip install chimeralm
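
After reinstalling, confirm PyTorch now sees the GPU (device index 0 assumed):

# Should print your GPU name and the CUDA version PyTorch was built with
python -c "import torch; print(torch.cuda.get_device_name(0), torch.version.cuda)"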

Model Loading Issues

HTTPError: 404 Client Error when loading model

Symptom: HTTPError: 404 Client Error: Not Found for url

Cause: Cannot connect to Hugging Face Hub to download model

Solution:

# Check internet connection
curl -I https://huggingface.co

# Use local model checkpoint
chimeralm predict input.bam --ckpt /path/to/local/checkpoint.ckpt

# Or download model manually first
python -c "from transformers import AutoModel; AutoModel.from_pretrained('yangliz5/chimeralm')"

Model not found error

Symptom: Model yangliz5/chimeralm not found

Cause: Model not cached locally and internet unavailable

Solution: Download the model while you have internet access; it will be cached for later offline use:

# Pre-download model
python -c "from chimeralm.models.lm import ChimeraLM; ChimeraLM.from_pretrained('yangliz5/chimeralm')"

BAM File Issues

ValueError: BAM file not readable or invalid format

Symptom: ValueError: BAM file not readable

Cause: File is corrupted, not a valid BAM, or lacks SA tags

Solution:

# Verify BAM file is valid
samtools view -H input.bam | head

# Check for SA tags (chimeric indicator)
samtools view input.bam | grep -c "SA:Z:"

# If no SA tags found
# ChimeraLM only processes reads with SA tags (chimeric candidates)
# Make sure your WGA data includes supplementary alignments
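
If you prefer a programmatic check, here is a minimal pysam sketch that counts reads carrying an SA tag (it assumes pysam is available in your environment; it is a common companion to samtools but is not guaranteed by the ChimeraLM install):

# Count reads with an SA tag using pysam
python - <<'EOF'
import pysam

with_sa = total = 0
with pysam.AlignmentFile("input.bam", "rb") as bam:
    # until_eof=True streams the whole file, no index required
    for read in bam.fetch(until_eof=True):
        total += 1
        if read.has_tag("SA"):  # SA tag marks chimeric candidates
            with_sa += 1
print(f"{with_sa} of {total} reads carry SA tags")
EOF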

PermissionError: [Errno 13] Permission denied

Symptom: Cannot read or write BAM files

Solution:

# Check file permissions
ls -l input.bam

# Add read permission
chmod +r input.bam

# Check write permission for output directory
ls -ld predictions/
chmod +w predictions/

Performance Issues

Predictions are very slow (>1 minute for 1000 reads)

Symptom: Predictions take much longer than expected

Cause: Running on CPU or batch size too small

Solution:

# Enable GPU if available
chimeralm predict input.bam --gpus 1 --batch-size 24

# Increase worker threads (if using CPU)
chimeralm predict input.bam --gpus 0 --workers 4
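
For a rough throughput comparison between configurations, time a capped run rather than the full file (--max-sample, also used in the memory section below, limits the run to the first N reads):

# Benchmark 1000 reads on GPU vs CPU
time chimeralm predict input.bam --gpus 1 --max-sample 1000
time chimeralm predict input.bam --gpus 0 --workers 4 --max-sample 1000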

High memory usage

Symptom: System runs out of RAM during prediction

Solution:

# Reduce batch size
chimeralm predict input.bam --batch-size 4

# Process in smaller chunks with --max-sample
chimeralm predict input.bam --max-sample 500
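
On Linux, GNU time reports the peak RAM a run actually used, which makes it easy to tune --batch-size (note the full path; the shell built-in time does not support -v):

# "Maximum resident set size" in the output is the peak RAM in kB
/usr/bin/time -v chimeralm predict input.bam --batch-size 4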

Output Issues

Predictions file is empty

Symptom: predictions.txt created but has no content

Cause: No reads with SA tags found in BAM file

Solution:

# Verify input BAM has chimeric candidates
samtools view input.bam | grep "SA:Z:" | wc -l

# If count is 0, your BAM has no chimeric candidates
# This is expected for non-WGA data

Filtered BAM has same size as input

Symptom: chimeralm filter produces an output file the same size as the input

Cause: No chimeric reads detected (all labeled as 0)

Check:

# Count chimeric predictions
grep -c "1$" predictions/predictions.txt

# If count is 0, no chimeric reads detected
# This could be normal for high-quality data
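
To confirm whether filtering actually removed reads, compare read counts directly; the output filename below is illustrative, so substitute whatever path your chimeralm filter invocation wrote:

# Read counts before and after filtering
samtools view -c input.bam
samtools view -c filtered.bam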

General Help

Enable Verbose Logging

For debugging, enable detailed output:

chimeralm predict input.bam --verbose

Check ChimeraLM Version

Ensure you're using the latest version:

# Check current version
chimeralm --version

# Update to latest
pip install --upgrade chimeralm

System Information

Collect system info for bug reports:

# Python version
python --version

# PyTorch version and CUDA
python -c "import torch; print(f'PyTorch: {torch.__version__}, CUDA: {torch.cuda.is_available()}')"

# ChimeraLM version
chimeralm --version
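
To gather all of the above in one shot for a bug report (standard library plus torch only; nothing ChimeraLM-specific beyond the package name):

# Collect system info for a bug report
python - <<'EOF'
import platform
import importlib.metadata as md
import torch

print(f"OS:        {platform.platform()}")
print(f"Python:    {platform.python_version()}")
print(f"PyTorch:   {torch.__version__} (CUDA available: {torch.cuda.is_available()})")
print(f"ChimeraLM: {md.version('chimeralm')}")
EOF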

Getting Further Help

If your issue isn't covered here:

  1. Check existing issues: GitHub Issues
  2. Search discussions: GitHub Discussions
  3. Open a new issue, including:
     • ChimeraLM version (chimeralm --version)
     • Python version (python --version)
     • Operating system
     • Complete error message
     • Minimal reproducible example

Before Opening an Issue

  • Update to the latest version
  • Try with sample data (tests/data/mk1c_test.bam)
  • Include full error traceback
  • Describe what you expected vs. what happened