# Troubleshooting
Common issues and solutions for ChimeraLM users.
## Installation Issues
### Python Version Errors
ModuleNotFoundError or ImportError after installation
Symptom: ModuleNotFoundError: No module named 'chimeralm'
Cause: Wrong Python environment or installation failed
Solution:
# Check Python version (must be 3.10 or 3.11)
python --version
# Verify pip is using correct Python
which pip
python -m pip --version
# Reinstall in current environment
python -m pip install --force-reinstall chimeralm
Unsupported Python version: requires Python 3.10 or 3.11
Symptom: Installation fails with Python version error
Cause: ChimeraLM doesn't support Python 3.12 yet
Solution:
# Create environment with Python 3.11
conda create -n chimeralm python=3.11
conda activate chimeralm
pip install chimeralm
### Dependency Conflicts
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed
Symptom: Pip reports dependency conflicts during installation
Solution:
# Install in a clean environment
python -m venv chimeralm_env
source chimeralm_env/bin/activate # On Windows: chimeralm_env\Scripts\activate
pip install chimeralm
## Runtime Issues
### CUDA / GPU Problems
CUDA out of memory
Symptom: RuntimeError: CUDA out of memory
Cause: Batch size too large for your GPU
Solution:
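A sketch of the usual fix: reduce the batch size until the model fits in GPU memory. The `--batch-size` flag name is an assumption; check `chimeralm predict --help` for the actual option.

```shell
# Try a smaller batch size (--batch-size is an assumed flag name)
chimeralm predict input.bam --gpus 1 --batch-size 8

# As a last resort, fall back to CPU, which is slower but not VRAM-limited
chimeralm predict input.bam
```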
GPU not detected despite having CUDA
Symptom: ChimeraLM runs on CPU even with --gpus 1
Cause: PyTorch not installed with CUDA support
Solution:
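First confirm whether the installed PyTorch build can see CUDA; if not, reinstall a CUDA-enabled wheel. The `cu121` index URL below is only an example — pick the one matching your driver from pytorch.org.

```shell
# Check whether PyTorch was built with CUDA support
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

# If it prints False, reinstall a CUDA-enabled build
# (cu121 is an example; choose the index URL for your CUDA version)
pip install --force-reinstall torch --index-url https://download.pytorch.org/whl/cu121
```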
### Model Loading Issues
HTTPError: 404 Client Error when loading model
Symptom: HTTPError: 404 Client Error: Not Found for url
Cause: The model could not be fetched from the Hugging Face Hub (wrong model id or revision, or the Hub is unreachable)
Solution:
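A sketch for diagnosing the download, assuming a recent `huggingface_hub` release (which provides the `huggingface-cli download` command):

```shell
# Update huggingface_hub; old versions can request stale URLs
pip install --upgrade huggingface_hub

# Retry the download directly to see the full error and verify the model id
huggingface-cli download yangliz5/chimeralm
```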
Model not found error
Symptom: Model yangliz5/chimeralm not found
Cause: Model not cached locally and internet unavailable
Solution: Download the model once while you have internet access; it will be cached for offline use:
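One way to pre-fetch the checkpoint into the standard `huggingface_hub` cache (the model id `yangliz5/chimeralm` is the one from the error above):

```shell
# Pre-download the model into the local Hugging Face cache
python -c "from huggingface_hub import snapshot_download; snapshot_download('yangliz5/chimeralm')"

# Later, offline runs read from that cache; HF_HUB_OFFLINE=1 forbids any
# network access, so a cache miss fails fast instead of hanging
export HF_HUB_OFFLINE=1
chimeralm predict input.bam
```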
### BAM File Issues
ValueError: BAM file not readable or invalid format
Symptom: ValueError: BAM file not readable
Cause: File is corrupted, not a valid BAM, or lacks SA tags
Solution:
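Assuming `samtools` is installed, each possible cause can be checked in turn:

```shell
# Validate the BAM; no output means it passed
samtools quickcheck -v input.bam

# Inspect the header to confirm it is a genuine BAM
samtools view -H input.bam | head

# Count reads carrying SA (supplementary alignment) tags
samtools view input.bam | grep -c "SA:Z:"
```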
PermissionError: [Errno 13] Permission denied
Symptom: Cannot read or write BAM files
Solution:
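Check the permissions on both the input file and the output location (`input.bam` and `output_dir/` are placeholders for your own paths):

```shell
# Inspect permissions on the input BAM and the output directory
ls -l input.bam
ls -ld output_dir/

# Grant yourself read access to the input and write access to the output
chmod u+r input.bam
chmod u+w output_dir/
```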
## Performance Issues
Predictions are very slow (>1 minute for 1000 reads)
Symptom: Predictions take much longer than expected
Cause: Running on CPU or batch size too small
Solution:
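The remedies map directly to the two causes above: move to the GPU and raise the batch size. The `--batch-size` flag name is an assumption; confirm with `chimeralm predict --help`.

```shell
# Use the GPU and a larger batch size (--batch-size is an assumed flag name)
chimeralm predict input.bam --gpus 1 --batch-size 32
```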
High memory usage
Symptom: System runs out of RAM during prediction
Solution:
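Two hedged options: gauge memory use on a subset first (requires `samtools`), or cap peak memory with a smaller batch size (`--batch-size` is an assumed flag name):

```shell
# Extract a subset (here chr1) to gauge memory use before a full run
samtools view -b input.bam chr1 -o subset.bam
chimeralm predict subset.bam

# A smaller batch size also lowers peak RAM
chimeralm predict input.bam --batch-size 4
```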
## Output Issues
Predictions file is empty
Symptom: predictions.txt created but has no content
Cause: No reads with SA tags found in BAM file
Solution:
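Since only reads with SA tags are scored, count them first (assumes `samtools` is installed):

```shell
# Count reads with SA (supplementary alignment) tags; if this prints 0,
# your aligner did not emit supplementary alignments for this BAM
samtools view input.bam | grep -c "SA:Z:"
```

If the count is 0, re-align with supplementary alignments enabled before running ChimeraLM.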
Filtered BAM has the same size as the input
Symptom: chimeralm filter produces output same size as input
Cause: No chimeric reads detected (all labeled as 0)
Check:
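Inspect the label distribution in the predictions file. This assumes a `read_id<TAB>label` layout in `predictions.txt`; adjust the column number if your format differs:

```shell
# Tally predicted labels; all 0s means no chimeras were detected
cut -f2 predictions.txt | sort | uniq -c
```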
## General Help
### Enable Verbose Logging
For debugging, enable detailed output:
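The exact flag depends on your ChimeraLM version; `--verbose` below is an assumption, so check `chimeralm --help` for the real option:

```shell
# --verbose is an assumed flag name; confirm with `chimeralm --help`
chimeralm --verbose predict input.bam
```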
### Check ChimeraLM Version
Ensure you're using the latest version:
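Compare your installed version against the latest release on PyPI, then upgrade through pip:

```shell
# Print the installed version
chimeralm --version

# Upgrade to the latest release
python -m pip install --upgrade chimeralm
```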
### System Information
Collect system info for bug reports:
# Python version
python --version
# PyTorch version and CUDA
python -c "import torch; print(f'PyTorch: {torch.__version__}, CUDA: {torch.cuda.is_available()}')"
# ChimeraLM version
chimeralm --version
## Getting Further Help
If your issue isn't covered here:

- Check existing issues: GitHub Issues
- Search discussions: GitHub Discussions
- Open a new issue, including:
    - ChimeraLM version (`chimeralm --version`)
    - Python version (`python --version`)
    - Operating system
    - Complete error message
    - Minimal reproducible example
**Before Opening an Issue**

- Update to the latest version
- Try with sample data (`tests/data/mk1c_test.bam`)
- Include the full error traceback
- Describe what you expected vs. what happened