Switch from nvidia/cuda base + manual PyTorch install to
pytorch/pytorch:2.6.0-cuda12.4-cudnn9-runtime base image.
This avoids the ~15 GB image build that exceeded Docker's disk limits.
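A minimal sketch of the change, assuming the image only needs PyTorch preinstalled (the commented-out "before" lines and the chai_lab install are illustrative, not taken from the actual Dockerfile):

```dockerfile
# Before: generic CUDA base plus a manual PyTorch install (~15 GB build)
# FROM nvidia/cuda:12.4.1-runtime-ubuntu22.04
# RUN pip install torch==2.6.0

# After: PyTorch, CUDA 12.4, and cuDNN 9 already baked into the base image,
# so the large PyTorch wheel download/build is skipped entirely
FROM pytorch/pytorch:2.6.0-cuda12.4-cudnn9-runtime

# Project dependency (assumed; name follows the chai-lab pipeline below)
RUN pip install --no-cache-dir chai_lab
```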
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Nextflow pipeline using the chai1 Docker image from Harbor
- S3-based input/output paths (s3://omic/eureka/chai-lab/)
- GPU-accelerated protein folding with MSA support
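A hedged sketch of what such a pipeline might look like in Nextflow DSL2 (the Harbor registry path, S3 input layout, GPU directive, and chai-lab CLI invocation are all assumptions, not taken from the actual pipeline):

```nextflow
// Hypothetical process wrapping the chai1 container for GPU folding
process CHAI1_FOLD {
    container 'harbor.example.com/omic/chai1:latest'  // assumed Harbor path
    accelerator 1                                     // request one GPU

    input:
    path fasta

    output:
    path 'predictions/'

    script:
    // CLI call is illustrative; flags depend on the chai-lab version
    """
    chai-lab fold --use-msa-server ${fasta} predictions/
    """
}

workflow {
    // S3 prefix from the commit message; exact key layout is assumed
    CHAI1_FOLD(Channel.fromPath('s3://omic/eureka/chai-lab/inputs/*.fasta'))
}
```

Reading inputs from and publishing outputs back to the `s3://omic/eureka/chai-lab/` prefix keeps the pipeline stateless, which is why the commit lists S3-based paths rather than local volumes.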
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>