Chai-1 imports EsmModel from the transformers top-level namespace.
transformers 4.45+ moved it, causing an ImportError at runtime.
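A dependency pin is one way to keep the top-level import resolving; a minimal sketch as a requirements fragment, where the exact upper bound is an assumption inferred from the version noted above:

```
# Keep the top-level EsmModel import that Chai-1 relies on (bound assumed).
transformers<4.45
```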
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
K8s caches the :latest tag. Using :v2 ensures the permission-fixed
image is pulled instead of the stale cached one.
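For reference, the same staleness can also be avoided at the pod level by forcing a pull; a minimal sketch of a container entry, where the registry path and names are placeholders, not taken from the source:

```yaml
containers:
  - name: chai1
    image: harbor.example/chai1:v2   # placeholder registry path; :v2 per above
    imagePullPolicy: Always          # bypasses the node's cached image
```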
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
K8s runs containers as non-root. Chai-1 tries to download model
weights to /opt/conda/.../downloads, which fails with a PermissionError.
Set writable directories and env vars for matplotlib, HF, and chai downloads.
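The writable-directory setup can be sketched in the Dockerfile; MPLCONFIGDIR and HF_HOME are the standard matplotlib and Hugging Face variables, while CHAI_DOWNLOADS_DIR and the /tmp paths are assumptions, not confirmed by the source:

```dockerfile
# Redirect caches and model downloads to world-writable paths so the
# non-root K8s user can write them (paths and chai var are assumptions).
ENV MPLCONFIGDIR=/tmp/matplotlib \
    HF_HOME=/tmp/huggingface \
    CHAI_DOWNLOADS_DIR=/tmp/chai-downloads
RUN mkdir -p /tmp/matplotlib /tmp/huggingface /tmp/chai-downloads \
    && chmod -R 777 /tmp/matplotlib /tmp/huggingface /tmp/chai-downloads
```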
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Enable Docker globally (required by WES)
- Set default container, memory (32GB), cpus (4) at process level
- Add NVIDIA_VISIBLE_DEVICES env for GPU visibility in k8s_gpu
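These settings correspond to a nextflow.config along these lines; the container reference is a placeholder and only the values listed above are taken from the source:

```groovy
// Minimal sketch of the described nextflow.config; image URI is illustrative.
docker.enabled = true                      // required by WES

process {
    container = 'harbor.example/chai1:v2'  // placeholder image reference
    memory    = '32 GB'
    cpus      = 4
}
```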
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
WES runs Nextflow with -profile k8s_gpu for GPU workloads.
The profile configures the K8s executor, a GPU node selector, and
the eureka-pvc storage claim for data access.
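A profile block matching this description might look like the following; the node-selector label and mount path are assumptions, while the executor, selector mechanism, and eureka-pvc claim come from the commit:

```groovy
profiles {
    k8s_gpu {
        process.executor = 'k8s'
        // Schedule onto GPU nodes; the label key/value are assumptions.
        process.pod = [nodeSelector: 'gpu=true']
        // Expose GPUs inside the container (see the nextflow.config change).
        env.NVIDIA_VISIBLE_DEVICES = 'all'
        // eureka-pvc claim for shared data access; mount path is an assumption.
        k8s.storageClaimName = 'eureka-pvc'
        k8s.storageMountPath = '/workspace'
    }
}
```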
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Switch from the nvidia/cuda base image plus manual PyTorch install to
the pytorch/pytorch:2.6.0-cuda12.4-cudnn9-runtime base image.
This avoids the ~15GB build that exceeded Docker's disk limits.
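The base-image switch can be sketched as below; the pip package name is an assumption, not stated in the source:

```dockerfile
# Before: nvidia/cuda base + manual PyTorch install (~15GB build).
# After: prebuilt PyTorch runtime with CUDA 12.4 / cuDNN 9 included.
FROM pytorch/pytorch:2.6.0-cuda12.4-cudnn9-runtime

# Only application deps remain to install; torch ships with the base image.
RUN pip install --no-cache-dir chai_lab   # package name is an assumption
```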
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Nextflow pipeline using the chai1 Docker image from Harbor
- S3-based input/output paths (s3://omic/eureka/chai-lab/)
- GPU-accelerated protein folding with MSA support
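A minimal sketch of what such a pipeline process could look like; the CLI invocation, file patterns beyond the stated S3 prefix, accelerator directive, and image reference are all assumptions, not taken from the source:

```groovy
// Hypothetical folding process; only the s3://omic/eureka/chai-lab/ prefix
// is from the commit, everything else is illustrative.
process FOLD {
    container 'harbor.example/chai1:v2'       // placeholder image reference
    accelerator 1, type: 'nvidia.com/gpu'     // request one GPU

    input:
    path fasta

    output:
    path 'predictions/*'

    script:
    """
    chai-lab fold ${fasta} predictions/
    """
}

workflow {
    FOLD(Channel.fromPath('s3://omic/eureka/chai-lab/input/*.fasta'))
}
```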
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>