Compare commits
3 Commits
| Author | SHA1 | Date |
|---|---|---|
| | b773e291d2 | |
| | b408cfd4dd | |
| | 21d77e3faa | |
188
README.md
@@ -1,93 +1,159 @@
# Corto Metabolomics Analysis Pipeline

A Python implementation of the corto algorithm for analyzing metabolomics and gene expression data, translated from the original R codebase. This project provides tools for preprocessing multi-omics data and performing network analysis to identify relationships between metabolites and gene expression.

## Background

The original corto algorithm was implemented in R for analyzing gene expression data and identifying master regulators. This project extends and modernizes the implementation by:

1. Translating core functionality to Python
2. Adding support for metabolomics data
3. Implementing memory-efficient processing for large datasets
4. Adding parallel processing capabilities
5. Providing a robust command-line interface

## Code Translation Overview

### Detailed Code Translation Mapping

#### corto-data-prep.py

This script primarily implements functionality from corto.R:

1. Data Loading and Validation
   - Initial data loading logic from the `corto()` function
   - Input validation checks in `validate_ccle_format()`
   - Initial data preprocessing steps in `preprocess_ccle_data()`

2. Zero Variance Feature Handling
   - Translates the zero-variance removal logic (a Python counterpart follows the excerpt):

```R
# From corto.R
if(sum(is.na(inmat))>0){
  stop("Input matrix contains NA fields")
}
allvars<-apply(inmat,1,var)
keep<-names(allvars)[allvars>0]
inmat<-inmat[keep,]
```
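
For comparison, the Python side expresses the same filter with pandas. This is a condensed sketch of the `remove_zero_variance` helper defined in corto-matrix-combination.py (the NA check is handled separately there, via `pd.to_numeric` coercion and optional imputation):

```python
import pandas as pd

def remove_zero_variance(df: pd.DataFrame) -> pd.DataFrame:
    """Drop features (rows) whose variance across samples is zero."""
    variances = df.var(axis=1)
    keep = variances[variances > 0].index
    return df.loc[keep]
```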

3. CNV Correction
   - Implements the CNV correction logic from corto.R:

```R
if(!is.null(cnvmat)){
  commonrows<-intersect(rownames(cnvmat),rownames(inmat))
  commoncols<-intersect(colnames(cnvmat),colnames(inmat))
  cnvmat<-cnvmat[commonrows,commoncols]
  inmat<-inmat[commonrows,commoncols]
}
```

#### corto-matrix-combination.py

This script implements functionality from multiple R sources:

1. From functions.R:
   - Direct translation of `p2r()` (a Python counterpart follows the excerpt):

```R
p2r<-function(p,n){
  t<-qt(p/2,df=n-2,lower.tail=FALSE)
  r<-sqrt((t^2)/(n-2+t^2))
  return(r)
}
```
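
The Python translation in corto-matrix-combination.py uses `scipy.stats.t.ppf`. SciPy's `ppf` is the lower-tail quantile, so `t` comes out with the opposite sign to R's `qt(..., lower.tail=FALSE)`; the difference vanishes because `t` is squared:

```python
import numpy as np
from scipy import stats

def p2r(p: float, n: int) -> float:
    """Convert a p-value to the corresponding correlation coefficient threshold."""
    t = stats.t.ppf(p / 2, df=n - 2)             # opposite sign to R's upper-tail qt
    return np.sqrt((t ** 2) / (n - 2 + t ** 2))  # squaring makes the sign irrelevant
```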

2. From mra.R:
   - Correlation calculation logic from the MRA functions
   - Bootstrap implementation approach

3. From gsea.R:
   - Statistical analysis approaches
   - Matrix manipulation techniques

### Key Implementation Differences

1. Memory Management:
   - Added chunked processing for large matrices (see the sketch after this list)
   - Implemented parallel processing with ProcessPoolExecutor

2. Extended Functionality:
   - Added a combined matrix mode
   - Improved logging system
   - Command-line interface

3. Data Structure Updates:
   - Uses pandas DataFrames instead of R matrices
   - Optimized memory handling for large datasets

4. Additional Features:
   - More extensive error checking
   - Progress reporting
   - Configurable preprocessing options
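
To make the memory-management point concrete, here is a condensed sketch of the chunked correlation loop used by the network script (simplified from `calculate_correlations_corto`; the chunk size of 1000 matches the implementation):

```python
import numpy as np
import pandas as pd

def chunked_correlations(expr: pd.DataFrame, met: pd.DataFrame, chunk_size: int = 1000):
    """Yield gene-by-metabolite correlation blocks one chunk at a time."""
    for start in range(0, len(expr), chunk_size):
        chunk = expr.iloc[start:start + chunk_size]
        # np.corrcoef stacks both inputs row-wise; slice out the cross block
        corr = np.corrcoef(chunk, met)[:len(chunk), len(chunk):]
        yield pd.DataFrame(corr, index=chunk.index, columns=met.index)
```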

## Installation

```bash
# Clone the repository
git clone https://github.com/yourusername/corto-metabolomics.git

# Install required packages
pip install -r requirements.txt
```

## Usage

### Data Preparation

```bash
python corto-data-prep.py \
    --metabolomics_file data/metabolomics.csv \
    --expression_file data/expression.txt \
    --cnv_file data/cnv.csv \
    --normalization standard \
    --outlier_detection zscore \
    --imputation knn
```

### Network Analysis

```bash
python corto-matrix-combination.py \
    --mode corto \
    --expression_file prepared_expression.csv \
    --metabolomics_file prepared_metabolomics.csv \
    --p_threshold 1e-30 \
    --nbootstraps 100 \
    --nthreads 4 \
    --verbose
```

## Key Features

### Data Preprocessing
- Zero-variance feature removal
- CNV correction
- Outlier detection
- Missing value imputation
- Sample alignment
- Quality control metrics

### Network Analysis
- Two analysis modes:
  - 'corto': the original approach, keeping the matrices separate
  - 'combined': a matrix-combination approach for higher-order relationships
- Parallel processing for bootstraps
- Memory-efficient chunked processing
- Comprehensive result reporting

## Output Files

The pipeline generates several output files:

1. Preprocessed Data:
   - `prepared_metabolomics.csv`
   - `prepared_expression.csv`
   - `prepared_metrics.txt`

2. Network Analysis:
   - `corto_network_{mode}.csv`: network edges and statistics
   - `corto_regulon_{mode}.txt`: regulon object with relationship details
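
For orientation, the regulon file serializes a nested mapping keyed by source feature; in 'corto' mode each entry pairs signed correlations with bootstrap support. The feature names and values below are illustrative, not real output:

```python
regulon = {
    'ENSG00000141510_ENST00000269305': {                     # hypothetical gene/transcript key
        'tfmode': {'glutamine': 0.82, 'lactate': -0.67},     # signed correlations
        'likelihood': {'glutamine': 0.95, 'lactate': 0.41},  # bootstrap support
    }
}
```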
656
corto-data-prep.py
Normal file
@@ -0,0 +1,656 @@
#!/usr/bin/env python3
"""
CCLE Data Preparation Pipeline for Metabolomics Analysis

This script prepares metabolomics and gene expression data for analysis with the corto algorithm.
It ensures compatibility with corto's requirements while providing optional additional preprocessing steps.

Basic Usage:
    python corto-data-prep.py --metabolomics_file data/metabolomics.csv --expression_file data/expression.txt

Advanced Usage with Additional Preprocessing:
    python corto-data-prep.py --metabolomics_file data/metabolomics.csv \
        --expression_file data/expression.txt \
        --cnv_file data/cnv.csv \
        --normalization standard \
        --outlier_detection zscore \
        --imputation knn

For detailed information about options, use the --help flag.
"""

import argparse
import logging
import warnings
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple, Any

import pandas as pd
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler, RobustScaler
from sklearn.impute import KNNImputer


@dataclass
class DataQualityMetrics:
    """Track data quality metrics through processing"""
    initial_shape: Tuple[int, int]
    final_shape: Tuple[int, int]
    removed_features: List[str]
    zero_var_features: List[str]
    missing_value_counts: Dict[str, int]
    extreme_value_counts: Dict[str, int]
    sample_correlations: Optional[pd.Series]
    processing_steps: List[str]


@dataclass
class PreprocessingConfig:
    """Configuration for preprocessing steps"""
    # Corto-compatible preprocessing
    remove_zero_variance: bool = True
    min_variance: float = 1e-10
    remove_duplicates: bool = True
    cnv_correction: bool = True

    # Centroid detection parameters
    centroid_detection_threshold: float = 0.1  # Fraction of features to select as centroids (0.1 = top 10%)

    # Additional preprocessing (disabled by default)
    normalization: Optional[str] = None      # ['standard', 'robust', 'log']
    feature_selection: Optional[str] = None  # ['variance', 'cv']
    outlier_detection: Optional[str] = None  # ['zscore', 'iqr']
    imputation: Optional[str] = None         # ['mean', 'median', 'knn']

    # Processing options
    save_intermediate: bool = False
    dry_run: bool = False
    n_jobs: int = 1

    # Thresholds
    min_samples_threshold: float = 0.5
    outlier_threshold: float = 3.0
    feature_selection_threshold: float = 0.5
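
# Example (assumed usage, not part of the committed file):
#   PreprocessingConfig(normalization='log', imputation='median')
# enables the optional log transform and median imputation while keeping the
# corto-compatible defaults (zero-variance removal, duplicates, CNV correction).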


class ModularDataPrep:
    """Main class for data preparation pipeline"""

    def __init__(self, config: Optional[PreprocessingConfig] = None):
        self.config = config or PreprocessingConfig()
        self.logger = logging.getLogger(__name__)
        self.metrics: Dict[str, Any] = {}
        self.scalers: Dict[str, Any] = {}
        self.intermediate_data: Dict[str, pd.DataFrame] = {}

    def save_intermediate_step(self, df: pd.DataFrame, name: str, step: str) -> None:
        """Save intermediate data if configured"""
        if self.config.save_intermediate:
            output_file = f"intermediate_{name}_{step}.csv"
            df.to_csv(output_file)
            self.logger.info(f"Saved intermediate data to {output_file}")
            self.intermediate_data[f"{name}_{step}"] = df

    def validate_ccle_format(self, df: pd.DataFrame, data_type: str) -> None:
        """
        Validate expected CCLE data format

        Args:
            df: Input dataframe
            data_type: Type of data ('metabolomics', 'expression', 'cnv')

        Raises:
            ValueError: If data format doesn't match CCLE requirements
        """
        if df.empty:
            raise ValueError(f"Empty dataframe provided for {data_type}")

        if df.isna().all().all():
            raise ValueError(f"All values are NA in {data_type} data")

        if data_type == 'metabolomics':
            if 'CCLE_ID' not in df.columns:
                raise ValueError("Metabolomics data must have CCLE_ID column")

        elif data_type == 'expression':
            # Both ID columns are required, so test with issubset (intersection
            # would pass when only one of the two columns is present)
            if not {'gene_id', 'transcript_id'}.issubset(df.columns):
                raise ValueError("Expression data must have gene_id and transcript_id columns")

        # Check for numeric data after removing ID columns
        id_cols = []
        if data_type == 'metabolomics':
            id_cols = ['CCLE_ID']
        elif data_type == 'expression':
            id_cols = ['gene_id', 'transcript_id']

        data_cols = df.drop(columns=[col for col in id_cols if col in df.columns])
        if not data_cols.select_dtypes(include=[np.number]).columns.any():
            raise ValueError(f"No numeric data columns found in {data_type} data")

    def preprocess_ccle_data(self, df: pd.DataFrame, data_type: str) -> pd.DataFrame:
        """
        Preprocess CCLE format data to get a numeric matrix

        Args:
            df: Input dataframe
            data_type: Type of data ('metabolomics', 'expression', 'cnv')

        Returns:
            Preprocessed numeric dataframe
        """
        self.logger.info(f"Preprocessing {data_type} data")

        if data_type == 'metabolomics':
            # For metabolomics, set CCLE_ID as index and drop DepMap_ID
            if 'CCLE_ID' in df.columns:
                # Drop DepMap_ID if it exists and keep only numeric columns
                columns_to_drop = ['DepMap_ID'] if 'DepMap_ID' in df.columns else []
                df = df.set_index('CCLE_ID').drop(columns=columns_to_drop)

                # Convert all remaining columns to numeric
                numeric_df = df.apply(pd.to_numeric, errors='coerce')
                self.logger.info("Processed metabolomics data to numeric format")
                return numeric_df

        elif data_type == 'expression':
            # For expression data, set gene/transcript IDs as a multi-index
            if {'gene_id', 'transcript_id'}.issubset(df.columns):
                df = df.set_index(['gene_id', 'transcript_id'])
                # Convert all remaining columns to numeric
                numeric_df = df.apply(pd.to_numeric, errors='coerce')
                self.logger.info("Processed expression data to numeric format")
                return numeric_df

        elif data_type == 'cnv':
            # No dedicated ID columns are expected here; coerce values to numeric
            # so CNV input does not fall through to the error below
            numeric_df = df.apply(pd.to_numeric, errors='coerce')
            self.logger.info("Processed cnv data to numeric format")
            return numeric_df

        # If we reached here without returning, something went wrong
        raise ValueError(f"Could not process {data_type} data into numeric format")

    def remove_zero_variance_features(self, df: pd.DataFrame, name: str) -> pd.DataFrame:
        """Remove features with variance below the configured threshold"""
        variances = df.var()
        zero_var_features = variances[variances <= self.config.min_variance].index.tolist()
        if zero_var_features:
            self.logger.info(f"Removing {len(zero_var_features)} zero variance features from {name}")
            df = df.drop(columns=zero_var_features)
        self.metrics[f"{name}_zero_var_features"] = zero_var_features
        return df

    def normalize_data(self, df: pd.DataFrame, name: str) -> pd.DataFrame:
        """Apply the selected normalization method"""
        if self.config.normalization == 'standard':
            scaler = StandardScaler()
        elif self.config.normalization == 'robust':
            scaler = RobustScaler()
        elif self.config.normalization == 'log':
            return np.log1p(df)  # log1p handles zeros gracefully
        else:
            return df

        self.scalers[name] = scaler
        return pd.DataFrame(
            scaler.fit_transform(df),
            index=df.index,
            columns=df.columns
        )

    def handle_outliers(self, df: pd.DataFrame, name: str) -> pd.DataFrame:
        """Handle outliers using the selected method"""
        if self.config.outlier_detection == 'zscore':
            # nan_policy='omit' so pre-existing NaNs don't blank out whole columns
            z_scores = stats.zscore(df, nan_policy='omit')
            outlier_mask = abs(z_scores) > self.config.outlier_threshold
        elif self.config.outlier_detection == 'iqr':
            Q1 = df.quantile(0.25)
            Q3 = df.quantile(0.75)
            IQR = Q3 - Q1
            outlier_mask = (df < (Q1 - 1.5 * IQR)) | (df > (Q3 + 1.5 * IQR))
        else:
            return df

        # Replace outliers with NaN for later imputation
        df[outlier_mask] = np.nan
        return df

    def impute_missing_values(self, df: pd.DataFrame, name: str) -> pd.DataFrame:
        """Impute missing values using the selected method"""
        if self.config.imputation == 'mean':
            return df.fillna(df.mean())
        elif self.config.imputation == 'median':
            return df.fillna(df.median())
        elif self.config.imputation == 'knn':
            imputer = KNNImputer(n_neighbors=5)
            return pd.DataFrame(
                imputer.fit_transform(df),
                index=df.index,
                columns=df.columns
            )
        return df

    def detect_centroids(self, expression_data: pd.DataFrame) -> List[str]:
        """
        Auto-detect potential centroids from expression data based on network properties.

        This method identifies potential centroids by:
        1. Calculating feature variance (higher variance = more informative)
        2. Calculating feature connectivity (correlation with other features)
        3. Scoring features based on both variance and connectivity
        4. Selecting the top N% as centroids, where N is defined by centroid_detection_threshold

        Args:
            expression_data: Expression matrix (features as rows, samples as columns)

        Returns:
            List of detected centroid feature names

        Note:
            The centroid_detection_threshold parameter (default 0.1 = 10%) determines
            what fraction of features are selected as centroids. Higher values will
            select more centroids but may include less informative features.
        """
        # Calculate variance for each feature (features are rows, so axis=1;
        # centroid names must match the feature index used in apply_cnv_correction)
        variances = expression_data.var(axis=1)

        # Calculate connectivity (correlation with other features)
        connectivity = expression_data.T.corr().abs().sum()

        # Score features based on variance and connectivity
        scores = variances * connectivity

        # Select the top N% as centroids
        num_centroids = int(len(scores) * self.config.centroid_detection_threshold)
        centroids = scores.nlargest(num_centroids).index.tolist()

        self.logger.info(
            f"Detected {len(centroids)} potential centroids "
            f"(top {self.config.centroid_detection_threshold*100:.1f}% of features)"
        )
        return centroids

    def select_features(self, df: pd.DataFrame, name: str) -> pd.DataFrame:
        """Select features using the specified method"""
        if self.config.feature_selection == 'variance':
            selector = df.var()
            threshold = np.percentile(selector, self.config.feature_selection_threshold * 100)
            selected = selector[selector >= threshold].index
        elif self.config.feature_selection == 'cv':
            cv = df.std() / df.mean()
            threshold = np.percentile(cv, self.config.feature_selection_threshold * 100)
            selected = cv[cv >= threshold].index
        else:
            return df

        return df[selected]

    def preprocess_matrix(self, df: pd.DataFrame, name: str) -> pd.DataFrame:
        """Process a single matrix through all selected preprocessing steps"""
        if self.config.dry_run:
            self.logger.info(f"\nDry run: would preprocess {name} matrix with steps:")
            steps = []
            if self.config.remove_zero_variance:
                steps.append("- Remove zero variance features")
            if self.config.remove_duplicates:
                steps.append("- Remove duplicates")
            if self.config.normalization:
                steps.append(f"- Apply {self.config.normalization} normalization")
            if self.config.outlier_detection:
                steps.append(f"- Detect outliers using {self.config.outlier_detection}")
            if self.config.imputation:
                steps.append(f"- Impute missing values using {self.config.imputation}")
            if self.config.feature_selection:
                steps.append(f"- Select features using {self.config.feature_selection}")

            for step in steps:
                self.logger.info(step)
            return df

        self.logger.info(f"\nPreprocessing {name} matrix")
        processed = df.copy()
        steps = []

        # Corto-compatible preprocessing
        if self.config.remove_zero_variance:
            processed = self.remove_zero_variance_features(processed, name)
            steps.append('zero_variance_removal')
            self.save_intermediate_step(processed, name, 'zero_var_removed')

        if self.config.remove_duplicates:
            processed = processed[~processed.index.duplicated(keep='first')]
            steps.append('duplicate_removal')
            self.save_intermediate_step(processed, name, 'duplicates_removed')

        # Additional preprocessing steps
        if self.config.normalization:
            processed = self.normalize_data(processed, name)
            steps.append(f'normalization_{self.config.normalization}')
            self.save_intermediate_step(processed, name, 'normalized')

        if self.config.outlier_detection:
            processed = self.handle_outliers(processed, name)
            steps.append(f'outlier_detection_{self.config.outlier_detection}')
            self.save_intermediate_step(processed, name, 'outliers_handled')

        if self.config.imputation:
            processed = self.impute_missing_values(processed, name)
            steps.append(f'imputation_{self.config.imputation}')
            self.save_intermediate_step(processed, name, 'imputed')

        if self.config.feature_selection:
            processed = self.select_features(processed, name)
            steps.append(f'feature_selection_{self.config.feature_selection}')
            self.save_intermediate_step(processed, name, 'features_selected')

        self.metrics[f"{name}_processing_steps"] = steps
        return processed

    def apply_cnv_correction(
        self,
        expression_data: pd.DataFrame,
        cnv_data: pd.DataFrame,
        centroids: List[str]
    ) -> pd.DataFrame:
        """
        Correct expression data based on CNV data, following corto's approach

        Args:
            expression_data: Expression matrix
            cnv_data: Copy number variation matrix
            centroids: List of centroid feature names

        Returns:
            Corrected expression matrix
        """
        self.logger.info("Applying CNV correction")

        # Get common features and samples
        common_features = list(set(expression_data.index) & set(cnv_data.index))
        common_samples = list(set(expression_data.columns) & set(cnv_data.columns))

        if len(common_features) <= 1:
            raise ValueError("One or fewer features in common between CNV and expression data")
        if len(common_samples) <= 1:
            raise ValueError("One or fewer samples in common between CNV and expression data")

        # Subset data to common elements
        expr = expression_data.loc[common_features, common_samples]
        cnv = cnv_data.loc[common_features, common_samples]

        # Get targets (non-centroids)
        targets = list(set(common_features) - set(centroids))

        # Correct expression based on CNV for targets only
        target_expr = expr.loc[targets]
        target_cnv = cnv.loc[targets]

        self.logger.info(f"Calculating residuals for {len(targets)} target features")

        # Calculate residuals for each target
        corrected_targets = pd.DataFrame(index=target_expr.index, columns=target_expr.columns)
        for feature in targets:
            # Fit linear model: expression ~ cnv
            X = target_cnv.loc[feature].values.reshape(-1, 1)
            y = target_expr.loc[feature].values
            model = LinearRegression()
            model.fit(X, y)

            # Keep the residuals (expression not explained by copy number)
            residuals = y - model.predict(X)
            corrected_targets.loc[feature] = residuals

        # Replace target values with residuals
        corrected_expr = expr.copy()
        corrected_expr.loc[targets] = corrected_targets

        self.logger.info("CNV correction complete")
        return corrected_expr

    def prepare_matrices(
        self,
        metabolomics_data: pd.DataFrame,
        expression_data: pd.DataFrame,
        centroids: Optional[List[str]] = None,
        cnv_data: Optional[pd.DataFrame] = None
    ) -> Dict[str, Any]:
        """
        Prepare metabolomics and expression matrices for corto analysis

        Args:
            metabolomics_data: Raw metabolomics data
            expression_data: Raw expression data
            centroids: Optional list of centroid features
            cnv_data: Optional CNV data for correction

        Returns:
            Dictionary containing processed matrices and quality metrics
        """
        # Validate input formats
        self.validate_ccle_format(metabolomics_data, 'metabolomics')
        self.validate_ccle_format(expression_data, 'expression')
        if cnv_data is not None:
            self.validate_ccle_format(cnv_data, 'cnv')

        # Preprocess data into the correct format
        metabolomics_data = self.preprocess_ccle_data(metabolomics_data, 'metabolomics')
        expression_data = self.preprocess_ccle_data(expression_data, 'expression')
        if cnv_data is not None:
            cnv_data = self.preprocess_ccle_data(cnv_data, 'cnv')

        # Process metabolomics data
        processed_met = self.preprocess_matrix(metabolomics_data, 'metabolomics')

        # Process expression data
        processed_exp = self.preprocess_matrix(expression_data, 'expression')

        # Apply CNV correction if data provided
        if cnv_data is not None and self.config.cnv_correction:
            self.logger.info("Applying CNV correction")

            # Use provided centroids or detect them
            if centroids is None:
                centroids = self.detect_centroids(expression_data)
                self.logger.info("Using auto-detected centroids")
            else:
                self.logger.info(f"Using {len(centroids)} provided centroids")

            # Apply CNV correction
            processed_exp = self.apply_cnv_correction(
                processed_exp,
                cnv_data,
                centroids
            )

        return {
            'metabolomics': processed_met,
            'expression': processed_exp,
            'quality_metrics': self.metrics
        }


def parse_arguments() -> argparse.Namespace:
    """Parse command line arguments"""
    parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter
    )

    # Required inputs, with defaults
    parser.add_argument(
        '--metabolomics_file',
        default='CCLE_metabolomics_20190502.csv',
        help='Path to metabolomics data CSV file'
    )

    parser.add_argument(
        '--expression_file',
        default='CCLE_RNAseq_rsem_transcripts_tpm_20180929.txt',
        help='Path to gene expression data file'
    )

    # Optional input/output
    parser.add_argument(
        '--cnv_file',
        help='Path to copy number variation data file (optional)'
    )

    parser.add_argument(
        '--output_prefix',
        default='prepared',
        help='Prefix for output files (default: prepared)'
    )

    # Additional preprocessing options
    parser.add_argument(
        '--normalization',
        choices=['standard', 'robust', 'log'],
        help='Normalization method (optional)'
    )

    parser.add_argument(
        '--outlier_detection',
        choices=['zscore', 'iqr'],
        help='Outlier detection method (optional)'
    )

    parser.add_argument(
        '--centroids',
        required=False,
        help='Optional: comma-separated list of centroid feature names. If not provided, centroids will be auto-detected.'
    )

    parser.add_argument(
        '--centroid_threshold',
        type=float,
        default=0.1,
        # Note: a bare '%' in argparse help breaks help formatting, so spell it out
        help='Fraction of features to select as centroids when auto-detecting (default: 0.1, i.e. the top 10 percent)'
    )

    parser.add_argument(
        '--imputation',
        choices=['mean', 'median', 'knn'],
        help='Missing value imputation method (optional)'
    )

    parser.add_argument(
        '--feature_selection',
        choices=['variance', 'cv'],
        help='Feature selection method (optional)'
    )

    # Processing options
    parser.add_argument(
        '--save_intermediate',
        action='store_true',
        help='Save intermediate data after each processing step'
    )

    parser.add_argument(
        '--dry_run',
        action='store_true',
        help='Preview preprocessing steps without executing'
    )

    parser.add_argument(
        '--n_jobs',
        type=int,
        default=1,
        help='Number of parallel jobs for applicable operations (default: 1)'
    )

    # Logging options
    parser.add_argument(
        '--verbose',
        action='store_true',
        help='Enable verbose logging'
    )

    parser.add_argument(
        '--log_file',
        help='Path to log file (optional, default: console output)'
    )

    return parser.parse_args()


def main() -> Dict[str, Any]:
    """Main function to run the preprocessing pipeline"""
    # Parse arguments
    args = parse_arguments()

    # Set up logging
    log_level = logging.INFO if args.verbose else logging.WARNING
    log_config = {
        'level': log_level,
        'format': '%(asctime)s - %(levelname)s - %(message)s'
    }

    if args.log_file:
        log_config['filename'] = args.log_file

    logging.basicConfig(**log_config)

    # Create preprocessing configuration from arguments
    config = PreprocessingConfig(
        normalization=args.normalization,
        outlier_detection=args.outlier_detection,
        imputation=args.imputation,
        feature_selection=args.feature_selection,
        save_intermediate=args.save_intermediate,
        dry_run=args.dry_run,
        n_jobs=args.n_jobs,
        centroid_detection_threshold=args.centroid_threshold
    )

    try:
        # Initialize preprocessor
        prep = ModularDataPrep(config)

        # Read input data
        logging.info(f"Reading metabolomics data from {args.metabolomics_file}")
        met_df = pd.read_csv(args.metabolomics_file)

        logging.info(f"Reading expression data from {args.expression_file}")
        exp_df = pd.read_csv(args.expression_file, sep='\t')

        cnv_df = None
        if args.cnv_file:
            logging.info(f"Reading CNV data from {args.cnv_file}")
            cnv_df = pd.read_csv(args.cnv_file)

        # Prepare matrices
        centroids = args.centroids.split(',') if args.centroids else None
        prepared_data = prep.prepare_matrices(
            met_df,
            exp_df,
            centroids=centroids,  # optional; auto-detected when omitted
            cnv_data=cnv_df
        )

        # Save processed data
        metabolomics_out = f"{args.output_prefix}_metabolomics.csv"
        expression_out = f"{args.output_prefix}_expression.csv"
        metrics_out = f"{args.output_prefix}_metrics.txt"

        prepared_data['metabolomics'].to_csv(metabolomics_out)
        prepared_data['expression'].to_csv(expression_out)

        # Save quality metrics
        with open(metrics_out, 'w') as f:
            f.write("Data Preparation Metrics\n")
            f.write("========================\n")
            metrics = prepared_data['quality_metrics']
            for metric_name, metric_value in metrics.items():
                if isinstance(metric_value, (list, dict)):
                    f.write(f"\n{metric_name}:\n")
                    if isinstance(metric_value, list):
                        for item in metric_value:
                            f.write(f"  - {item}\n")
                    else:
                        for k, v in metric_value.items():
                            f.write(f"  {k}: {v}\n")
                else:
                    f.write(f"{metric_name}: {metric_value}\n")

        logging.info(f"Processed data saved to {metabolomics_out} and {expression_out}")
        logging.info(f"Quality metrics saved to {metrics_out}")

        return prepared_data

    except Exception as e:
        logging.error(f"Error in preprocessing pipeline: {str(e)}")
        raise


if __name__ == "__main__":
    main()
410
corto-matrix-combination.py
Normal file
@@ -0,0 +1,410 @@
import argparse
import logging
import warnings
from concurrent.futures import ProcessPoolExecutor
from typing import Dict, List, Optional, Tuple, Literal

import pandas as pd
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression

warnings.filterwarnings('ignore')


def setup_logger(verbose: bool = False) -> logging.Logger:
    """Set up logging configuration"""
    logger = logging.getLogger('CortoNetwork')
    logger.setLevel(logging.INFO if verbose else logging.WARNING)

    # Create a console handler with formatting
    handler = logging.StreamHandler()
    formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
    handler.setFormatter(formatter)
    logger.addHandler(handler)

    return logger


def load_data(expression_file: str, metabolite_file: str, logger: logging.Logger) -> Tuple[pd.DataFrame, pd.DataFrame]:
    """Load and preprocess data files"""
    logger.info("Loading expression data...")
    exp_df = pd.read_csv(expression_file)

    # Set multi-index and convert to a numeric matrix
    logger.info("Processing expression data...")
    exp_df.set_index(['gene_id', 'transcript_id'], inplace=True)
    exp_df = exp_df.apply(pd.to_numeric, errors='coerce')
    exp_df.index = [f"{idx[0]}_{idx[1]}" for idx in exp_df.index]

    logger.info(f"Expression matrix shape: {exp_df.shape}")

    # Load metabolite data
    logger.info("Loading metabolomics data...")
    met_df = pd.read_csv(metabolite_file)

    logger.info("Processing metabolomics data...")
    met_df.set_index('CCLE_ID', inplace=True)
    met_df = met_df.select_dtypes(include=[np.number])
    met_df = met_df.T

    logger.info(f"Metabolomics matrix shape: {met_df.shape}")

    # Align samples
    common_samples = list(set(exp_df.columns) & set(met_df.columns))
    if not common_samples:
        raise ValueError("No common samples between matrices")

    logger.info(f"Found {len(common_samples)} common samples")
    exp_df = exp_df[common_samples]
    met_df = met_df[common_samples]

    return exp_df, met_df


def remove_zero_variance(df: pd.DataFrame, logger: logging.Logger) -> pd.DataFrame:
    """Remove features with zero variance"""
    logger.info(f"Checking variance in matrix of shape {df.shape}")
    variances = df.var(axis=1)
    keep = variances[variances > 0].index
    logger.info(f"Keeping {len(keep)} features with non-zero variance")
    return df.loc[keep]


def p2r(p: float, n: int) -> float:
    """Convert p-value to correlation coefficient threshold"""
    t = stats.t.ppf(p / 2, df=n - 2, loc=0, scale=1)
    r = np.sqrt((t ** 2) / (n - 2 + t ** 2))
    return r
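
# Note on p2r: SciPy's ppf is the lower-tail quantile, so t above is the negative
# of R's qt(p/2, df=n-2, lower.tail=FALSE); since t is squared, the results agree.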


def calculate_correlations_corto(expression_df: pd.DataFrame,
                                 metabolite_df: pd.DataFrame,
                                 r_threshold: float,
                                 logger: logging.Logger) -> pd.DataFrame:
    """Calculate correlations keeping the matrices separate (corto approach)"""
    logger.info("Calculating correlations...")

    # Calculate correlations in chunks to save memory
    chunk_size = 1000  # Adjust based on available memory
    n_chunks = int(np.ceil(len(expression_df) / chunk_size))

    edges = []
    for i in range(n_chunks):
        start_idx = i * chunk_size
        end_idx = min((i + 1) * chunk_size, len(expression_df))

        logger.info(f"Processing chunk {i+1}/{n_chunks}")
        exp_chunk = expression_df.iloc[start_idx:end_idx]

        # Calculate correlations for this chunk
        chunk_corr = pd.DataFrame(
            np.corrcoef(exp_chunk, metabolite_df)[
                :exp_chunk.shape[0],
                exp_chunk.shape[0]:
            ],
            index=exp_chunk.index,
            columns=metabolite_df.index
        )

        # Find significant correlations
        for gene in chunk_corr.index:
            for metabolite in chunk_corr.columns:
                corr = chunk_corr.loc[gene, metabolite]
                if abs(corr) >= r_threshold:
                    edges.append({
                        'source': gene,
                        'target': metabolite,
                        'correlation': corr,
                        'type': 'gene_metabolite'
                    })

        # Clear memory
        del chunk_corr

    logger.info(f"Found {len(edges)} significant correlations")
    return pd.DataFrame(edges)


def calculate_correlations_combined(expression_df: pd.DataFrame,
                                    metabolite_df: pd.DataFrame,
                                    r_threshold: float,
                                    logger: logging.Logger) -> pd.DataFrame:
    """Calculate correlations using the combined matrix approach"""
    logger.info("Combining matrices...")

    # Add prefixes and combine
    exp_prefixed = expression_df.copy()
    exp_prefixed.index = 'GENE_' + exp_prefixed.index

    met_prefixed = metabolite_df.copy()
    met_prefixed.index = 'MET_' + met_prefixed.index

    combined_df = pd.concat([exp_prefixed, met_prefixed])

    logger.info("Calculating correlations...")

    edges = []
    chunk_size = 1000
    n_chunks = int(np.ceil(len(combined_df) / chunk_size))

    for i in range(n_chunks):
        start_idx = i * chunk_size
        end_idx = min((i + 1) * chunk_size, len(combined_df))

        logger.info(f"Processing chunk {i+1}/{n_chunks}")
        chunk = combined_df.iloc[start_idx:end_idx]

        chunk_corr = pd.DataFrame(
            np.corrcoef(chunk, combined_df)[
                :chunk.shape[0],
                chunk.shape[0]:
            ],
            index=chunk.index,
            columns=combined_df.index
        )

        for source in chunk_corr.index:
            for target in chunk_corr.columns:
                if source < target:  # Only take the upper triangle (each pair once)
                    corr = chunk_corr.loc[source, target]
                    if abs(corr) >= r_threshold:
                        edge_type = ('gene_gene' if 'GENE_' in source and 'GENE_' in target
                                     else 'metabolite_metabolite' if 'MET_' in source and 'MET_' in target
                                     else 'gene_metabolite')
                        edges.append({
                            'source': source,
                            'target': target,
                            'correlation': corr,
                            'type': edge_type
                        })

        del chunk_corr

    logger.info(f"Found {len(edges)} significant correlations")
    return pd.DataFrame(edges)


def bootstrap_network(expression_df: pd.DataFrame,
                      metabolite_df: pd.DataFrame,
                      r_threshold: float,
                      seed: int,
                      logger: logging.Logger) -> List[str]:
    """Run a single bootstrap iteration"""
    np.random.seed(seed)

    # Sample columns (samples) with replacement
    sample_idx = np.random.choice(
        expression_df.shape[1],
        size=expression_df.shape[1],
        replace=True
    )

    # Resampled matrices
    boot_expression = expression_df.iloc[:, sample_idx]
    boot_metabolite = metabolite_df.iloc[:, sample_idx]

    # Calculate correlations for the bootstrap sample
    edges = []
    chunk_size = 1000  # Process in chunks to save memory
    n_chunks = int(np.ceil(len(boot_expression) / chunk_size))

    for i in range(n_chunks):
        start_idx = i * chunk_size
        end_idx = min((i + 1) * chunk_size, len(boot_expression))

        exp_chunk = boot_expression.iloc[start_idx:end_idx]

        # Calculate correlations for this chunk
        chunk_corr = pd.DataFrame(
            np.corrcoef(exp_chunk, boot_metabolite)[
                :exp_chunk.shape[0],
                exp_chunk.shape[0]:
            ],
            index=exp_chunk.index,
            columns=boot_metabolite.index
        )

        # Find significant correlations
        for gene in chunk_corr.index:
            for metabolite in chunk_corr.columns:
                corr = chunk_corr.loc[gene, metabolite]
                if abs(corr) >= r_threshold:
                    edges.append({
                        'source': gene,
                        'target': metabolite,
                        'correlation': corr
                    })

    # Find the strongest connection for each target
    winners = []
    edge_df = pd.DataFrame(edges)
    if not edge_df.empty:
        for target in edge_df['target'].unique():
            target_edges = edge_df[edge_df['target'] == target]
            if not target_edges.empty:
                winner = target_edges.loc[target_edges['correlation'].abs().idxmax()]
                winners.append(f"{winner['source']}_{winner['target']}")

    return winners
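

# NOTE: main() below dispatches to a bootstrap_network_combined function in
# 'combined' mode, but no such function appears in this diff. The sketch below
# is an assumed minimal implementation, mirroring bootstrap_network over the
# prefixed combined matrix; it is not the committed code. Winners are keyed as
# "{source}_{target}" with the pair sorted, matching the combined-mode edge
# list's source < target ordering.
def bootstrap_network_combined(expression_df: pd.DataFrame,
                               metabolite_df: pd.DataFrame,
                               r_threshold: float,
                               seed: int,
                               logger: logging.Logger) -> List[str]:
    """Single bootstrap iteration for combined-matrix mode (assumed sketch)."""
    np.random.seed(seed)

    # Resample samples (columns) with replacement, keeping both matrices aligned
    sample_idx = np.random.choice(
        expression_df.shape[1], size=expression_df.shape[1], replace=True
    )

    # Build the same prefixed combined matrix used by calculate_correlations_combined
    exp_prefixed = expression_df.iloc[:, sample_idx].copy()
    exp_prefixed.index = 'GENE_' + exp_prefixed.index
    met_prefixed = metabolite_df.iloc[:, sample_idx].copy()
    met_prefixed.index = 'MET_' + met_prefixed.index
    combined = pd.concat([exp_prefixed, met_prefixed])

    # Full correlation matrix of the combined features (chunk this for large inputs)
    corr = pd.DataFrame(
        np.corrcoef(combined),
        index=combined.index,
        columns=combined.index
    )

    # Keep the single strongest partner per target, as bootstrap_network does
    winners = []
    for target in corr.columns:
        partners = corr[target].drop(labels=target).abs()
        best = partners.idxmax()
        if partners[best] >= r_threshold:
            a, b = sorted((best, target))
            winners.append(f"{a}_{b}")
    return winners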


def main(args):
    # Set up logging
    logger = setup_logger(args.verbose)

    logger.info(f"Starting corto network analysis in {args.mode} mode...")

    try:
        # Load data
        expression_df, metabolite_df = load_data(args.expression_file, args.metabolomics_file, logger)

        # Remove zero variance features
        expression_df = remove_zero_variance(expression_df, logger)
        metabolite_df = remove_zero_variance(metabolite_df, logger)

        # Calculate the correlation threshold
        r_threshold = p2r(args.p_threshold, len(metabolite_df.columns))
        logger.info(f"Using correlation threshold: {r_threshold}")

        # Calculate initial correlations based on mode
        if args.mode == 'corto':
            edge_df = calculate_correlations_corto(
                expression_df,
                metabolite_df,
                r_threshold,
                logger
            )
        else:
            edge_df = calculate_correlations_combined(
                expression_df,
                metabolite_df,
                r_threshold,
                logger
            )

        # Store valid pairs for bootstrapping
        valid_pairs = set(f"{row['source']}_{row['target']}" for _, row in edge_df.iterrows())

        # Initialize occurrence tracking using valid pairs
        occurrences = pd.DataFrame({
            'source': edge_df['source'],
            'target': edge_df['target'],
            'correlation': edge_df['correlation'],
            'type': edge_df['type'],  # edge type as labelled by the correlation step
            'occurrences': 0
        })
        occurrences.index = occurrences['source'] + '_' + occurrences['target']

        # Run bootstraps
        logger.info(f"Running {args.nbootstraps} bootstraps...")

        with ProcessPoolExecutor(max_workers=args.nthreads) as executor:
            futures = [
                executor.submit(
                    bootstrap_network if args.mode == 'corto' else bootstrap_network_combined,
                    expression_df,
                    metabolite_df,
                    r_threshold,
                    i,
                    logger
                )
                for i in range(args.nbootstraps)
            ]

            bootstrap_winners = []
            for future in futures:
                # Only keep winners that were among the original valid pairs
                winners = future.result()
                valid_winners = [w for w in winners if w in valid_pairs]
                bootstrap_winners.extend(valid_winners)

        # Update occurrences
        winner_counts = pd.Series(bootstrap_winners).value_counts()
        occurrences.loc[winner_counts.index, 'occurrences'] += winner_counts

        # Calculate final likelihoods
        occurrences['likelihood'] = occurrences['occurrences'] / args.nbootstraps

        # Create the regulon object
        regulon = {}
        for source in occurrences['source'].unique():
            source_edges = occurrences[occurrences['source'] == source]
            if args.mode == 'corto':
                regulon[source] = {
                    'tfmode': dict(zip(source_edges['target'], source_edges['correlation'])),
                    'likelihood': dict(zip(source_edges['target'], source_edges['likelihood']))
                }
            else:
                # For combined mode, include edge types
                regulon[source] = {
                    'tfmode': dict(zip(source_edges['target'], source_edges['correlation'])),
                    'likelihood': dict(zip(source_edges['target'], source_edges['likelihood'])),
                    'edge_types': dict(zip(source_edges['target'], source_edges['type']))
                }

        # Save results
        logger.info("Saving results...")

        # Save the network with additional stats
        network_file = f'corto_network_{args.mode}.csv'
        regulon_file = f'corto_regulon_{args.mode}.txt'

        occurrences['support'] = occurrences['occurrences'] / args.nbootstraps
        occurrences['abs_correlation'] = abs(occurrences['correlation'])

        # Remove prefixes if in combined mode
        if args.mode == 'combined':
            occurrences['source'] = occurrences['source'].str.replace('GENE_', '').str.replace('MET_', '')
            occurrences['target'] = occurrences['target'].str.replace('GENE_', '').str.replace('MET_', '')

        occurrences.sort_values('abs_correlation', ascending=False).to_csv(network_file)

        # Save the regulon with readable formatting
        with open(regulon_file, 'w') as f:
            f.write("# Corto Regulon Analysis\n")
            f.write(f"# Mode: {args.mode}\n")
            f.write("# Parameters:\n")
            f.write(f"# p-threshold: {args.p_threshold}\n")
            f.write(f"# bootstraps: {args.nbootstraps}\n")
            f.write(f"# edges found: {len(occurrences)}\n\n")

            for source, data in regulon.items():
                source_name = source.replace('GENE_', '').replace('MET_', '') if args.mode == 'combined' else source
                f.write(f"\n{source_name}:\n")
                for key, values in data.items():
                    f.write(f"  {key}:\n")
                    if key == 'edge_types':
                        for target, value in values.items():
                            target_name = target.replace('GENE_', '').replace('MET_', '')
                            f.write(f"    {target_name}: {value}\n")
                    else:
                        sorted_items = sorted(values.items(), key=lambda x: abs(x[1]), reverse=True)
                        for target, value in sorted_items:
                            target_name = target.replace('GENE_', '').replace('MET_', '') if args.mode == 'combined' else target
                            f.write(f"    {target_name}: {value:.4f}\n")

        logger.info("Analysis complete!")
        if args.mode == 'corto':
            logger.info(f"Found {len(occurrences)} significant gene-metabolite relationships")
        else:
            relationship_counts = occurrences['type'].value_counts()
            for rel_type, count in relationship_counts.items():
                logger.info(f"Found {count} significant {rel_type} relationships")

        logger.info(f"Results saved to {network_file} and {regulon_file}")

    except Exception as e:
        logger.error(f"Error during analysis: {str(e)}")
        raise


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Run corto network analysis')

    parser.add_argument('--mode', choices=['corto', 'combined'], default='corto',
                        help='Analysis mode - either corto or combined (default: corto)')
    parser.add_argument('--expression_file', required=True,
                        help='Path to expression data file')
    parser.add_argument('--metabolomics_file', required=True,
                        help='Path to metabolomics data file')
    parser.add_argument('--p_threshold', type=float, default=1e-30,
                        help='P-value threshold')
    parser.add_argument('--nbootstraps', type=int, default=100,
                        help='Number of bootstrap iterations')
    parser.add_argument('--nthreads', type=int, default=4,
                        help='Number of parallel worker processes')
    parser.add_argument('--verbose', action='store_true',
                        help='Print verbose output')

    args = parser.parse_args()
    main(args)