MEDIPIPE is designed for automated end-to-end quality control (QC) and analysis of (cf)MeDIP-seq data. The pipeline takes raw FASTQ files all the way through QC, alignment, methylation quantification, and aggregation. It was developed by Yong Zeng and builds on prior work by Wenbin Ye and Eric Zhao.
- Portability: MEDIPIPE was developed with Snakemake, which automatically deploys the execution environments. Users can also specify the software versions to be installed in the YAML files. The pipeline can run on different cluster engines (e.g., SLURM) as well as on stand-alone machines.
- Flexibility: MEDIPIPE can deal with single-end or paired-end reads, with or without spike-in/UMI sequences. It can be applied to individual samples, as well as to the aggregation of multiple samples from large-scale profiling.
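As the Portability point above notes, tool versions can be pinned in the Conda environment YAML files. A minimal illustrative fragment is shown below; the package names and version pins are assumptions for illustration, not the contents of the repo's actual conda_env.yaml:

```yaml
# Illustrative only: pinning software versions in a Conda environment YAML
name: MEDIPIPE
channels:
  - conda-forge
  - bioconda
dependencies:
  - snakemake-minimal=7.32   # version pins here are examples, not the pipeline's actual pins
  - samtools=1.17
```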
Please see and cite the preprint here for now.
This schematic diagram shows how the pipeline works:

- Make sure that you have a Conda-based Python 3 distribution (e.g., Miniconda). The Miniconda3-py38_23.3.1-0-Linux-x86_64.sh installer for Linux is preferred to avoid potential conflicts. Installing Mamba is also recommended:

  ```bash
  $ conda install -n base -c conda-forge mamba
  ```

- Git clone this pipeline:

  ```bash
  $ cd
  $ git clone https://github.com/yzeng-lol/MEDIPIPE
  ```

- Install the pipeline's core environment:

  ```bash
  $ cd MEDIPIPE
  $ conda activate base
  $ mamba env create --file conda_env.yaml
  ```

- Test run

  IMPORTANT: EXTRA ENVIRONMENTS WILL BE INSTALLED; MAKE SURE YOU STILL HAVE INTERNET ACCESS.
- Step 1: Prepare the reference, sample FASTQ, and aggregation files according to the templates.
- Step 2: Specify the input configuration file by following the instructions here.
- NOTE: For the test run, you can simply run the `sed` command below to fill in the files from Steps 1 and 2. This test dataset is designed for a quick installation of the extra environments; it will fail to fit the sigmoid model for MeDEStrand due to the low read depth, but you can still find the other results in ./test/Res.
  ```bash
  $ sed -i 's,/path/to,'"$PWD"',g' ./test/*template.*
  $ conda activate MEDIPIPE
  $ snakemake --snakefile ./workflow/Snakefile \
              --configfile ./test/config_template.yaml \
              --conda-prefix ${CONDA_PREFIX}_extra_env \
              --use-conda --cores 4 -p
  ```
- Step 3 (Optional): You can perform a full-scale test run with another toy dataset by following Steps 1 and 2.
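To see what the `sed` substitution in the note above actually does before running it on the templates, here is a minimal sketch against a throwaway file (the file name and its contents are assumptions for illustration only):

```shell
# Make a throwaway config containing the same /path/to placeholder as the templates
echo 'fastq_dir: /path/to/test/fastq' > demo_config.yaml

# Replace the /path/to placeholder with the current working directory, in place
sed -i 's,/path/to,'"$PWD"',g' demo_config.yaml

# The placeholder now points to a location under $PWD
cat demo_config.yaml
```

Note that commas are used as the `sed` delimiter precisely so that the slashes in `$PWD` do not need escaping.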
- Run on HPCs
You can also submit this pipeline to clusters with the template ./workflow/sbatch_Snakefile_template.sh. This template is written for SLURM, but it can be modified for other resource management systems. More details about cluster configuration can be found here.
```bash
## Test run by SLURM submission
$ sed -i 's,/path/to,'"$PWD"',g' ./workflow/sbatch_Snakefile_template.sh   # replace PATHs for testing
$ sbatch ./workflow/sbatch_Snakefile_template.sh
```
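A submission template of this kind typically looks roughly like the sketch below. The job name, walltime, and memory values are illustrative assumptions, not the actual contents of sbatch_Snakefile_template.sh, and the sketch writes the script out rather than submitting it:

```shell
# Write an example SLURM wrapper script (all #SBATCH values are illustrative assumptions)
cat > sbatch_example.sh <<'EOF'
#!/bin/bash
#SBATCH -J MEDIPIPE_test     ## job name (assumption)
#SBATCH -t 1-00:00:00        ## walltime (assumption)
#SBATCH --mem=16G            ## memory (assumption)

conda activate MEDIPIPE
snakemake --snakefile /path/to/MEDIPIPE/workflow/Snakefile \
          --configfile /path/to/MEDIPIPE/test/config_template.yaml \
          --conda-prefix ${CONDA_PREFIX}_extra_env \
          --use-conda --cores 4 -p
EOF
chmod +x sbatch_example.sh
```

On a real cluster you would then submit it with `sbatch sbatch_example.sh`; for another scheduler (e.g., PBS), the `#SBATCH` directives would be swapped for that scheduler's own.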
Several scripts are enclosed in the assets folder, allowing you to download/build the reference index and manifest table, to forge a BSgenome package for the spike-in controls, and to filter regions for the fragment-profile calculation. Please also see this document for troubleshooting; it will be kept up to date as users report errors.