Commit fa70560

Merge pull request #4 from pughlab/Yong-patch-1
Yong patch 1
2 parents aba40c8 + cedba32 commit fa70560

File tree

14 files changed: +3820 −25 lines

README.md

Lines changed: 6 additions & 4 deletions

```diff
@@ -1,10 +1,10 @@
-# MEDIPIPE: (cf)MeDIP-seq Data QC and Analysis Pipeline (v1.0.0)
+# MEDIPIPE: (cf)MeDIP-seq Data QC and Analysis Pipeline (v1.1.0)
 
 ## Introduction
 MEDIPIPE is designed for automated end-to-end quality control (QC) and analysis of (cf)MeDIP-seq data. The pipeline runs from raw FASTQ files all the way through QC, alignment, methylation quantification and aggregation. The pipeline was developed by [Yong Zeng](mailto:yzeng@uhnresearch.ca) based on prior work by Wenbin Ye and [Eric Zhao](https://github.com/pughlab/cfMeDIP-seq-analysis-pipeline).
 
 ### Features
-- **Portability**: MEDIPIPE was developed with [Snakemake](https://snakemake.readthedocs.io/en/stable/index.html), which will automatically deploy the execution environments. It can also be performed across different cluster engines (e.g. SLURM) or stand-alone machines.
+- **Portability**: MEDIPIPE was developed with [Snakemake](https://snakemake.readthedocs.io/en/stable/index.html), which automatically deploys the execution environments. Users can also specify the versions of software to be installed in the YAML files. It can run across different cluster engines (e.g. SLURM) or on stand-alone machines.
 - **Flexibility**: MEDIPIPE can deal with single-end or paired-end reads, with or without spike-in/UMI sequences. It can be applied to individual samples, as well as to aggregate multiple samples from large-scale profiling.
 
 ### Citation
```
````diff
@@ -16,7 +16,7 @@ This schematic diagram shows you how pipeline works:
 
 
 ## Installation
-1) Make sure that you have a Conda-based Python3 distribution(e.g.,the [Miniconda](https://docs.conda.io/en/latest/miniconda.html)). The Miniconda3-py38_23.3.1-0-Linux-x86_64.sh for Linux is prefered to avoid potential cnflicts. The installation of [Mamba](https://github.com/mamba-org/mamba) is also recommended:
+1) Make sure that you have a Conda-based Python3 distribution (e.g., [Miniconda](https://docs.conda.io/en/latest/miniconda.html)). The Miniconda3-py38_23.3.1-0-Linux-x86_64.sh installer for Linux is preferred to avoid potential conflicts. The installation of [Mamba](https://github.com/mamba-org/mamba) is also recommended:
 
 ```bash
 $ conda install -n base -c conda-forge mamba
````
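The step above assumes mamba ends up on the PATH. A small, hedged sketch of a solver fallback (the environment name `MEDIPIPE` is taken from the activation calls elsewhere in this commit; everything else is illustrative, not part of the repository):

```shell
# Pick mamba when it is available, otherwise fall back to plain conda.
if command -v mamba >/dev/null 2>&1; then
    SOLVER=mamba
else
    SOLVER=conda
fi
echo "Creating the environment with ${SOLVER}"

# Actual creation, left commented out because it downloads packages:
# "${SOLVER}" env create -f conda_env.yaml -n MEDIPIPE
```

Either solver accepts the same `env create -f` arguments, so scripts can stay agnostic about which one is installed.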
````diff
@@ -39,7 +39,7 @@ This schematic diagram shows you how pipeline works:
 > **IMPORTANT**: EXTRA ENVIRONMENTS WILL BE INSTALLED, MAKE SURE YOU STILL HAVE INTERNET ACCESS.
 * **Step 1:** Prepare reference, samples FASTQ and aggregation files according to [templates](./test/README.md).
 * **Step 2:** Specify input configuration file by following the instructions [here](./test/README.md).
-* **NOTE:** For testing run, you can simply run the SED command below to specify files in Step1,2.The toy dataset will fail to fit sigmoid model for MEDStrand due to low read depth, but you can still find other results in ./test/Res.
+* **NOTE:** For a test run, you can simply run the SED command below to fill in the files from Steps 1 and 2. This test dataset is intended for a quick installation of the extra environments; it will fail to fit the sigmoid model for MEDStrand due to low read depth, but you can still find the other results in ./test/Res.
 
 ```bash
 $ sed -i 's,/path/to,'"$PWD"',g' ./test/*template.*
````
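To make the SED one-liner concrete, here is a self-contained sketch on a throwaway file (the file name and its contents are invented for illustration; the real command targets ./test/*template.*):

```shell
# A stand-in template that uses the same /path/to placeholder.
printf 'fastq_dir: /path/to/test/Fastq\n' > demo_template.yaml

# The substitution from the README: swap /path/to for the current directory.
# Commas are used as the sed delimiter so the slashes in $PWD need no escaping.
sed -i 's,/path/to,'"$PWD"',g' demo_template.yaml

cat demo_template.yaml
```

The comma delimiter is what lets the command work even though `$PWD` is full of slashes.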
````diff
@@ -50,6 +50,8 @@ This schematic diagram shows you how pipeline works:
 --use-conda --cores 4 -p
 ```
 
+* **Step 3 (Optional):** You can perform a full-scale test run using another [toy dataset](./test/README.md) by following Steps 1 and 2.
+
 5) Run on HPCs
 
 You can also submit this pipeline to clusters with the template ./workflow/sbatch_Snakefile_template.sh. This template is for SLURM; however, it can be modified for other resource management systems. More details about cluster configuration can be found [here](https://snakemake.readthedocs.io/en/stable/executing/cluster.html).
````
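As a hedged sketch of what such a submission wrapper might contain (the resource values, config file name, and script name below are placeholders, not the actual contents of ./workflow/sbatch_Snakefile_template.sh):

```shell
# Write a minimal SLURM wrapper; it would be submitted with: sbatch sbatch_medipipe.sh
cat > sbatch_medipipe.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=medipipe
#SBATCH --time=24:00:00
#SBATCH --mem=32G
#SBATCH --cpus-per-task=4

# Activate the pipeline environment, then let Snakemake schedule the rules.
source activate MEDIPIPE
snakemake --snakefile ./workflow/Snakefile \
          --configfile ./test/config_template.yaml \
          --use-conda --cores "${SLURM_CPUS_PER_TASK:-4}" -p
EOF
chmod +x sbatch_medipipe.sh
```

Porting to another scheduler mostly means swapping the `#SBATCH` directives for that system's equivalents; the Snakemake invocation stays the same.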

assets/README.md

Lines changed: 3 additions & 3 deletions

````diff
@@ -10,15 +10,15 @@ You can download reference genome, pre-build BWA index and annotated regions (e.
 
 ```bash
 ## eg: ./download_build_reference.sh hg38 /your/genome/data/path/hg38
-$ ./workflow/download_reference.sh [GENOME] [DEST_DIR]
+$ ./assets/Reference/download_reference.sh [GENOME] [DEST_DIR]
 ```
 
 * Build reference genomes index
 If your sequencing libraries come with spike-ins, you can build a new aligner index after combining the spike-in genome with the human genome. The new index information will be appended to the corresponding manifest file.
 
 ```bash
-## eg: ./build_reference_index.sh hg38 ./data/BAC_F19K16_F24B22.fa hg38_BAC_F19K16_F24B22 /your/genome/data/path/hg38
-$ ./workflow/build_reference_index.sh [GENOME] [SPIKEIN_FA] [INDEX_PREFIX] [DEST_DIR]
+## eg: ./assets/Reference/build_reference_index.sh hg38 ./data/BAC_F19K16_F24B22.fa hg38_BAC_F19K16_F24B22 /your/genome/data/path/hg38
+$ ./assets/Reference/build_reference_index.sh [GENOME] [SPIKEIN_FA] [INDEX_PREFIX] [DEST_DIR]
 ```
 
 
````

assets/Reference/build_reference_index.sh

Lines changed: 3 additions & 4 deletions

```diff
@@ -1,9 +1,8 @@
 #!/bin/bash
 
 ## the script will build BWA index for combined human and spike-in genomes.
-## "Usage: ./build_reference_index.sh [GENOME] [SPIKEIN_FA] [INDEX_PREFIX] [DEST_DIR]"
-## "Example: ./build_reference_index.sh hg38 ./data/BAC_F19K16_F24B22.fa hg38_BAC_F19K16_F24B22 /your/genome/data/path/hg38"
-## "Example: ./build_reference_index.sh hg19 ./data/BAC_F19K16_F24B22.fa hg19_BAC_F19K16_F24B22 /cluster/projects/tcge/DB/cfmedip-seq-pepeline/hg19"
+## "Usage: ./assets/Reference/build_reference_index.sh [GENOME] [SPIKEIN_FA] [INDEX_PREFIX] [DEST_DIR]"
+## "Example: ./assets/Reference/build_reference_index.sh hg38 ./assets/Spike-in_genomes/BAC_F19K16_F24B22.fa hg38_BAC_F19K16_F24B22 /your/genome/data/path/hg38"
 
 #################
 ## initialization
@@ -33,7 +32,7 @@ cat ${hg_fa} ${SPIKEIN_FA} > ${DEST_DIR}/${INDEX_PREFIX}.fa
 cd ${DEST_DIR}
 
 echo "=== Building bwa index for merged genomes ..."
-conda activate tcge-cfmedip-seq-pipeline
+conda activate MEDIPIPE
 
 bwa index -a bwtsw ${INDEX_PREFIX}.fa
 
```
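The heart of build_reference_index.sh is the `cat ${hg_fa} ${SPIKEIN_FA}` merge shown in the hunk header above, followed by `bwa index` on the result. A tiny reproduction of the merge step with dummy FASTA files (all names and sequences invented; the bwa call itself is omitted):

```shell
# Dummy stand-ins for the human genome and the spike-in BAC sequences.
printf '>chr1\nACGTACGT\n' > hg_demo.fa
printf '>BAC_F19K16\nTTAACCGG\n' > spikein_demo.fa

# Same merge the script performs before indexing the combined genome.
cat hg_demo.fa spikein_demo.fa > hg_demo_spikein.fa

# Count sequence headers in the merged file: both contigs are present.
grep -c '^>' hg_demo_spikein.fa
```

Because spike-in contigs keep their own headers, reads aligning to them can later be separated from human reads by contig name.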
assets/Reference/download_reference.sh

Lines changed: 8 additions & 8 deletions

```diff
@@ -7,8 +7,8 @@
 ## "A TSV file [DEST_DIR]/[GENOME].tsv will be generated. Use it for pipeline."
 ## "Supported genomes: hg19 and hg38"; Arabidopsis TAIR10 genome will be downloaded,
 ## as well as building bwa index for merged genomes.
-## "Usage: ./download_build_reference.sh [GENOME] [DEST_DIR]"
-## "Example: ./download_build_reference.sh hg38 /your/genome/data/path/hg38"
+## "Usage: ./assets/Reference/download_build_reference.sh [GENOME] [DEST_DIR]"
+## "Example: ./assets/Reference/download_build_reference.sh hg38 /your/genome/data/path/hg38"
 
 
 #################
@@ -47,7 +47,7 @@ if [[ "${GENOME}" == "hg38" ]]; then
 PROM="https://www.encodeproject.org/files/ENCFF140XLU/@@download/ENCFF140XLU.bed.gz"
 ENH="https://www.encodeproject.org/files/ENCFF212UAV/@@download/ENCFF212UAV.bed.gz"
 
-REF_FA_TAIR10="https://www.arabidopsis.org/download_files/Genes/TAIR10_genome_release/TAIR10_chromosome_files/TAIR10_chr_all.fas"
+#REF_FA_TAIR10="https://www.arabidopsis.org/download_files/Genes/TAIR10_genome_release/TAIR10_chromosome_files/TAIR10_chr_all.fas"
 
 fi
 
@@ -68,7 +68,7 @@ if [[ "${GENOME}" == "hg19" ]]; then
 ENH="https://storage.googleapis.com/encode-pipeline-genome-data/hg19/ataqc/reg2map_honeybadger2_dnase_enh_p2.bed.gz"
 
 ## Arabidopsis
-REF_FA_TAIR10="https://www.arabidopsis.org/download_files/Genes/TAIR10_genome_release/TAIR10_chromosome_files/TAIR10_chr_all.fas"
+# REF_FA_TAIR10="https://www.arabidopsis.org/download_files/Genes/TAIR10_genome_release/TAIR10_chromosome_files/TAIR10_chr_all.fas"
 
 fi
 
@@ -84,12 +84,12 @@ wget -c -O $(basename ${REF_MITO_FA}) ${REF_MITO_FA}
 wget -c -O $(basename ${CHRSZ}) ${CHRSZ}
 
 ## TAIR10
-wget -c -O $(basename ${REF_FA_TAIR10}) ${REF_FA_TAIR10}
-sed -i -e 's/^>/>tair10_chr/' TAIR10_chr_all.fas
-gzip TAIR10_chr_all.fas
+#wget -c -O $(basename ${REF_FA_TAIR10}) ${REF_FA_TAIR10}
+#sed -i -e 's/^>/>tair10_chr/' TAIR10_chr_all.fas
+#gzip TAIR10_chr_all.fas
 
 ## combine genomes
-cat $(basename ${REF_FA}) TAIR10_chr_all.fas.gz > ${GENOME}_tair10.fa.gz
+# cat $(basename ${REF_FA}) TAIR10_chr_all.fas.gz > ${GENOME}_tair10.fa.gz
 
 ## annotated regions
 wget -N -c ${BLACKLIST}
```
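The TAIR10 steps this commit comments out relied on a sed rewrite of the FASTA headers so the Arabidopsis contigs stay distinguishable after merging with the human genome. A minimal reproduction on a dummy file (contents invented):

```shell
# Two fake TAIR10 records whose headers are bare chromosome numbers.
printf '>1\nACGT\n>2\nGGCC\n' > tair10_demo.fas

# Same rewrite as the script: prefix every header line with tair10_chr.
sed -i -e 's/^>/>tair10_chr/' tair10_demo.fas

head -n 1 tair10_demo.fas
```

Without the prefix, Arabidopsis chromosome "1" would collide with human "chr1"-style naming conventions downstream.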

conda_env.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -7,6 +7,6 @@ dependencies:
 - snakemake=6.12.3
 - bwa=0.7.17
 - trim-galore=0.6.7
-- samtools=1.9
+- samtools=1.17
 - graphviz=2.40.1
 - picard=2.26.6
```

figures/MEDIPIPE_Flowchart.png

-121 KB

test/Fastq/toy_sample1_R1.fastq.gz

61.7 MB
Binary file not shown.

test/Fastq/toy_sample1_R2.fastq.gz

62.6 MB
Binary file not shown.

test/Fastq/toy_sample2_R1.fastq.gz

61.8 MB
Binary file not shown.

test/Fastq/toy_sample2_R2.fastq.gz

62.7 MB
Binary file not shown.

0 commit comments
