**Analysis of Chromosome Conformation Capture data (Hi-C)**.
[![nf-core CI](https://github.com/nf-core/hic/workflows/nf-core%20CI/badge.svg)](https://github.com/nf-core/hic/actions?query=workflow%3A%22nf-core+CI%22)

It was designed to process Hi-C data from raw FastQ files (paired-end Illumina data) to normalized contact maps. The current version supports most protocols, including digestion protocols as well as protocols that do not require restriction enzymes, such as DNase Hi-C. In practice, this workflow has been successfully applied to many datasets, including dilution Hi-C, in situ Hi-C, DNase Hi-C, Micro-C, Capture-C, Capture Hi-C and HiChIP data.

Contact maps are generated in standard formats, including HiC-Pro and cooler, for downstream analysis and visualization. Additional analysis steps such as compartment and TAD calling are also available.
<!-- TODO nf-core: Write a 1-2 sentence summary of what data the pipeline is for and what it does -->
**nf-core/hic** is a bioinformatics best-practice pipeline for the analysis of chromosome conformation capture (Hi-C) data.
The pipeline is built using [Nextflow](https://www.nextflow.io), a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers making installation trivial and results highly reproducible. The [Nextflow DSL2](https://www.nextflow.io/docs/latest/dsl2.html) implementation of this pipeline uses one container per process which makes it much easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from [nf-core/modules](https://github.com/nf-core/modules) in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community!
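
Since each DSL2 process declares its own container, a single tool can be pinned or swapped without touching the rest of the pipeline. A sketch of a custom configuration illustrating the mechanism (the process name and container tag here are hypothetical, not taken from this pipeline):

```nextflow
// custom.config — supply with: nextflow run nf-core/hic -c custom.config ...
process {
    // `withName` targets a single process; only its container is overridden
    withName: 'EXAMPLE_PROCESS' {
        container = 'quay.io/biocontainers/sometool:1.0--0'  // assumed tag
    }
}
```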
<!-- TODO nf-core: Add full-sized test dataset and amend the paragraph below if applicable -->
On release, automated continuous integration tests run the pipeline on a full-sized dataset on the AWS cloud infrastructure. This ensures that the pipeline runs on AWS, has sensible resource allocation defaults set to run on real-world datasets, and permits the persistent storage of results to benchmark between pipeline releases and other analysis sources. The results obtained from the full-sized test can be viewed on the [nf-core website](https://nf-co.re/hic/results).
## Pipeline summary
1. HiC-Pro data processing ([`HiC-Pro`](https://github.com/nservant/HiC-Pro))
   1. Mapping using a two-step strategy to rescue reads spanning the ligation sites

## Quick Start

1. Install [`Nextflow`](https://www.nextflow.io/docs/latest/getstarted.html#installation)

2. Install any of [`Docker`](https://docs.docker.com/engine/installation/), [`Singularity`](https://www.sylabs.io/guides/3.0/user-guide/), [`Podman`](https://podman.io/), [`Shifter`](https://nersc.gitlab.io/development/shifter/how-to-use/) or [`Charliecloud`](https://hpc.github.io/charliecloud/) for full pipeline reproducibility _(please only use [`Conda`](https://conda.io/miniconda.html) as a last resort; see [docs](https://nf-co.re/usage/configuration#basic-configuration-profiles))_
3. Download the pipeline and test it on a minimal dataset with a single command:

```console
nextflow run nf-core/hic -profile test,<docker/singularity/podman/shifter/charliecloud/conda/institute>
```

> * Please check [nf-core/configs](https://github.com/nf-core/configs#documentation) to see if a custom config file to run nf-core pipelines already exists for your Institute. If so, you can simply use `-profile <institute>` in your command. This will enable either `docker` or `singularity` and set the appropriate execution settings for your local compute environment.
> * If you are using `singularity` then the pipeline will auto-detect this and attempt to download the Singularity images directly as opposed to performing a conversion from Docker images. If you are persistently observing issues downloading Singularity images directly due to timeout or network issues then please use the `--singularity_pull_docker_container` parameter to pull and convert the Docker image instead. Alternatively, it is highly recommended to use the [`nf-core download`](https://nf-co.re/tools/#downloading-pipelines-for-offline-use) command to pre-download all of the required containers before running the pipeline and to set the [`NXF_SINGULARITY_CACHEDIR` or `singularity.cacheDir`](https://www.nextflow.io/docs/latest/singularity.html?#singularity-docker-hub) Nextflow options to be able to store and re-use the images from a central location for future pipeline runs.
> * If you are using `conda`, it is highly recommended to use the [`NXF_CONDA_CACHEDIR` or `conda.cacheDir`](https://www.nextflow.io/docs/latest/conda.html) settings to store the environments in a central location for future pipeline runs.
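
The cache locations mentioned in the notes above can be set once, up front, so repeated runs re-use downloaded images and environments. A minimal sketch, assuming a bash-compatible shell (the paths are placeholders, not pipeline defaults):

```shell
# Placeholder cache locations for Singularity images and Conda environments;
# Nextflow picks both variables up from the environment.
export NXF_SINGULARITY_CACHEDIR="$HOME/.nextflow/singularity-cache"
export NXF_CONDA_CACHEDIR="$HOME/.nextflow/conda-cache"
mkdir -p "$NXF_SINGULARITY_CACHEDIR" "$NXF_CONDA_CACHEDIR"
```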
4. Start running your own analysis!

<!-- TODO nf-core: Update the example "typical command" below used to run the pipeline -->

```console
nextflow run nf-core/hic -profile <docker/singularity/podman/shifter/charliecloud/conda/institute> --input samplesheet.csv --genome GRCh37
```
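
The `--input` option above points at a samplesheet; as a rough sketch, a paired-end entry might look like this (the column names are an assumption, not taken from this README — check the pipeline's usage docs for the exact schema of your release):

```shell
# Create a hypothetical one-sample, paired-end samplesheet
# (column names assumed; consult the nf-core/hic usage documentation).
cat > samplesheet.csv <<'EOF'
sample,fastq_1,fastq_2
SAMPLE1,SAMPLE1_R1.fastq.gz,SAMPLE1_R2.fastq.gz
EOF
```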
## Documentation
The nf-core/hic pipeline comes with documentation about the pipeline [usage](https://nf-co.re/hic/usage), [parameters](https://nf-co.re/hic/parameters) and [output](https://nf-co.re/hic/output).
## Credits
nf-core/hic was originally written by Nicolas Servant.
We thank the following people for their extensive assistance in the development of this pipeline:
<!-- TODO nf-core: If applicable, make list of people who have also contributed -->
## Contributions and Support
If you would like to contribute to this pipeline, please see the [contributing guidelines](.github/CONTRIBUTING.md).
For further information or help, don't hesitate to get in touch on the [Slack `#hic` channel](https://nfcore.slack.com/channels/hic) (you can join with [this invite](https://nf-co.re/join/slack)).
## Citations
<!-- TODO nf-core: Add citation for pipeline after first release. Uncomment lines below and update Zenodo doi and badge at the top of this file. -->
<!-- If you use nf-core/hic for your analysis, please cite it using the following doi: [10.5281/zenodo.XXXXXX](https://doi.org/10.5281/zenodo.XXXXXX) -->
<!-- TODO nf-core: Add bibliography of tools and data used in your pipeline -->
An extensive list of references for the tools used by the pipeline can be found in the [`CITATIONS.md`](CITATIONS.md) file.
You can cite the `nf-core` publication as follows:

> **The nf-core framework for community-curated bioinformatics pipelines.**
>
> Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.
>
> _Nat Biotechnol._ 2020 Feb 13. doi: [10.1038/s41587-020-0439-x](https://dx.doi.org/10.1038/s41587-020-0439-x).

In addition, references of tools and data used in this pipeline are as follows:

> **HiC-Pro: An optimized and flexible pipeline for Hi-C data processing.**
>
> Nicolas Servant, Nelle Varoquaux, Bryan R. Lajoie, Eric Viara, Chongjian Chen, Jean-Philippe Vert, Job Dekker, Edith Heard, Emmanuel Barillot.
>
> _Genome Biol._ 2015;16:259. doi: [10.1186/s13059-015-0831-x](https://doi.org/10.1186/s13059-015-0831-x).