Snakemake - many to one using expand

Problem description

I have a working Snakefile for partitioned heritability that uses multiple BED files. It produces a perfect list of jobs with snakemake -np, so the file should only need one small tweak (I hope!).

My problem arises in the merge_peaks rule below.

At this stage I have 25 BED files and need merge_peaks to run 5 times, once for each extension in ext=[100,200,300,400,500], so each call should take only the 5 BED files with the relevant extension as input.

For example, for the merge_peaks output file peak_files/Fullard2018_peaks.mrgd.blrm.100.bed, I need only the following 5 BED files with ext=100 as input:

peak_files/fullard2018_NpfcATAC_1.blrm.100.bed
peak_files/fullard2018_NpfcATAC_2.blrm.100.bed
peak_files/fullard2018_NpfcATAC_3.blrm.100.bed
peak_files/fullard2018_NpfcATAC_4.blrm.100.bed
peak_files/fullard2018_NpfcATAC_5.blrm.100.bed

Here is my config file:

samples:
    fullard2018_NpfcATAC_1:
    fullard2018_NpfcATAC_2:
    fullard2018_NpfcATAC_3:
    fullard2018_NpfcATAC_4:
    fullard2018_NpfcATAC_5:
index: /home/genomes_and_index_files/hg19.chrom.sizes

Here is the Snakefile:

# read config info into this namespace
configfile: "config.yaml"
print(config['samples'])

rule all:
    input:
        expand("peak_files/{sample}.blrm.{ext}.bed", sample=config['samples'], ext=[100,200,300,400,500]),
        expand("LD_annotation_files/Fullard2018.{ext}.{chr}.l2.ldscore.gz", sample=config['samples'], ext=[100,200,300,400,500], chr=range(1,23))

rule annot2bed:
    input:
        folder = "Reference/baseline"
    params:
        file = "Reference/baseline/baseline.{chr}.annot.gz"
    output:
        "LD_annotation_files/baseline.{chr}_no_head.bed"
    shell:
        "zcat {params.file} | tail -n +2 |awk -v OFS=\"\t\" '{{print \"chr\"$1, $2-1, $2, $3, $4}}' "
        "| sort -k1,1 -k2,2n > {output}"

rule extend_bed:
    input:
        "peak_files/{sample}_peaks.blrm.narrowPeak"
    output:
        "peak_files/{sample}.blrm.{ext}.bed"
    params:
        ext = "{ext}",
        index = config["index"]
    shell:
        "bedtools slop -i {input} -g {params.index} -b {params.ext} > {output}"

rule merge_peaks:
    input:
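        # NB: this expand takes the full product of samples x ext,
        # so all 25 files are listed for every output extension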
        expand("peak_files/{sample}.blrm.{ext}.bed", sample = config['samples'], ext=[100,200,300,400,500])
    output:
        "peak_files/Fullard2018_peaks.mrgd.blrm.{ext}.bed"
    shell:
        "cat {input} | bedtools sort -i stdin | bedtools merge -i stdin > {output}"

rule intersect_mybed:
    input:
        annot = rules.annot2bed.output,
        mybed = rules.merge_peaks.output
    output:
        "LD_annotation_files/Fullard2018.{ext}.{chr}.annot.gz"
    params:
        uncompressed = "LD_annotation_files/Fullard2018.{ext}.{chr}.annot"
    shell:
        "echo -e \"CHR\tBP\tSNP\tCM\tANN\" > {params.uncompressed}; "
        "/share/apps/bedtools intersect -a {input.annot} -b {input.mybed} -c | "
        "sed 's/^chr//g' | awk -v OFS=\"\t\" '{{print $1, $2, $4, $5, $6}}' >> {params.uncompressed}; "
        "gzip {params.uncompressed}"

rule ldsr:
    input:
        annot = "LD_annotation_files/Fullard2018.{ext}.{chr}.annot.gz",
        bfile_folder = "Reference/1000G_plinkfiles",
        snps_folder = "Reference/hapmap3_snps"
    output:
        "LD_annotation_files/Fullard2018.{ext}.{chr}.l2.ldscore.gz"
    conda:
        "envs/p2-ldscore.yaml"
    params:
        bfile = "Reference/1000G_plinkfiles/1000G.mac5eur.{chr}",
        ldscores = "LD_annotation_files/Fullard2018.{ext}.{chr}",
        snps = "Reference/hapmap3_snps/hm.{chr}.snp"
    log:
        "logs/LDSC/Fullard2018.{ext}.{chr}_ldsc.txt"
    shell:
        "export MKL_NUM_THREADS=2;" # Export arguments are  workaround as ldsr uses all available cores
        "export NUMEXPR_NUM_THREADS=2;" # Numbers must match cores parameter in cluster config
        "Reference/ldsc/ldsc.py --l2 --bfile {params.bfile} --ld-wind-cm 1 "
        "--annot {input.annot} --out {params.ldscores} --print-snps {params.snps} 2> {log}"

What currently happens is that all 25 BED files are fed into merge_peaks on every call, whereas each call should only take the 5 with the relevant extension. I am struggling to work out how to use expand correctly (or an alternative) so that only the relevant BED files are included and merged in each call of the rule.
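To make the problem concrete: expand takes the Cartesian product of all the lists it is given, so this input always evaluates to all 25 files, whatever extension the output asks for. A quick plain-Python sketch of what that expand call produces:

# Plain-Python equivalent of the expand() call in merge_peaks;
# expand() combines its keyword lists as a Cartesian product.
from itertools import product

samples = [f'fullard2018_NpfcATAC_{i}' for i in range(1, 6)]
exts = [100, 200, 300, 400, 500]

files = [f'peak_files/{s}.blrm.{e}.bed' for s, e in product(samples, exts)]
print(len(files))  # 25 -- every call of merge_peaks sees all of them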

I think this question asks something similar, but it is not quite what I am after because it does not use a config file - Snakemake: rule for using many inputs for one output with multiple sub-groups

I love Snakemake, but my Python is a bit of an adventure.

Many thanks.

Tags: expand, many-to-one, snakemake

Solution


If I understand correctly, you have managed to create one file per sample per extension (25 files in total), and now you want to merge the files that share the same extension. So the final outputs you need should be:

peak_files/Fullard2018_peaks.mrgd.blrm.100.bed, 
peak_files/Fullard2018_peaks.mrgd.blrm.200.bed, 
peak_files/Fullard2018_peaks.mrgd.blrm.300.bed, 
peak_files/Fullard2018_peaks.mrgd.blrm.400.bed, 
peak_files/Fullard2018_peaks.mrgd.blrm.500.bed

(For testing, I created 25 dummy input files to merge by extension):

mkdir -p peak_files
for i in 100 200 300 400 500
do
    touch peak_files/fullard2018_NpfcATAC_1.blrm.${i}.bed
    touch peak_files/fullard2018_NpfcATAC_2.blrm.${i}.bed
    touch peak_files/fullard2018_NpfcATAC_3.blrm.${i}.bed
    touch peak_files/fullard2018_NpfcATAC_4.blrm.${i}.bed
    touch peak_files/fullard2018_NpfcATAC_5.blrm.${i}.bed
done

This Snakefile should do the job. Of course, you could instead keep samples and exts in config entries (a config-driven sketch follows the explanation below):

samples= ['fullard2018_NpfcATAC_1', 
          'fullard2018_NpfcATAC_2',
          'fullard2018_NpfcATAC_3', 
          'fullard2018_NpfcATAC_4', 
          'fullard2018_NpfcATAC_5']

exts= [100, 200, 300, 400, 500]

rule all:
    input:
        expand('peak_files/Fullard2018_peaks.mrgd.blrm.{ext}.bed', ext= exts),

rule merge_peaks:
    input:
        lambda wildcards: expand('peak_files/{sample}.blrm.{ext}.bed', 
            sample= samples, ext= wildcards.ext),
    output:
        'peak_files/Fullard2018_peaks.mrgd.blrm.{ext}.bed',
    shell:
        r"""
        cat {input} > {output}
        """

The lambda function in merge_peaks says: for each extension ext, give me a list of files, one for each sample in samples.
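If you would rather keep everything in config.yaml, as in the question, the same logic works with a named input function in place of the lambda. A minimal sketch, assuming the question's samples entries plus a hypothetical exts: [100, 200, 300, 400, 500] entry added to the config:

configfile: "config.yaml"

# Named input function, equivalent to the lambda above: for the extension
# of the requested output, return one BED file per sample in the config.
def merge_peaks_input(wildcards):
    return expand('peak_files/{sample}.blrm.{ext}.bed',
                  sample=config['samples'], ext=wildcards.ext)

rule all:
    input:
        expand('peak_files/Fullard2018_peaks.mrgd.blrm.{ext}.bed', ext=config['exts'])

rule merge_peaks:
    input:
        merge_peaks_input
    output:
        'peak_files/Fullard2018_peaks.mrgd.blrm.{ext}.bed'
    shell:
        'cat {input} | bedtools sort -i stdin | bedtools merge -i stdin > {output}'

Either way, snakemake -np should now list five merge_peaks jobs, one per extension, each taking exactly the five BED files that share that extension.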

