HDInsight/Spark activity in Azure Data Factory v2 has no option to specify the --files parameter for spark-submit

Problem description

I have created an HDInsight cluster (v4, Spark 2.4) in Azure and want to run a Spark .NET application on this cluster via an Azure Data Factory v2 activity. In the Spark activity you can specify the path to the jar, the --class parameter, and arguments to pass to the Spark application. At run time, the arguments are automatically prefixed with "--args". But being able to set "--files" is necessary, because it tells spark-submit which files need to be deployed to the worker nodes. In this case it is used to distribute the DLLs containing the UDF definitions, and Spark cannot run the job without them. Since UDFs are a key component of Spark applications, I would expect this to be possible.

Spark activity settings

If I SSH into the cluster and run the spark-submit command directly with the --files parameter specified, the Spark application works, because the files are distributed to the worker nodes.

spark-submit --deploy-mode cluster --master yarn --files wasbs://xxx@yyy.blob.core.windows.net/SparkJobs/mySparkApp.dll --class org.apache.spark.deploy.dotnet.DotnetRunner wasbs://xxx@yyy.blob.core.windows.net/SparkJobs/microsoft-spark-2.4.x-0.12.1.jar wasbs://xxx@yyy.blob.core.windows.net/SparkJobs/publish.zip mySparkApp

These are the guides I followed:

  1. https://docs.microsoft.com/en-us/dotnet/spark/how-to-guides/deploy-worker-udf-binaries
  2. https://docs.microsoft.com/en-us/dotnet/spark/how-to-guides/hdinsight-deploy-methods
  3. https://docs.microsoft.com/en-us/dotnet/spark/tutorials/hdinsight-deployment

Tags: apache-spark, hadoop-yarn, azure-data-factory-2, azure-hdinsight, .net-spark

Solution
You can pass arguments/parameters to a Pyspark script in Azure Data Factory as follows:


Code:

{
    "name": "SparkActivity",
    "properties": {
        "activities": [
            {
                "name": "Spark1",
                "type": "HDInsightSpark",
                "dependsOn": [],
                "policy": {
                    "timeout": "7.00:00:00",
                    "retry": 0,
                    "retryIntervalInSeconds": 30,
                    "secureOutput": false,
                    "secureInput": false
                },
                "userProperties": [],
                "typeProperties": {
                    "rootPath": "adftutorial/spark/script",
                    "entryFilePath": "WordCount_Spark.py",
                    "arguments": [
                        "--input-file",
                        "wasb://sampledata@chepra.blob.core.windows.net/data",
                        "--output-file",
                        "wasb://sampledata@chepra.blob.core.windows.net/results"
                    ],
                    "sparkJobLinkedService": {
                        "referenceName": "AzureBlobStorage1",
                        "type": "LinkedServiceReference"
                    }
                },
                "linkedServiceName": {
                    "referenceName": "HDInsight",
                    "type": "LinkedServiceReference"
                }
            }
        ],
        "annotations": []
    },
    "type": "Microsoft.DataFactory/factories/pipelines"
}
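For the --files requirement specifically, one possible workaround (an assumption to verify, not something the answer above confirms) relies on two pieces: the HDInsightSpark activity's typeProperties accept a sparkConfig map of Spark configuration properties, and on YARN the spark.yarn.dist.files property is the configuration-level equivalent of the --files flag. A sketch of the typeProperties with such a setting, reusing the storage path from the question, might look like:

```json
{
    "typeProperties": {
        "rootPath": "adftutorial/spark/script",
        "entryFilePath": "WordCount_Spark.py",
        "sparkConfig": {
            "spark.yarn.dist.files": "wasbs://xxx@yyy.blob.core.windows.net/SparkJobs/mySparkApp.dll"
        },
        "sparkJobLinkedService": {
            "referenceName": "AzureBlobStorage1",
            "type": "LinkedServiceReference"
        }
    }
}
```

Whether the files distributed this way land in the executors' working directories exactly as --files would for the .NET worker is something to test on the cluster.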

How to pass parameters in ADF:


Some examples of passing parameters in Azure Data Factory:

{
    "name": "SparkSubmit",
    "properties": {
        "description": "Submit a spark job",
        "activities": [
            {
                "type": "HDInsightMapReduce",
                "typeProperties": {
                    "className": "com.adf.spark.SparkJob",
                    "jarFilePath": "libs/spark-adf-job-bin.jar",
                    "jarLinkedService": "StorageLinkedService",
                    "arguments": [
                        "--jarFile",
                        "libs/sparkdemoapp_2.10-1.0.jar",
                        "--jars",
                        "/usr/hdp/current/hadoop-client/hadoop-azure-2.7.1.2.3.3.0-3039.jar,/usr/hdp/current/hadoop-client/lib/azure-storage-2.2.0.jar",
                        "--mainClass",
                        "com.adf.spark.demo.Demo",
                        "--master",
                        "yarn-cluster",
                        "--driverMemory",
                        "2g",
                        "--driverExtraClasspath",
                        "/usr/lib/hdinsight-logging/*",
                        "--executorCores",
                        "1",
                        "--executorMemory",
                        "4g",
                        "--sparkHome",
                        "/usr/hdp/current/spark-client",
                        "--connectionString",
                        "DefaultEndpointsProtocol=https;AccountName=<YOUR_ACCOUNT>;AccountKey=<YOUR_KEY>",
                        "input=wasb://input@<YOUR_ACCOUNT>.blob.core.windows.net/data",
                        "output=wasb://output@<YOUR_ACCOUNT>.blob.core.windows.net/results"
                    ]
                },
                "inputs": [
                    {
                        "name": "input"
                    }
                ],
                "outputs": [
                    {
                        "name": "output"
                    }
                ],
                "policy": {
                    "executionPriorityOrder": "OldestFirst",
                    "timeout": "01:00:00",
                    "concurrency": 1,
                    "retry": 1
                },
                "scheduler": {
                    "frequency": "Day",
                    "interval": 1
                },
                "name": "Spark Launcher",
                "description": "Submits a Spark Job",
                "linkedServiceName": "HDInsightLinkedService"
            }
        ],
        "start": "2015-11-16T00:00:01Z",
        "end": "2015-11-16T23:59:00Z",
        "isPaused": false,
        "pipelineMode": "Scheduled"
    }
}
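The second example appears to be the older ADF v1 pattern: an HDInsightMapReduce activity launches a custom launcher jar (spark-adf-job-bin.jar), which reads the arguments list and builds a spark-submit command itself. Because the launcher constructs the command, it can include any flag the Spark activity does not expose, including --files. Purely as an illustration (the exact command depends on how the launcher jar is implemented), the arguments above would translate to roughly:

```shell
spark-submit \
  --master yarn-cluster \
  --class com.adf.spark.demo.Demo \
  --driver-memory 2g \
  --driver-class-path '/usr/lib/hdinsight-logging/*' \
  --executor-cores 1 \
  --executor-memory 4g \
  --jars /usr/hdp/current/hadoop-client/hadoop-azure-2.7.1.2.3.3.0-3039.jar,/usr/hdp/current/hadoop-client/lib/azure-storage-2.2.0.jar \
  libs/sparkdemoapp_2.10-1.0.jar \
  input=wasb://input@<YOUR_ACCOUNT>.blob.core.windows.net/data \
  output=wasb://output@<YOUR_ACCOUNT>.blob.core.windows.net/results
```

A --files flag could be appended here, which is what makes the launcher-jar approach a workaround for the missing option in the Spark activity.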
