Is it possible to execute Step Concurrency for AWS EMR through AWS Step Functions without Lambda?

Problem Description

Here is my scenario: I am trying to create 4 AWS EMR clusters, with 2 jobs assigned to each cluster, so it is 4 clusters running 8 jobs, all orchestrated with Step Functions.

My flow should look like this:

All 4 clusters start at the same time and run the 8 jobs in parallel, with each cluster running its 2 jobs concurrently.

Now, AWS recently introduced a feature for running 2 or more jobs concurrently on a single cluster, StepConcurrencyLevel in EMR, which reduces the cluster's runtime. It can be set through the EMR console, through the AWS CLI, or even through AWS Lambda.
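For context, all three of those routes ultimately call the same EMR API. On a cluster that is already running, that is the ModifyCluster API, whose request payload looks roughly like this (the cluster id below is a placeholder):

{
  "ClusterId": "j-2AXXXXXXGAPLF",
  "StepConcurrencyLevel": 2
}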

However, I want to do this, launching 2 or more jobs in parallel on a single cluster, through AWS Step Functions, using state machine language in a format similar to what is referenced here: https://docs.aws.amazon.com/step-functions/latest/dg/connect-emr.html

I went through many sites looking for a way to do this, and I found solutions for the console or for boto3 inside AWS Lambda, but I could not find one that works through Step Functions itself...

Is there any solution for this!?

Thanks in advance.

Tags: amazon-web-services, amazon-emr, aws-step-functions

Solution


So, I went through a few more sites and found a solution to my problem...

The problem I faced was with StepConcurrencyLevel: I could set it through the AWS console, through the AWS CLI, or even through Python with boto3, but I was looking for a solution in state machine language, and I found one...

All we have to do is specify StepConcurrencyLevel while creating the cluster in the state machine language, for example 2 or 3, where the default is 1. Once that is set, create the steps under that cluster (my sample below adds 2) and run the state machine.

The cluster recognizes the concurrency level that was set and runs the steps accordingly.

My sample workflow:

-> The JSON script of my orchestration:

 {
  "StartAt": "Create_A_Cluster",
  "States": {
    "Create_A_Cluster": {
      "Type": "Task",
      "Resource": "arn:aws:states:::elasticmapreduce:createCluster.sync",
      "Parameters": {
        "Name": "WorkflowCluster",
        "StepConcurrencyLevel": 2,
        "Tags": [
          {
            "Key": "Description",
            "Value": "process"
          },
          {
            "Key": "Name",
            "Value": "filename"
          },
          {
            "Key": "Owner",
            "Value": "owner"
          },
          {
            "Key": "Project",
            "Value": "roject"
          },
          {
            "Key": "User",
            "Value": "user"
          }
        ],
        "VisibleToAllUsers": true,
        "ReleaseLabel": "emr-5.28.1",
        "Applications": [
          {
            "Name": "Spark"
          }
        ],
        "ServiceRole": "EMR_DefaultRole",
        "JobFlowRole": "EMR_EC2_DefaultRole",
        "LogUri": "s3://prefix/prefix/log.txt/",
        "Instances": {
          "KeepJobFlowAliveWhenNoSteps": true,
          "InstanceFleets": [
            {
              "InstanceFleetType": "MASTER",
              "TargetSpotCapacity": 1,
              "InstanceTypeConfigs": [
                {
                  "InstanceType": "m4.xlarge",
                  "BidPriceAsPercentageOfOnDemandPrice": 90
                }
              ]
            },
            {
              "InstanceFleetType": "CORE",
              "TargetSpotCapacity": 1,
              "InstanceTypeConfigs": [
                {
                  "InstanceType": "m4.xlarge",
                  "BidPriceAsPercentageOfOnDemandPrice": 90
                }
              ]
            }
          ]
        }
      },
      "Retry": [
        {
          "ErrorEquals": [
            "States.ALL"
          ],
          "IntervalSeconds": 5,
          "MaxAttempts": 1,
          "BackoffRate": 2.5
        }
      ],
      "Catch": [
        {
          "ErrorEquals": [
            "States.ALL"
          ],
          "Next": "Fail_Cluster"
        }
      ],
      "ResultPath": "$.cluster",
      "OutputPath": "$.cluster",
      "Next": "Add_Steps_Parallel"
    },
    "Fail_Cluster": {
      "Type": "Task",
      "Resource": "arn:aws:states:::sns:publish",
      "Parameters": {
        "TopicArn": "arn:aws:sns:us-west-2:919490798061:rsac_error_notification",
        "Message.$": "$.Cause"
      },
      "Next": "Terminate_Cluster"
    },
    "Add_Steps_Parallel": {
      "Type": "Parallel",
      "Branches": [
        {
          "StartAt": "Step_One",
          "States": {
            "Step_One": {
              "Type": "Task",
              "Resource": "arn:aws:states:::elasticmapreduce:addStep.sync",
              "Parameters": {
                "ClusterId.$": "$.ClusterId",
                "Step": {
                  "Name": "The first step",
                  "ActionOnFailure": "TERMINATE_CLUSTER",
                  "HadoopJarStep": {
                    "Jar": "command-runner.jar",
                    "Args": [
                      "spark-submit",
                      "--deploy-mode",
                      "cluster",
                      "--master",
                      "yarn",
                      "--conf",
                      "spark.dynamicAllocation.enabled=true",
                      "--conf",
                      "maximizeResourceAllocation=true",
                      "--conf",
                      "spark.shuffle.service.enabled=true",
                      "--py-files",
                      "s3://prefix/prefix/pythonfile.py",
                      "s3://prefix/prefix/pythonfile.py"
                    ]
                  }
                }
              },
              "Retry": [
                {
                  "ErrorEquals": [
                    "States.ALL"
                  ],
                  "IntervalSeconds": 5,
                  "MaxAttempts": 1,
                  "BackoffRate": 2.5
                }
              ],
              "Catch": [
                {
                  "ErrorEquals": [
                    "States.ALL"
                  ],
                  "ResultPath": "$.err_mgs",
                  "Next": "Fail_SNS"
                }
              ],
              "ResultPath": "$.step1",
              "Next": "Terminate_Cluster_1"
            },
            "Fail_SNS": {
              "Type": "Task",
              "Resource": "arn:aws:states:::sns:publish",
              "Parameters": {
                "TopicArn": "arn:aws:sns:us-west-2:919490798061:rsac_error_notification",
                "Message.$": "$.err_mgs.Cause"
              },
              "ResultPath": "$.fail_cluster",
              "Next": "Terminate_Cluster_1"
            },
            "Terminate_Cluster_1": {
              "Type": "Task",
              "Resource": "arn:aws:states:::elasticmapreduce:terminateCluster.sync",
              "Parameters": {
                "ClusterId.$": "$.ClusterId"
              },
              "End": true
            }
          }
        },
        {
          "StartAt": "Step_Two",
          "States": {
            "Step_Two": {
              "Type": "Task",
              "Resource": "arn:aws:states:::elasticmapreduce:addStep",
              "Parameters": {
                "ClusterId.$": "$.ClusterId",
                "Step": {
                  "Name": "The second step",
                  "ActionOnFailure": "TERMINATE_CLUSTER",
                  "HadoopJarStep": {
                    "Jar": "command-runner.jar",
                    "Args": [
                      "spark-submit",
                      "--deploy-mode",
                      "cluster",
                      "--master",
                      "yarn",
                      "--conf",
                      "spark.dynamicAllocation.enabled=true",
                      "--conf",
                      "maximizeResourceAllocation=true",
                      "--conf",
                      "spark.shuffle.service.enabled=true",
                      "--py-files",
                      "s3://prefix/prefix/pythonfile.py",
                      "s3://prefix/prefix/pythonfile.py"
                    ]
                  }
                }
              },
              "Retry": [
                {
                  "ErrorEquals": [
                    "States.ALL"
                  ],
                  "IntervalSeconds": 5,
                  "MaxAttempts": 1,
                  "BackoffRate": 2.5
                }
              ],
              "Catch": [
                {
                  "ErrorEquals": [
                    "States.ALL"
                  ],
                  "ResultPath": "$.err_mgs_1",
                  "Next": "Fail_SNS_1"
                }
              ],
              "ResultPath": "$.step2",
              "Next": "Terminate_Cluster_2"
            },
            "Fail_SNS_1": {
              "Type": "Task",
              "Resource": "arn:aws:states:::sns:publish",
              "Parameters": {
                "TopicArn": "arn:aws:sns:us-west-2:919490798061:rsac_error_notification",
                "Message.$": "$.err_mgs_1.Cause"
              },
              "ResultPath": "$.fail_cluster_1",
              "Next": "Terminate_Cluster_2"
            },
            "Terminate_Cluster_2": {
              "Type": "Task",
              "Resource": "arn:aws:states:::elasticmapreduce:terminateCluster.sync",
              "Parameters": {
                "ClusterId.$": "$.ClusterId"
              },
              "End": true
            }
          }
        }
      ],
      "ResultPath": "$.steps",
      "Next": "Terminate_Cluster"
    },
    "Terminate_Cluster": {
      "Type": "Task",
      "Resource": "arn:aws:states:::elasticmapreduce:terminateCluster.sync",
      "Parameters": {
        "ClusterId.$": "$.ClusterId"
      },
      "End": true
    }
  }
}

In this script, i.e. the state machine language of the AWS Step Function, I specified the StepConcurrencyLevel as 2 while creating the cluster, and added 2 Spark jobs as steps under that cluster.

When I ran this script in Step Functions, I was able to orchestrate the cluster and run 2 steps concurrently on it, without configuring anything directly in the AWS EMR console, through the AWS CLI, or through boto3.

I orchestrated 2 concurrent steps on a single cluster purely with state machine language under AWS Step Functions, with no help from other services such as Lambda, the Livy API, or boto3...

This is how the flow diagram looks: [figure: AWS Step Functions workflow for concurrent step execution]

To see exactly where I inserted StepConcurrencyLevel in the above state machine language, look here:

"Create_A_Cluster": {
  "Type": "Task",
  "Resource": "arn:aws:states:::elasticmapreduce:createCluster.sync",
  "Parameters": {
    "Name": "WorkflowCluster",
    "StepConcurrencyLevel": 2,
    "Tags": [
      {
        "Key": "Description",
        "Value": "process"
      },

It sits right under Create_A_Cluster.
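As for the original goal of 4 clusters with 2 jobs each, one way to get there (a sketch of mine, not something tested in the workflow above) would be to wrap this whole single-cluster flow in one more top-level Parallel state, one branch per cluster. In the skeleton below every state name is a placeholder, and each Pass state stands in for a full copy of the Create_A_Cluster -> Add_Steps_Parallel -> Terminate_Cluster chain shown above:

{
  "Comment": "Sketch only: fan out to 4 clusters, each branch holding a copy of the single-cluster workflow above.",
  "StartAt": "Run_Four_Clusters",
  "States": {
    "Run_Four_Clusters": {
      "Type": "Parallel",
      "Branches": [
        { "StartAt": "Cluster_1_Flow", "States": { "Cluster_1_Flow": { "Type": "Pass", "End": true } } },
        { "StartAt": "Cluster_2_Flow", "States": { "Cluster_2_Flow": { "Type": "Pass", "End": true } } },
        { "StartAt": "Cluster_3_Flow", "States": { "Cluster_3_Flow": { "Type": "Pass", "End": true } } },
        { "StartAt": "Cluster_4_Flow", "States": { "Cluster_4_Flow": { "Type": "Pass", "End": true } } }
      ],
      "End": true
    }
  }
}

One thing to watch out for: state names must be unique across the whole state machine, so the states in each copied branch need distinct names (Create_A_Cluster_1, Create_A_Cluster_2, and so on; those names are just illustrative).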

Thank you.

