Unable to submit concurrent Hadoop jobs

Problem Description

I am running Hadoop 2.7, HBase 1.4, and Phoenix 4.15 on my local machine. I wrote an application that submits MapReduce jobs which delete data in HBase through Phoenix. Each job is run by a single thread of a ThreadPoolExecutor and looks like this:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MRDeleteTask extends Task {

    private static final Logger LOGGER = LoggerFactory.getLogger(MRDeleteTask.class);

    private final String query;

    public MRDeleteTask(int id, String q) {
        this.setId(id);
        this.query = q;
    }

    @Override
    public void run() {
        LOGGER.info("Running Task: " + getId());
        try {
            Configuration configuration = HBaseConfiguration.create();
            Job job = Job.getInstance(configuration, "phoenix-mr-job-" + getId());
            LOGGER.info("mapper input: " + this.query);
            // Feed the Phoenix query to the mapper (was the undefined constant QUERY).
            PhoenixMapReduceUtil.setInput(job, DeleteMR.PhoenixDBWritable.class, "Table", this.query);
            job.setMapperClass(DeleteMR.DeleteMapper.class);
            job.setJarByClass(DeleteMR.class);
            // Map-only job: the mapper issues Phoenix deletes, nothing is written out.
            job.setNumReduceTasks(0);
            job.setOutputFormatClass(NullOutputFormat.class);
            job.setOutputKeyClass(ImmutableBytesWritable.class);
            job.setOutputValueClass(Writable.class);
            // Ship the HBase/Phoenix dependency jars with the job.
            TableMapReduceUtil.addDependencyJars(job);
            boolean result = job.waitForCompletion(true);
            LOGGER.info("Task " + getId() + " completed, success=" + result);
        }
        catch (Exception e) {
            LOGGER.info(e.getMessage());
        }
    }
}

Everything works fine when the ThreadPoolExecutor has only one thread. When several of these Hadoop jobs are submitted concurrently, nothing happens. According to the logs, the errors look like this:

4439 [pool-1-thread-2] INFO  MRDeleteTask  - java.util.concurrent.ExecutionException: java.io.IOException: Unable to rename file: [/tmp/hadoop-user/mapred/local/1595274269610_tmp/tmp_phoenix-4.15.0-HBase-1.4-client.jar] to [/tmp/hadoop-user/mapred/local/1595274269610_tmp/phoenix-4.15.0-HBase-1.4-client.jar]

4439 [pool-1-thread-1] INFO  MRDeleteTask  - java.util.concurrent.ExecutionException: ExitCodeException exitCode=1: chmod: /private/tmp/hadoop-user/mapred/local/1595274269610_tmp/phoenix-4.15.0-HBase-1.4-client.jar: No such file or directory

The tasks are submitted with ThreadPoolExecutor.submit(), and their status is checked through the returned Future with future.isDone(); a sketch of the submission side follows.
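For reference, a minimal sketch of that submission loop. The pool size, queries, and polling interval are illustrative assumptions (the question only states that submit() and isDone() are used), and Task is assumed to implement Runnable, as the @Override on run() suggests:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MRDeleteDriver {
    public static void main(String[] args) throws InterruptedException {
        // Two threads are enough to trigger the failure; one thread works fine.
        ExecutorService executor = Executors.newFixedThreadPool(2);
        List<Future<?>> futures = new ArrayList<>();

        // The input queries here are placeholders.
        futures.add(executor.submit(new MRDeleteTask(1, "SELECT ID FROM \"Table\" WHERE ID < 100")));
        futures.add(executor.submit(new MRDeleteTask(2, "SELECT ID FROM \"Table\" WHERE ID >= 100")));

        // Poll the returned futures with isDone(), as described above.
        for (Future<?> f : futures) {
            while (!f.isDone()) {
                Thread.sleep(1000L);
            }
        }
        executor.shutdown();
    }
}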

Tags: java, hadoop, mapreduce, hbase, phoenix

Solution

The jobs were never being submitted to YARN; they were running locally from IntelliJ via the LocalJobRunner. In local mode, concurrent jobs stage their dependency jars into the same directory under /tmp/hadoop-user/mapred/local (note the identical 1595274269610_tmp directory in both log lines), so the two threads race on the Phoenix client jar, producing the rename and chmod failures above. Adding the following to the job configuration resolved the issue:

conf.set("mapreduce.framework.name", "yarn");
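For context, a minimal sketch of where the setting fits in the run() method above. This assumes a YARN ResourceManager is actually running and reachable through the Hadoop configuration on the classpath:

Configuration configuration = HBaseConfiguration.create();
// Force submission to YARN instead of the default LocalJobRunner, so
// concurrent jobs no longer race on a shared local staging directory.
configuration.set("mapreduce.framework.name", "yarn");
Job job = Job.getInstance(configuration, "phoenix-mr-job-" + getId());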
