Beam Avro file write serialization error

Problem description

I followed the example in the Beam documentation for writing Avro files, but it gives me this error:

Caused by: java.io.NotSerializableException: org.apache.avro.Schema$RecordSchema

However, if I read from an Avro file and write it out to another Avro output, it works fine. My goal is to write Avro files from an arbitrary input source. Has anyone seen a similar problem? How did you solve it?

public class WriteAvro {

public interface CsvToAvroOptions extends PipelineOptions {

    @Description("Path of the file to read from")
    @Default.String("test.avro")
    String getInputFile();

    void setInputFile(String value);
}

static void run(CsvToAvroOptions options) throws IOException {
    final Schema schema = new Schema.Parser().parse(Resources.getResource("person.avsc").openStream());
    Pipeline p = Pipeline.create(options);
    // This works fine
    // PCollection<GenericRecord> input = p.apply(AvroIO.readGenericRecords(schema).from(options.getInputFile()));

    // This doesn't work
    PCollection<GenericRecord> input =
            p.apply("ReadLines", TextIO.read().from(options.getInputFile()))
                    .apply(ParDo.of(new DoFn<String, GenericRecord>() {
                        @ProcessElement
                        public void processElement(ProcessContext c) {
                            GenericRecord record = new GenericData.Record(schema);
                            record.put("name", "John Doe");
                            record.put("age", 42);
                            record.put("siblingnames", Lists.newArrayList("Jimmy", "Jane"));
                            c.output(record);
                        }
                    }))
                    .setCoder(AvroCoder.of(GenericRecord.class, schema));

    input.apply(AvroIO.writeGenericRecords(schema).to("prefix"));
    p.run().waitUntilFinish();
}


public static void main(String[] args) throws IOException {
    CsvToAvroOptions options =
            PipelineOptionsFactory.fromArgs(args).withValidation().as(CsvToAvroOptions.class);

    run(options);
}
}

Tags: java, google-cloud-dataflow, apache-beam, avro

Solution


This error is caused by the Avro Schema class not implementing Serializable. You can store the schema as its JSON text in the DoFn and parse it back when the DoFn is set up (in a @Setup method).

Here's how:


public class WriteAvro {

  public interface CsvToAvroOptions extends PipelineOptions {

    @Description("Path of the file to read from")
    @Default.String("test.avro")
    String getInputFile();

    void setInputFile(String value);
  }



  private static class ConstructAvroRecordsFn extends DoFn<String, GenericRecord> {

    private final String schemaJson;
    private Schema schema;

    ConstructAvroRecordsFn(Schema schema){
      schemaJson = schema.toString();
    }

    @Setup
    public void setup(){
      schema = new Schema.Parser().parse(schemaJson);
    }
    @ProcessElement
    public void processElement(ProcessContext c) {
      GenericRecord record = new GenericData.Record(schema);
      record.put("name", "John Doe");
      record.put("age", 42);
      record.put("siblingnames", Lists.newArrayList("Jimmy", "Jane"));
      c.output(record);
    }
  }

  static void run(CsvToAvroOptions options) throws IOException {
    final Schema schema = new Schema.Parser().parse(Resources.getResource("person.avsc").openStream());
    Pipeline p = Pipeline.create(options);

    // The DoFn now holds only the schema's JSON string, which is serializable,
    // and rebuilds the Schema object in its @Setup method.
    PCollection<GenericRecord> input =
            p.apply("ReadLines", TextIO.read().from(options.getInputFile()))
                    .apply(ParDo.of(new ConstructAvroRecordsFn(schema)))
                    .setCoder(AvroCoder.of(GenericRecord.class, schema));

    input.apply(AvroIO.writeGenericRecords(schema).to("prefix"));
    p.run().waitUntilFinish();
  }


public static void main(String[] args) throws IOException {
    CsvToAvroOptions options =
            PipelineOptionsFactory.fromArgs(args).withValidation().as(CsvToAvroOptions.class);

    run(options);
}
}
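To see why holding the Schema object directly breaks Java serialization while holding its JSON string does not, here is a minimal stand-alone sketch using only the JDK. FakeSchema, BadFn, and GoodFn are hypothetical stand-ins (not part of Beam or Avro): FakeSchema plays the role of the non-serializable Schema, BadFn mirrors the anonymous DoFn from the question, and GoodFn mirrors ConstructAvroRecordsFn.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationDemo {

    // Stand-in for org.apache.avro.Schema: a class that does NOT implement Serializable.
    static class FakeSchema {
        final String json;
        FakeSchema(String json) { this.json = json; }
    }

    // Mirrors the failing anonymous DoFn: holds the schema object directly.
    static class BadFn implements Serializable {
        final FakeSchema schema; // not Serializable -> serializing BadFn fails
        BadFn(FakeSchema s) { this.schema = s; }
    }

    // Mirrors ConstructAvroRecordsFn: holds only the schema's JSON string.
    static class GoodFn implements Serializable {
        final String schemaJson;      // String is Serializable
        transient FakeSchema schema;  // re-created on the worker, as in @Setup
        GoodFn(FakeSchema s) { this.schemaJson = s.json; }
        void setup() { this.schema = new FakeSchema(schemaJson); }
    }

    // Returns true if o survives Java serialization.
    static boolean serializes(Object o) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (IOException e) { // NotSerializableException is an IOException
            return false;
        }
    }

    public static void main(String[] args) {
        FakeSchema schema = new FakeSchema("{\"type\":\"record\"}");
        System.out.println("BadFn serializes:  " + serializes(new BadFn(schema)));  // false
        System.out.println("GoodFn serializes: " + serializes(new GoodFn(schema))); // true
    }
}
```

This is the same trick the answer uses: keep only serializable state (the JSON string) in the DoFn, and reconstruct the heavyweight object on the worker.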
