hadoop - Hadoop HDFS: Read/Write parallelism?
Question
Couldn't find enough information on the internet, so I'm asking here:
Suppose I'm writing a huge file to disk, hundreds of terabytes, as the output of MapReduce (or Spark, or whatever). How would MapReduce write such a file to HDFS efficiently (potentially in parallel?) so that it could later be read in parallel as well?
My understanding is that HDFS is simply block-based (128 MB, for example), so in order to write the second block, you must have written the first block (or at least determined what content goes into block 1). Say it's a CSV file: it's quite possible that a line will span two blocks. How can such a CSV be fed to different mappers in MapReduce? Does the framework need some smart logic to read both blocks, concatenate them, and reassemble the proper line?
Answer
Hadoop uses two interfaces, InputFormat and RecordReader, to read and interpret the bytes within blocks.
By default, Hadoop MapReduce's TextInputFormat treats each newline-terminated line as one record. When a record crosses the end of a block, the reader for that block continues into the next block to finish the line, even if all that remains there is literally the \r\n characters.
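The boundary rule Hadoop's LineRecordReader applies can be sketched in plain Python (a simulation of the logic, not the actual Hadoop API): every reader except the one at offset 0 skips its first partial line, because the previous split's reader owns it, and every reader finishes the last line it started even if that line ends past its split boundary.

```python
def read_split(data: bytes, start: int, end: int) -> list[bytes]:
    """Simulate how a line-oriented record reader handles one split."""
    records = []
    pos = start
    # Unless we are at the very start of the file, skip the (possibly
    # partial) first line: the previous split's reader owns it.
    if start != 0:
        nl = data.find(b"\n", start)
        if nl == -1:
            return records
        pos = nl + 1
    # Any line that *starts* before `end` belongs to this split,
    # even if it finishes inside the next block.
    while pos < end:
        nl = data.find(b"\n", pos)
        if nl == -1:
            records.append(data[pos:])
            break
        records.append(data[pos:nl])
        pos = nl + 1
    return records

data = b"alpha,1\nbravo,2\ncharlie,3\n"
# Split the file at byte 10, which lands mid-way through "bravo,2".
left = read_split(data, 0, 10)              # [b"alpha,1", b"bravo,2"]
right = read_split(data, 10, len(data))     # [b"charlie,3"]
```

Note that the two readers together produce every record exactly once, which is why a split boundary falling inside a CSV line is harmless.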
Writing is done by the reduce tasks (or Spark executors, etc.): each task is responsible for writing only its own subset of the overall output. For any non-trivial job you'll generally never get a single file, and that isn't a problem, because the input argument to most Hadoop processing engines is meant to be a directory to scan, not a pointer to a single file.
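A minimal sketch of that output convention (assuming the standard `part-r-NNNNN` naming that Hadoop's FileOutputFormat uses; the writer function here is illustrative, not a Hadoop API):

```python
import os
import tempfile

def write_part(out_dir: str, task_id: int, rows: list[str]) -> None:
    # Each reduce task writes its own part file; tasks never append
    # to one another's output, so the writes are fully parallel.
    path = os.path.join(out_dir, f"part-r-{task_id:05d}")
    with open(path, "w") as f:
        f.writelines(row + "\n" for row in rows)

out_dir = tempfile.mkdtemp()
write_part(out_dir, 0, ["a,1", "b,2"])
write_part(out_dir, 1, ["c,3"])

# A downstream job points at the directory, not at a single file:
files = sorted(os.listdir(out_dir))         # part-r-00000, part-r-00001
```

Since each part file is an independent HDFS file with its own blocks, a later job gets its read parallelism for free: one or more splits per part file.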