Beeline query output coming in JSON format instead of CSV table

Problem Description

I am using a Beeline query like the one below; the underlying data, sitting in HDFS, comes from a mainframe server. All I want is to execute a query and dump the result to a CSV (or any tabular format):

beeline -u 'jdbc:hive2://server.com:port/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;transportMode=binary' --showHeader=false --outputformat=csv2 -e "SELECT * FROM tbl LIMIT 2;" > tables1.csv

My issues are:

The format is not clean; there are extra rows at the top and bottom.
It appears as JSON, not a table.
Some numbers appear to be in hexadecimal format.

+-----------------------------------------------------------------------------------------------------------------------------+
|  col1:{"col1_a":"00000"   col1_b:"0"  col1_c:{"col11_a":"00000"   col11_tb:{"mo_acct_tp":"0"  col11_c:"0"}}   col1_d:"0"}|  
+-----------------------------------------------------------------------------------------------------------------------------+

I want a regular CSV with column names on top and no nesting.

Tags: hive, hortonworks-data-platform, beeline

Solution


Please help us understand your data better.

Does your table contain data like the following when you run the select query in either Beeline or Hive?

> select * from test;
+------------------------------------------------------------------------------------------------------------------------+--+
|                                                        test.col                                                        |
+------------------------------------------------------------------------------------------------------------------------+--+
| {"col1_a":"00000","col1_b":"0","col1_c":{"col11_a":"00000","col11_tb":{"mo_acct_tp":"0","col11_c":"0"}},"col1_d":"0"}  |
+------------------------------------------------------------------------------------------------------------------------+--+

If so, you might have to parse the data out of the JSON objects, which would look like this:

select
get_json_object(tbl.col, '$.col1_a') col1_a
, get_json_object(tbl.col, '$.col1_b') col1_b
, get_json_object(tbl.col, '$.col1_c.col11_a') col1_c_col11_a 
, get_json_object(tbl.col, '$.col1_c.col11_tb.col11_c') col1_c_col11_tb_col11_c
, get_json_object(tbl.col, '$.col1_c.col11_tb.mo_acct_tp') col1_c_col11_tb_mo_acct_tp
, get_json_object(tbl.col, '$.col1_d') col1_d
from test tbl;
INFO  : Completed executing command(queryId=hive_20180918182457_a2d6230d-28bc-4839-a1b5-0ac63c7779a5); Time taken: 1.007 seconds
INFO  : OK
+---------+---------+-----------------+--------------------------+-----------------------------+---------+--+
| col1_a  | col1_b  | col1_c_col11_a  | col1_c_col11_tb_col11_c  | col1_c_col11_tb_mo_acct_tp  | col1_d  |
+---------+---------+-----------------+--------------------------+-----------------------------+---------+--+
| 00000   | 0       | 00000           | 0                        | 0                           | 0       |
+---------+---------+-----------------+--------------------------+-----------------------------+---------+--+
1 row selected (2.058 seconds)
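
As a side note, if you would rather not re-parse the JSON string once per column, Hive's json_tuple UDTF extracts several keys in a single pass; a nested object comes back as a JSON string, so a second LATERAL VIEW can drill into it. A minimal sketch against the same test table (key names are taken from your sample; the view aliases are my own):

-- First json_tuple pulls the top-level keys in one parse of the string;
-- col1_c comes back as a JSON string, so a second json_tuple unpacks it.
select a.col1_a, a.col1_b, b.col11_a, a.col1_d
from test tbl
lateral view json_tuple(tbl.col, 'col1_a', 'col1_b', 'col1_c', 'col1_d') a
    as col1_a, col1_b, col1_c, col1_d
lateral view json_tuple(a.col1_c, 'col11_a', 'col11_tb') b
    as col11_a, col11_tb;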

Then you can use the get_json_object query above on the command line to export the results into a file:

>beeline -u 'jdbc:hive2://server.com:port/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;transportMode=binary' --showHeader=false --outputformat=csv2 -e "select
get_json_object(tbl.col, '$.col1_a') col1_a
, get_json_object(tbl.col, '$.col1_b') col1_b
, get_json_object(tbl.col, '$.col1_c.col11_a') col1_c_col11_a 
, get_json_object(tbl.col, '$.col1_c.col11_tb.col11_c') col1_c_col11_tb_col11_c
, get_json_object(tbl.col, '$.col1_c.col11_tb.mo_acct_tp') col1_c_col11_tb_mo_acct_tp
, get_json_object(tbl.col, '$.col1_d') col1_d
from corpde_commops.test tbl;" > test.csv

If you need the column names in the file, set --showHeader=true.

The final output would then be:

>cat test.csv 
col1_a,col1_b,col1_c_col11_a,col1_c_col11_tb_col11_c,col1_c_col11_tb_mo_acct_tp,col1_d
00000,0,00000,0,0,0
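
If you will run this export regularly, one option (just a suggestion; test_flat is a hypothetical name, not something from your setup) is to wrap the parsing query in a view so the beeline invocation stays short:

-- Hypothetical view wrapping the same get_json_object projection as above.
create view test_flat as
select
    get_json_object(tbl.col, '$.col1_a') col1_a,
    get_json_object(tbl.col, '$.col1_b') col1_b,
    get_json_object(tbl.col, '$.col1_c.col11_a') col1_c_col11_a,
    get_json_object(tbl.col, '$.col1_c.col11_tb.col11_c') col1_c_col11_tb_col11_c,
    get_json_object(tbl.col, '$.col1_c.col11_tb.mo_acct_tp') col1_c_col11_tb_mo_acct_tp,
    get_json_object(tbl.col, '$.col1_d') col1_d
from test tbl;

The -e clause then reduces to "select * from test_flat;" and produces the same CSV.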

Other than that, I don't see anything wrong with your beeline statement.

If your data does not look like the example above, the solution may take a different form.

All the best.

