Performance issue when reading data from Hive with Python

Problem description

I have a table in Hive with 351,837 records (about 110 MB), and I am reading this table with Python and writing it to SQL Server.

In this process, reading the data from Hive into a pandas DataFrame takes a very long time: loading all 351k records takes 90 minutes.

To improve this, I switched to reading 10k rows at a time from Hive and writing them to SQL Server. But even reading a single batch of 10k rows from Hive and assigning it to a DataFrame takes 4-5 minutes.

import datetime
import urllib

import pandas
import pyodbc
import sqlalchemy


def execute_hadoop_export():
    """
    This will run the steps required for a Hadoop export.
    Return value is a boolean for success/fail.
    """
    try:
        hql = 'select * from db.table'
        # Open the Hive ODBC connection (source).
        src_conn = pyodbc.connect("DSN=****", autocommit=True)
        cursor = src_conn.cursor()
        #tgt_conn = pyodbc.connect(target_connection)

        # Use SQLAlchemy so DataFrame.to_sql can write to SQL Server (target).
        sql_conn_url = urllib.quote_plus('DRIVER={ODBC Driver 13 for SQL Server};SERVER=Xyz;DATABASE=Db2;UID=ee;PWD=*****')
        sql_conn_str = "mssql+pyodbc:///?odbc_connect={0}".format(sql_conn_url)
        engine = sqlalchemy.create_engine(sql_conn_str)

        # Read the source table in 10k-row chunks and append each chunk to SQL Server.
        vstart = datetime.datetime.now()
        for df in pandas.read_sql(hql, src_conn, chunksize=10000):
            vfinish = datetime.datetime.now()
            print 'Finished 10k rows reading from hive and it took', (vfinish - vstart).seconds / 60.0, ' minutes'
            df.to_sql(name='table', schema='dbo', con=engine, chunksize=10000, if_exists="append", index=False)
            print 'Finished 10k rows writing into sql server and it took', (datetime.datetime.now() - vfinish).seconds / 60.0, ' minutes'
            vstart = datetime.datetime.now()
        cursor.close()
    except Exception as e:
        print str(e)

Output:

Results

What is the fastest way to read Hive table data into Python?

Update: Hive table structure

CREATE TABLE `table1`(
  `policynumber` varchar(15), 
  `unitidentifier` int, 
  `unitvin` varchar(150), 
  `unitdescription` varchar(100), 
  `unitmodelyear` varchar(4), 
  `unitpremium` decimal(18,2), 
  `garagelocation` varchar(150), 
  `garagestate` varchar(50), 
  `bodilyinjuryoccurrence` decimal(18,2), 
  `bodilyinjuryaggregate` decimal(18,2), 
  `bodilyinjurypremium` decimal(18,2), 
  `propertydamagelimits` decimal(18,2), 
  `propertydamagepremium` decimal(18,2), 
  `medicallimits` decimal(18,2), 
  `medicalpremium` decimal(18,2), 
  `uninsuredmotoristoccurrence` decimal(18,2), 
  `uninsuredmotoristaggregate` decimal(18,2), 
  `uninsuredmotoristpremium` decimal(18,2), 
  `underinsuredmotoristoccurrence` decimal(18,2), 
  `underinsuredmotoristaggregate` decimal(18,2), 
  `underinsuredmotoristpremium` decimal(18,2), 
  `umpdoccurrence` decimal(18,2), 
  `umpddeductible` decimal(18,2), 
  `umpdpremium` decimal(18,2), 
  `comprehensivedeductible` decimal(18,2), 
  `comprehensivepremium` decimal(18,2), 
  `collisiondeductible` decimal(18,2), 
  `collisionpremium` decimal(18,2), 
  `emergencyroadservicepremium` decimal(18,2), 
  `autohomecredit` tinyint, 
  `lossfreecredit` tinyint, 
  `multipleautopoliciescredit` tinyint, 
  `hybridcredit` tinyint, 
  `goodstudentcredit` tinyint, 
  `multipleautocredit` tinyint, 
  `fortyfivepluscredit` tinyint, 
  `passiverestraintcredit` tinyint, 
  `defensivedrivercredit` tinyint, 
  `antitheftcredit` tinyint, 
  `antilockbrakescredit` tinyint, 
  `perkcredit` tinyint, 
  `plantype` varchar(100), 
  `costnew` decimal(18,2), 
  `isnocontinuousinsurancesurcharge` tinyint)
CLUSTERED BY ( 
  policynumber, 
  unitidentifier) 
INTO 50 BUCKETS

Note: I have also tried the sqoop export option, but my Hive table is already bucketed.

Tags: python, sql-server, pandas, python-2.7, hive

Solution


I tried multiprocessing, and I was able to bring the total time down from 2 hours to 8-10 minutes. Please find the scripts below.

from multiprocessing import Pool
import datetime

import pandas as pd

from query import hivetable
from write_tosql import write_to_sql

# We have 351k rows, so generate the starting row numbers (1, 10001, ...)
# passed to the hivetable method, one 10k-row window per task.
lst = range(1, 360000, 10000)

print 'started reading ', datetime.datetime.now()
# The cluster has 40 cores, so use a pool of 37 worker processes.
p = Pool(37)
s = p.map(hivetable, lst)
s_df = pd.concat(s)
print 'finished reading ', datetime.datetime.now()
print 'Started writing to sql server ', datetime.datetime.now()
write_to_sql(s_df)
print 'Finished writing to sql server ', datetime.datetime.now()
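
In this script each worker fetches one 10,000-row window by row number, but the chunks are only written after everything has been concatenated, so the SQL Server load is still a single serial pass at the end. A hedged variant (just a sketch, not part of the original script) streams each chunk to SQL Server as soon as a worker returns it, reusing the same hivetable and write_to_sql helpers; ordering does not matter because write_to_sql appends to the target table:

from multiprocessing import Pool

from query import hivetable
from write_tosql import write_to_sql

# Sketch: write each 10k-row chunk as soon as it arrives instead of
# concatenating all chunks first, so the write overlaps with the reads.
p = Pool(37)
for chunk_df in p.imap_unordered(hivetable, range(1, 360000, 10000)):
    write_to_sql(chunk_df)
p.close()
p.join()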

--------- query.py file ---------

import pyodbc
import pandas as pd

# Module-level Hive ODBC connection shared by the hivetable calls.
conn = pyodbc.connect("DSN=******", autocommit=True)

def hivetable(row):
    # Fetch one 10,000-row window, paginated with row_number() over policynumber.
    query = 'select * from (select row_number() OVER (order by policynumber) as rownum, * from dbg.tble ) tbl1 where rownum between ' + str(row) + ' and ' + str(row + 9999) + ';'
    result = pd.read_sql(query, conn)
    return result
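
One caveat with the module-level connection above: it is created when query.py is first imported in the parent process and then inherited by every forked worker, and sharing a single ODBC connection across processes can serialize the reads or fail outright with some drivers. A minimal per-worker sketch (same placeholder DSN and table names as above) opens the connection inside the function instead:

import pyodbc
import pandas as pd

def hivetable_per_worker(row):
    # Open a fresh Hive ODBC connection in the worker instead of sharing
    # the one created before the pool processes were forked.
    conn = pyodbc.connect("DSN=******", autocommit=True)
    try:
        query = 'select * from (select row_number() OVER (order by policynumber) as rownum, * from dbg.tble ) tbl1 where rownum between ' + str(row) + ' and ' + str(row + 9999)
        return pd.read_sql(query, conn)
    finally:
        conn.close()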

--------- write_tosql.py file ---------

import sqlalchemy
import urllib

def write_to_sql(s_df):
    sql_conn_url = urllib.quote_plus('DRIVER={ODBC Driver 13 for SQL Server};SERVER=ser;DATABASE=db;UID=sqoop;PWD=#####;')
    sql_conn_str = "mssql+pyodbc:///?odbc_connect={0}".format(sql_conn_url)
    engine = sqlalchemy.create_engine(sql_conn_str)
    # Hive returns column names as "table.column"; strip the table alias before writing.
    s_df.rename(columns=remove_table_alias, inplace=True)
    s_df.to_sql(name='tbl2', schema='dbo', con=engine, chunksize=10000, if_exists="append", index=False)

def remove_table_alias(columnName):
    try:
        if columnName.find(".") != -1:
            return columnName.split(".")[1]
        return columnName
    except Exception as e:
        print "ERROR in remove_table_alias ", str(e)

Any other solution that reduces the time further would also help me.
