How can I better optimize this Hive query for speed?

Problem Description

What I'm trying to do is better optimize these Hive queries in PySpark, shown below, to improve speed. Right now, a Hive table comparison against large tables ends up taking an enormous amount of time (221 minutes of wall-clock time in one case). My question is: looking at the reconciliation script below, are there any suggestions on how to improve query performance and speed? Thanks.

Update:

The part worth particular attention is:

    # Imports needed by this snippet
    import sys
    from pyspark import SparkConf, SparkContext
    from pyspark.sql import HiveContext

    # EXCLUDED_CLMS_LIST, LOGGER, WHERE_CLAUSE, LOG_PATH, DATE and TIME
    # are defined elsewhere in the full script.

    # Initializing passed parameters
    FIRST_TBL = sys.argv[1]
    SECOND_TBL = sys.argv[2]

    # Defining Spark context and Hive context
    GLBL_CONF = SparkConf().setAppName("No TouchPoint Refine Load Wrapper")
    SC = SparkContext(conf=GLBL_CONF)
    HC = HiveContext(SC)

    # Column names and types from the table metadata
    CLMS_DF = HC.sql("describe {0}".format(FIRST_TBL))
    CLMS_LIST = [c.col_name for c in CLMS_DF.select("col_name").collect()]
    TYPES_LIST = [t.data_type for t in CLMS_DF.select("data_type").collect()]
    CLMS_STR = ""

    # Audit columns to skip; the *_nm variants are covered because they
    # contain these base patterns as substrings.
    AUDIT_PATTERNS = ("add_user", "add_usr", "add_tms",
                      "updt_user", "updt_usr", "updt_tms")

    for i, clms in enumerate(CLMS_LIST):
        clms_lower = clms.lower()
        if not (any(p in clms_lower for p in AUDIT_PATTERNS)
                or clms_lower in EXCLUDED_CLMS_LIST):
            if "partition information" in clms_lower:
                # "describe" lists partition columns after this marker row
                break
            if TYPES_LIST[i].lower() in ('date', 'timestamp'):
                # Two full-table scans per date/timestamp column, one per table
                max1 = HC.sql("select to_date(max({0})) as max_dt from {1} where to_date({0})<>'2099-12-31'".format(clms_lower, FIRST_TBL)).first()
                max2 = HC.sql("select to_date(max({0})) as max_dt from {1} where to_date({0})<>'2099-12-31'".format(clms_lower, SECOND_TBL)).first()
                # Normalize each table's max date (and a few known load dates)
                # to a common sentinel so they don't show up as differences
                CLMS_STR += ("case when to_date({0}) = '{1}' or to_date({0}) = '{2}' "
                             "or to_date({0}) = '2019-05-20' or to_date({0}) = '2019-05-21' "
                             "or to_date({0}) = '2019-05-22' then cast('2020-06-06' as {3}) "
                             "else to_date({0}) end as {0},").format(
                                 clms_lower, max1['max_dt'], max2['max_dt'], TYPES_LIST[i])
            else:
                CLMS_STR += clms + ","
        else:
            LOGGER.info("{0}".format(clms_lower))

    CLMS_STR = CLMS_STR[:-1]  # drop the trailing comma

    df1 = HC.sql("select {0} from {1} {2}".format(CLMS_STR, FIRST_TBL, WHERE_CLAUSE))
    df2 = HC.sql("select {0} from {1} {2}".format(CLMS_STR, SECOND_TBL, WHERE_CLAUSE))

    # Rows present in one table but not the other, plus total counts
    df1_minus_df2 = df1.subtract(df2)
    df1_minus_df2_count = df1_minus_df2.count()

    df2_minus_df1 = df2.subtract(df1)
    df2_minus_df1_count = df2_minus_df1.count()

    df1_count = df1.count()
    df2_count = df2.count()

    # Write a sample of the differences to CSV logs
    df1_minus_df2_log = LOG_PATH + 'First_minus_second_' + str(DATE) + '_' + str(TIME)
    df2_minus_df1_log = LOG_PATH + 'Second_minus_first_' + str(DATE) + '_' + str(TIME)
    df1_minus_df2.limit(10000).write.format("com.databricks.spark.csv").save(df1_minus_df2_log)
    df2_minus_df1.limit(10000).write.format("com.databricks.spark.csv").save(df2_minus_df1_log)
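One structural note on the block above: the two subtract() calls, the four count() calls, and the two CSV writes are separate Spark actions, and without caching, each action re-reads both Hive tables and rebuilds the projections from scratch. Below is a minimal sketch, assuming the same df1/df2 as above and that MEMORY_AND_DISK persistence fits the cluster, of caching the intermediate results so each is computed only once:

    from pyspark import StorageLevel

    # Materialize each projection once; every later action reuses the
    # cached data instead of rescanning the Hive table.
    df1.persist(StorageLevel.MEMORY_AND_DISK)
    df2.persist(StorageLevel.MEMORY_AND_DISK)

    # Each diff DataFrame is also used twice (count + CSV write),
    # so it is worth caching as well.
    df1_minus_df2 = df1.subtract(df2).persist(StorageLevel.MEMORY_AND_DISK)
    df2_minus_df1 = df2.subtract(df1).persist(StorageLevel.MEMORY_AND_DISK)

Also note that subtract() behaves like SQL's EXCEPT DISTINCT, so duplicate rows collapse before the comparison; on Spark 2.4+ there is exceptAll() if duplicates matter for the reconciliation.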

My goal is to take two large tables, compute their row counts, and find the row-level differences between them.
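The other big wall-clock cost is the per-column max() lookups in the loop above: every date/timestamp column triggers two full-table queries, so a wide table is scanned many times before the comparison even starts. Here is a hedged sketch of collapsing those lookups into a single aggregation per table (date_cols, agg_exprs, max1_by_col and max2_by_col are illustrative names, not part of the original script):

    # Date/timestamp columns, as gathered by the metadata loop above
    date_cols = [c.lower() for c, t in zip(CLMS_LIST, TYPES_LIST)
                 if t.lower() in ('date', 'timestamp')]

    # Conditional aggregation skips the sentinel date per column, so one
    # scan per table replaces two scans per column per table.
    agg_exprs = ",".join(
        "to_date(max(case when to_date({0})<>'2099-12-31' then {0} end)) as {0}".format(c)
        for c in date_cols)

    max1_row = HC.sql("select {0} from {1}".format(agg_exprs, FIRST_TBL)).first()
    max2_row = HC.sql("select {0} from {1}".format(agg_exprs, SECOND_TBL)).first()

    # Per-column maxima to substitute for the max1/max2 lookups in the loop
    max1_by_col = {c: max1_row[c] for c in date_cols}
    max2_by_col = {c: max2_row[c] for c in date_cols}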

Tags: apache-spark, pyspark, hive, apache-spark-sql, hiveql
