Unable to write data to a Hive internal table

Problem description

I am trying to write data from a Spark DataFrame into a Hive internal table, using Spark version 2.3. The table was created as follows:

CREATE TABLE `g_interimc.grpxm31`(
  `gr98p_cf` bigint,
  `gr98p_cp` decimal(11,0),
  `grp98mmmb` string,
  `grp98oob` string,
  `srccd` string,
  `gp_n` string)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
  'hdfs://gwhdnha/mnoo1/raw/cat/eilkls/g_interimc/grpxm31'
TBLPROPERTIES (
  'bucketing_version'='2',
  'transactional'='true',
  'transactional_properties'='default',



The write fails with the following exception:

  dataframe.write.mode("overwrite").insertInto("g_interimc.grpxm31")

Exception in thread "main" org.apache.spark.sql.AnalysisException:
Spark has no access to table `g_interimc`.`grpxm31`. Clients can access this table only if
they have the following capabilities: CONNECTORREAD,HIVEFULLACIDREAD,HIVEFULLACIDWRITE,HIVEMANAGESTATS,HIVECACHEINVALIDATE,CONNECTORWRITE.

This table may be a Hive-managed ACID table, or require some other capability that Spark currently does not implement;
        at org.apache.spark.sql.catalyst.catalog.CatalogUtils$.throwIfNoAccess(ExternalCatalogUtils.scala:280)
        at org.apache.spark.sql.catalyst.catalog.CatalogUtils$.throwIfRO(ExternalCatalogUtils.scala:297)
        at org.apache.spark.sql.hive.HiveTranslationLayerCheck$$anonfun$apply$1.applyOrElse(HiveTranslationLayerStrategies.scala:93)
        at org.apache.spark.sql.hive.HiveTranslationLayerCheck$$anonfun$apply$1.applyOrElse(HiveTranslationLayerStrategies.scala:85)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:288)
        at org.apache.spark.sql.hive.HiveTranslationLayerCheck.apply(HiveTranslationLayerStrategies.scala:85)
        at org.apache.spark.sql.hive.HiveTranslationLayerCheck.apply(HiveTranslationLayerStrategies.scala:83)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)
        at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)

 

Tags: apache-spark, hive, apache-spark-sql, hiveql

Solution
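The exception message itself points at the cause: the table was created with `'transactional'='true'`, i.e. it is a Hive-managed full-ACID table, and Spark 2.3's native Hive support cannot read or write such tables. On HDP/CDP 3.x the usual workaround is to write through the Hive Warehouse Connector (HWC) instead of `insertInto`. A minimal sketch, assuming the HWC jar is on the classpath and the connector's `spark.datasource.hive.warehouse.*` settings are configured for your cluster:

```scala
// Sketch only: requires the Hive Warehouse Connector distributed with
// HDP/CDP 3.x and a SparkSession named `spark` already built with the
// HWC configuration for your environment.
import com.hortonworks.hwc.HiveWarehouseSession

val hive = HiveWarehouseSession.session(spark).build()

// Write through HWC, which supports Hive full-ACID (transactional)
// tables, instead of the native insertInto, which does not.
dataframe.write
  .format(HiveWarehouseSession.HIVE_WAREHOUSE_CONNECTOR)
  .mode("append")
  .option("table", "g_interimc.grpxm31")
  .save()
```

Alternatively, if transactional semantics are not actually needed, recreating the table without `'transactional'='true'` (a plain ORC table) lets the original `insertInto` call work unchanged.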

