apache-spark - Unable to write data to an internal (managed) table in Hive
Problem description
I am trying to write data from a Spark DataFrame into a Hive internal table using Spark 2.3. The table was created as follows:
CREATE TABLE `g_interimc.grpxm31`(
  `gr98p_cf` bigint,
  `gr98p_cp` decimal(11,0),
  `grp98mmmb` string,
  `grp98oob` string,
  `srccd` string,
  `gp_n` string)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
  'hdfs://gwhdnha/mnoo1/raw/cat/eilkls/g_interimc/grpxm31'
TBLPROPERTIES (
  'bucketing_version'='2',
  'transactional'='true',
  'transactional_properties'='default',
  ...
dataframe.write.mode("overwrite").insertInto("g_interimc.grpxm31")
Exception in thread "main" org.apache.spark.sql.AnalysisException: Spark has no access to table `g_interimc`.`grpxm31`. Clients can access this table only if they have the following capabilities: CONNECTORREAD,HIVEFULLACIDREAD,HIVEFULLACIDWRITE,HIVEMANAGESTATS,HIVECACHEINVALIDATE,CONNECTORWRITE. This table may be a Hive-managed ACID table, or require some other capability that Spark currently does not implement;
    at org.apache.spark.sql.catalyst.catalog.CatalogUtils$.throwIfNoAccess(ExternalCatalogUtils.scala:280)
    at org.apache.spark.sql.catalyst.catalog.CatalogUtils$.throwIfRO(ExternalCatalogUtils.scala:297)
    at org.apache.spark.sql.hive.HiveTranslationLayerCheck$$anonfun$apply$1.applyOrElse(HiveTranslationLayerStrategies.scala:93)
    at org.apache.spark.sql.hive.HiveTranslationLayerCheck$$anonfun$apply$1.applyOrElse(HiveTranslationLayerStrategies.scala:85)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:288)
    at org.apache.spark.sql.hive.HiveTranslationLayerCheck.apply(HiveTranslationLayerStrategies.scala:85)
    at org.apache.spark.sql.hive.HiveTranslationLayerCheck.apply(HiveTranslationLayerStrategies.scala:83)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)
    at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
I am trying to write to this Hive internal table from a Spark DataFrame with Spark 2.3; the Hive version is 3.1.0.3...
At the end of the Spark job I also need to delete data from the Hive table (delete the rows only, not drop the table).
Solution
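The table has 'transactional'='true', so it is a Hive 3 full-ACID managed table, and Spark 2.3's built-in Hive integration does not implement the ACID capabilities named in the error (HIVEFULLACIDREAD, HIVEFULLACIDWRITE, etc.). On HDP 3.x / Hive 3 clusters the usual route is the Hive Warehouse Connector (HWC), which delegates reads/writes to HiveServer2. A minimal sketch, assuming the HWC assembly jar is on the Spark classpath and the HWC JDBC/session configs are set for your cluster (exact configs and jar versions vary by distribution):

```scala
import com.hortonworks.hwc.HiveWarehouseSession

// Build an HWC session on top of the existing SparkSession.
val hive = HiveWarehouseSession.session(spark).build()

// ACID DELETE must run inside Hive, not Spark, so issue it through HWC:
hive.executeUpdate("DELETE FROM g_interimc.grpxm31")

// Write the DataFrame through the connector instead of insertInto:
dataframe.write
  .format(HiveWarehouseSession.HIVE_WAREHOUSE_CONNECTOR)
  .option("table", "g_interimc.grpxm31")
  .mode("append")
  .save()
```

Alternatively, if HWC is not available, a common workaround is to make the target a non-transactional (external) ORC table, which Spark 2.3 can write to directly with `insertInto`.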