Connecting to a remote HBase from Spark Scala

Problem description

I have configured Hadoop and Spark on my Windows machine (my local machine), and I have Cloudera set up in a VM (on the same machine) which runs HBase.
I am trying to use Spark Streaming to extract data and put it into HBase inside the VM.

Is it possible to do this?

What I have tried:

package hbase

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.ConnectionFactory

object Connect {

  def main(args: Array[String]): Unit = {
    val tablename = "Acadgild_spark_Hbase" // target table (not used below yet)

    // Point the client at the ZooKeeper quorum running inside the VM
    val hbaseConf = HBaseConfiguration.create()
    hbaseConf.set("hbase.zookeeper.quorum", "192.168.117.133")
    hbaseConf.set("hbase.zookeeper.property.clientPort", "2181")

    val connection = ConnectionFactory.createConnection(hbaseConf)
    val admin = connection.getAdmin()

    // List the existing tables as a connectivity check
    val listtables = admin.listTables()
    listtables.foreach(println)

    connection.close()
  }
}

Error:

18/08/08 21:05:09 INFO ZooKeeper: Initiating client connection, connectString=192.168.117.133:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$13/1357491107@12d1bfb1
18/08/08 21:05:15 INFO ClientCnxn: Opening socket connection to server 192.168.117.133/192.168.117.133:2181. Will not attempt to authenticate using SASL (unknown error)
18/08/08 21:05:15 INFO ClientCnxn: Socket connection established to 192.168.117.133/192.168.117.133:2181, initiating session
18/08/08 21:05:15 INFO ClientCnxn: Session establishment complete on server 192.168.117.133/192.168.117.133:2181, sessionid = 0x16518f57f950012, negotiated timeout = 40000
18/08/08 21:05:16 WARN ConnectionUtils: Can not resolve quickstart.cloudera, please check your network
java.net.UnknownHostException: quickstart.cloudera
    at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
    at java.net.InetAddress$2.lookupAllHostAddr(Unknown Source)
    at java.net.InetAddress.getAddressesFromNameService(Unknown Source)
    at java.net.InetAddress.getAllByName0(Unknown Source)
    at java.net.InetAddress.getAllByName(Unknown Source)
    at java.net.InetAddress.getAllByName(Unknown Source)
    at java.net.InetAddress.getByName(Unknown Source)
    at org.apache.hadoop.hbase.client.ConnectionUtils.getStubKey(ConnectionUtils.java:233)
    at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStubNoRetries(ConnectionImplementation.java:1126)
    at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionImplementation.java:1148)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1213)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1202)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57)
    at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3055)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3047)
    at org.apache.hadoop.hbase.client.HBaseAdmin.listTables(HBaseAdmin.java:460)
    at org.apache.hadoop.hbase.client.HBaseAdmin.listTables(HBaseAdmin.java:444)
    at azure.iothub$.main(iothub.scala:35)
    at azure.iothub.main(iothub.scala)

Tags: scala, apache-spark, hadoop, hbase, spark-streaming

Solution


Based on this error, you cannot use quickstart.cloudera in your code: the network stack is trying to reach it via DNS, and your external router knows nothing about your VM.


You need to use localhost instead, and then make sure the VM is configured correctly to expose the ports you need to connect to.

However, I believe ZooKeeper is the one returning that hostname to your code, so you will have to edit the hosts file on your host OS machine and add an entry.

For example:

127.0.0.1 localhost quickstart.cloudera
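You can sanity-check the hosts-file fix from the JVM side before involving HBase at all: the stack trace originates in `InetAddress.getByName`, so it is enough to verify that the client machine can resolve the name. A minimal sketch using only the standard library (no HBase dependencies):

```scala
import java.net.{InetAddress, UnknownHostException}

object ResolveCheck {
  // Returns Some(ip) if the OS resolver (DNS or the hosts file) knows the name
  def resolve(host: String): Option[String] =
    try Some(InetAddress.getByName(host).getHostAddress)
    catch { case _: UnknownHostException => None }

  def main(args: Array[String]): Unit = {
    // "localhost" resolves on any machine with a normal hosts file
    println(resolve("localhost"))
    // Stays None until a hosts entry for the VM's hostname is added
    println(resolve("quickstart.cloudera"))
  }
}
```

If `resolve("quickstart.cloudera")` returns an address, the HBase client should get past the `UnknownHostException` shown above.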

Alternatively, you can poke around in zookeeper-shell, or in Cloudera Manager (under the HBase configuration), and change things so that quickstart.cloudera resolves to the address 192.168.117.133.
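Once quickstart.cloudera resolves, the original listTables() call should succeed, and writing rows (the asker's end goal) follows the same connection pattern. The following is a minimal sketch, assuming hbase-client is on the classpath and the cluster in the VM is reachable; the table name Acadgild_spark_Hbase comes from the question, while the column family cf, row key, and cell contents are illustrative assumptions:

```scala
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
import org.apache.hadoop.hbase.util.Bytes

object WriteRow {
  def main(args: Array[String]): Unit = {
    val conf = HBaseConfiguration.create()
    conf.set("hbase.zookeeper.quorum", "192.168.117.133")
    conf.set("hbase.zookeeper.property.clientPort", "2181")

    val connection = ConnectionFactory.createConnection(conf)
    try {
      // Table name from the question; "cf" is an assumed column family
      // that must already exist on the table
      val table = connection.getTable(TableName.valueOf("Acadgild_spark_Hbase"))
      val put = new Put(Bytes.toBytes("row1"))
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col1"), Bytes.toBytes("value1"))
      table.put(put)
      table.close()
    } finally {
      connection.close()
    }
  }
}
```

In an actual Spark Streaming job, the connection would typically be created inside foreachRDD/foreachPartition on the executors rather than once on the driver, since HBase connections are not serializable.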
