Kerberos principal does not have the expected format

Problem description

I am trying to deploy a JanusGraph server on my big data platform, which supports Kerberos authentication.

First, JanusGraph accesses ZooKeeper and authenticates successfully. But a problem occurs when it tries to connect to HBase, as shown below:

Connecting to FusionInsight2/172.16.250.11:21302
1751 [hconnection-0x29a0cdb-shared--pool1-t1] DEBUG org.apache.hadoop.security.UserGroupInformation  - PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
1753 [hconnection-0x29a0cdb-shared--pool1-t1] DEBUG org.apache.hadoop.hbase.security.HBaseSaslRpcClient  - Creating SASL GSSAPI client. Server's Kerberos principal name is developuser@HADOOP.COM
1753 [hconnection-0x29a0cdb-shared--pool1-t1] DEBUG org.apache.hadoop.security.UserGroupInformation  - PrivilegedActionException as:root (auth:SIMPLE) cause:java.io.IOException: Kerberos principal does not have the expected format: developuser@HADOOP.COM

According to this log, my principal format is wrong: the client expects a principal of the form username/hostname@realm.
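For illustration only (this is not HBase's actual code), a small sketch of the kind of check that produces this error: the configured server principal is split on `/` and `@`, and anything that does not yield exactly three parts (service, hostname, realm) is rejected.

```python
import re

def split_kerberos_name(full_name):
    """Split a Kerberos principal on '/' and '@'."""
    return re.split(r"[/@]", full_name)

def check_server_principal(principal):
    """Require a three-part service principal: service/hostname@REALM.
    Anything else triggers the 'does not have the expected format' error."""
    parts = split_kerberos_name(principal)
    if len(parts) != 3:
        raise ValueError(
            "Kerberos principal does not have the expected format: " + principal)
    service, host, realm = parts
    return service, host, realm

# A two-part user principal such as developuser@HADOOP.COM fails this check,
# while a service principal such as hbase/hadoop.hadoop.com@HADOOP.COM passes.
```

This matches the behavior in the log: developuser@HADOOP.COM has only two parts, so the SASL client setup fails before any ticket is even used.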

But when I run the klist command as the root user on my server, the result is as follows:

[root@localhost bin]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: developuser@HADOOP.COM

Valid starting       Expires              Service principal
08/08/2018 16:14:15  08/09/2018 16:14:15  krbtgt/HADOOP.COM@HADOOP.COM

My default principal's format differs from the expected one, so I am confused about whether I should use this default principal when connecting to HBase.

Here is my configuration file:

gremlin.graph=org.janusgraph.core.JanusGraphFactory

#HBASE CONFIGURATIONS
storage.backend=hbase
storage.hostname=host1,host2,host3

storage.hbase.ext.hbase.security.authentication=kerberos
storage.hbase.ext.hbase.regionserver.kerberos.principal=developuser@HADOOP.COM

storage.hbase.ext.hbase.zookeeper.property.clientPort=24002
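As a hedged sketch only: the `hbase.regionserver.kerberos.principal` setting names the *server's* principal, which must be in three-part service/hostname@REALM form. Based on the hbase-site.xml shown further down, the client-side value would look something like the following (values mirror this cluster and may need adjusting; `_HOST` is a common Hadoop-client placeholder that is substituted with each server's actual hostname at connect time):

```properties
# Sketch, not a verified fix: server-side principals in service/host@REALM form.
storage.hbase.ext.hbase.regionserver.kerberos.principal=hbase/_HOST@HADOOP.COM
storage.hbase.ext.hbase.master.kerberos.principal=hbase/_HOST@HADOOP.COM
```

Note this setting describes who the client expects to talk *to*, not who the client logs in *as*; the client's own identity still comes from its ticket cache or keytab.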

Another interesting point is that I can connect to the HBase shell with the hbase shell command. Here is the console output:

[root@localhost bin]# hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/client/HBase/hbase/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/client/HDFS/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Debug is  true storeKey false useTicketCache true useKeyTab false doNotPrompt false ticketCache is null isInitiator true KeyTab is null refreshKrb5Config is false principal is null tryFirstPass is false useFirstPass is false storePass is false clearPass is false
Acquire TGT from Cache
Principal is developuser@HADOOP.COM
Commit Succeeded 

Debug is  true storeKey false useTicketCache true useKeyTab false doNotPrompt false ticketCache is null isInitiator true KeyTab is null refreshKrb5Config is false principal is null tryFirstPass is false useFirstPass is false storePass is false clearPass is false
Acquire TGT from Cache
Principal is developuser@HADOOP.COM
Commit Succeeded 

2018-08-09 08:26:27,592 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-08-09 08:26:27,606 WARN  [main] shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
INFO: Watching file:/opt/client/HBase/hbase/conf/log4j.properties for changes with interval : 60000
2018-08-09 08:26:28,016 INFO  [main] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5b251fb9 connecting to ZooKeeper ensemble=fusioninsight3:24002,fusioninsight2:24002,fusioninsight1:24002
2018-08-09 08:26:28,016 INFO  [main] zookeeper.ZooKeeper: Initiating client connection, connectString=fusioninsight3:24002,fusioninsight2:24002,fusioninsight1:24002 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@230a73f2
2018-08-09 08:26:28,018 INFO  [main] zookeeper.ClientCnxn: zookeeper.request.timeout is not configured. Using default value 120000.
2018-08-09 08:26:28,018 INFO  [main] zookeeper.ClientCnxn: zookeeper.client.bind.port.range is not configured.
2018-08-09 08:26:28,018 INFO  [main] zookeeper.ClientCnxn: zookeeper.client.bind.address is not configured.
2018-08-09 08:26:28,018 INFO  [main-SendThread(fusioninsight1:24002)] client.FourLetterWordMain: connecting to fusioninsight1 24002
2018-08-09 08:26:28,020 INFO  [main-SendThread(fusioninsight1:24002)] zookeeper.ClientCnxn: Got server principal from the server and it is zookeeper/hadoop.hadoop.com
2018-08-09 08:26:28,020 INFO  [main-SendThread(fusioninsight1:24002)] zookeeper.ClientCnxn: Using server principal zookeeper/hadoop.hadoop.com
Debug is  true storeKey false useTicketCache true useKeyTab false doNotPrompt false ticketCache is null isInitiator true KeyTab is null refreshKrb5Config is false principal is null tryFirstPass is false useFirstPass is false storePass is false clearPass is false
Acquire TGT from Cache
Principal is developuser@HADOOP.COM
Commit Succeeded 

2018-08-09 08:26:28,021 INFO  [main-SendThread(fusioninsight1:24002)] zookeeper.Login: successfully logged in.
2018-08-09 08:26:28,021 INFO  [Thread-8] zookeeper.Login: TGT refresh thread started.
2018-08-09 08:26:28,021 INFO  [main-SendThread(fusioninsight1:24002)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2018-08-09 08:26:28,022 INFO  [Thread-8] zookeeper.Login: TGT valid starting at:        Wed Aug 08 16:14:15 EET 2018
2018-08-09 08:26:28,022 INFO  [Thread-8] zookeeper.Login: TGT expires:                  Thu Aug 09 16:14:15 EET 2018
2018-08-09 08:26:28,022 INFO  [Thread-8] zookeeper.Login: TGT refresh sleeping until: Thu Aug 09 11:27:45 EET 2018
2018-08-09 08:26:28,022 INFO  [main-SendThread(fusioninsight1:24002)] zookeeper.ClientCnxn: Opening socket connection to server fusioninsight1/172.16.250.10:24002. Will attempt to SASL-authenticate using Login Context section 'Client'
2018-08-09 08:26:28,023 INFO  [main-SendThread(fusioninsight1:24002)] zookeeper.ClientCnxn: Socket connection established, initiating session, client: /172.16.235.1:53950, server: fusioninsight1/172.16.250.10:24002
2018-08-09 08:26:28,024 INFO  [main-SendThread(fusioninsight1:24002)] zookeeper.ClientCnxn: Session establishment complete on server fusioninsight1/172.16.250.10:24002, sessionid = 0x1200000a81d68c48, negotiated timeout = 90000
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.3.1, rUnknown, Fri May 18 09:54:11 CST 2018

hbase(main):001:0> 

I also checked the hbase-site.xml file, but the principal there is different:

<property>
  <name>hbase.regionserver.kerberos.principal</name>
  <value>hbase/hadoop.hadoop.com@HADOOP.COM</value>
</property>

When I use this principal in my configuration file, it throws an invalid-credentials exception.
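The "No valid credentials provided" failure suggests the JVM is not picking up the root user's ticket cache when JanusGraph creates the connection. One possibility, sketched here with heavy caveats, is to have the client log in from a keytab instead. The key names below come from HBase's AuthUtil client-login support; whether JanusGraph's `storage.hbase.ext` passthrough triggers that login on this HBase 1.3 cluster is an assumption, and the keytab path is hypothetical:

```properties
# Hedged sketch: client identity from a keytab instead of the ticket cache.
# Key names from HBase's AuthUtil; the keytab path is hypothetical.
storage.hbase.ext.hbase.client.kerberos.principal=developuser@HADOOP.COM
storage.hbase.ext.hbase.client.keytab.file=/etc/security/keytabs/developuser.keytab
```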

So, do you have any ideas or suggestions for solving this problem? If you have any thoughts, please share them with me.

Update 1

If I change the principal to the region server's principal, I get this invalid-credentials exception:

1584 [main-SendThread(hadoop.hadoop.com:24002)] DEBUG org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn  - Reading reply sessionid:0x14000537d430a17b, packet:: clientPath:null serverPath:null finished:false header:: 5,4  replyHeader:: 5,17180208329,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a32313330327214ffffffc9fffffffa784b69650425546a1bae467573696f6e496e73696768743210ffffffb6ffffffa6118ffffffaaffffffb5ffffffebffffffc5ffffffd12c100183,s{17180136443,17180136443,1533718229066,1533718229066,0,0,0,0,68,0,17180136443} 
1592 [main-SendThread(hadoop.hadoop.com:24002)] DEBUG org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn  - Reading reply sessionid:0x14000537d430a17b, packet:: clientPath:null serverPath:null finished:false header:: 6,8  replyHeader:: 6,17180208329,0  request:: '/hbase,F  response:: v{'replication,'meta-region-server,'rs,'splitWAL,'backup-masters,'table-lock,'flush-table-proc,'region-in-transition,'online-snapshot,'acl,'switch,'master,'running,'recovering-regions,'tokenauth,'draining,'namespace,'hbaseid,'table} 
1741 [hconnection-0x29a0cdb-shared--pool1-t1] DEBUG org.apache.hadoop.hbase.security.token.AuthenticationTokenSelector  - No matching token found
1742 [hconnection-0x29a0cdb-shared--pool1-t1] DEBUG org.apache.hadoop.hbase.ipc.RpcClientImpl  - RPC Server Kerberos principal name for service=ClientService is hbase/hadoop.hadoop.com@HADOOP.COM
1742 [hconnection-0x29a0cdb-shared--pool1-t1] DEBUG org.apache.hadoop.hbase.ipc.RpcClientImpl  - Use KERBEROS authentication for service ClientService, sasl=true
1757 [hconnection-0x29a0cdb-shared--pool1-t1] DEBUG org.apache.hadoop.hbase.ipc.RpcClientImpl  - Connecting to FusionInsight2/172.16.250.11:21302
1761 [hconnection-0x29a0cdb-shared--pool1-t1] DEBUG org.apache.hadoop.security.UserGroupInformation  - PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
1763 [hconnection-0x29a0cdb-shared--pool1-t1] DEBUG org.apache.hadoop.hbase.security.HBaseSaslRpcClient  - Creating SASL GSSAPI client. Server's Kerberos principal name is hbase/hadoop.hadoop.com@HADOOP.COM
1764 [hconnection-0x29a0cdb-shared--pool1-t1] DEBUG org.apache.hadoop.security.UserGroupInformation  - PrivilegedActionException as:root (auth:SIMPLE) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
1765 [hconnection-0x29a0cdb-shared--pool1-t1] DEBUG org.apache.hadoop.security.UserGroupInformation  - PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.handleSaslConnectionFailure(RpcClientImpl.java:643)
1766 [hconnection-0x29a0cdb-shared--pool1-t1] WARN  org.apache.hadoop.hbase.ipc.RpcClientImpl  - Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
1766 [hconnection-0x29a0cdb-shared--pool1-t1] ERROR org.apache.hadoop.hbase.ipc.RpcClientImpl  - SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
        at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
        at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:400)
        at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:204)
        at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:65)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:364)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:338)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
        at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
        at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
        at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
        at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
        at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
        ... 26 more

Thank you.

Tags: hbase, kerberos, janusgraph
