HiveServer2: Thrift SASL related exception when using custom PasswdAuthenticationProvider

I've created a custom implementation of the PasswdAuthenticationProvider interface, based on OAuth2. I think the code is irrelevant to the problem I'm experiencing; nevertheless, it can be found here.
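
For reference, such a provider just implements Hive's org.apache.hive.service.auth.PasswdAuthenticationProvider interface. A minimal sketch of what the class looks like (this is not the actual linked code; validateToken() is a hypothetical placeholder for the HTTP call to the OAuth2 server):

package com.telefonica.iot.cosmos.hive.authprovider;

import javax.security.sasl.AuthenticationException;
import org.apache.hive.service.auth.PasswdAuthenticationProvider;

public class OAuth2AuthenticationProviderImpl implements PasswdAuthenticationProvider {

    @Override
    public void Authenticate(String user, String token) throws AuthenticationException {
        // The "password" sent by the JDBC client is treated as an OAuth2 access token
        if (!validateToken(user, token)) {
            throw new AuthenticationException("Invalid OAuth2 token for user " + user);
        }
    }

    private boolean validateToken(String user, String token) {
        // Hypothetical placeholder: GET https://account.lab.fiware.org/user?access_token=<token>
        // and check that the returned id matches the given user
        return true;
    }
}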

I've configured hive-site.xml with the following properties:

<property>
   <name>hive.server2.authentication</name>
   <value>CUSTOM</value>
</property>
<property>
   <name>hive.server2.custom.authentication.class</name>
   <value>com.telefonica.iot.cosmos.hive.authprovider.OAuth2AuthenticationProviderImpl</value>
</property>
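
With CUSTOM authentication the JDBC client still connects over the plain SASL transport with a user name and password; in this setup the password field presumably carries the OAuth2 access token. A minimal client sketch (the host name is the one appearing later in the question; the token is a placeholder):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcClient {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // user = FIWARE account id, password = OAuth2 access token (placeholder)
        Connection con = DriverManager.getConnection(
            "jdbc:hive2://dev-fiwr-bignode-11.hi.inet:10000/default", "frb", "<access_token>");
        Statement stmt = con.createStatement();
        ResultSet res = stmt.executeQuery("SHOW TABLES");
        while (res.next()) {
            System.out.println(res.getString(1));
        }
        con.close();
    }
}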

Then I restarted the Hive service and successfully connected a JDBC-based remote client. This is an example of a successful run, taken from /var/log/hive/hiveserver2.log:

2016-02-01 11:52:44,515 INFO  [pool-5-thread-5]: authprovider.HttpClientFactory (HttpClientFactory.java:<init>(66)) - Setting max total connections (500)
2016-02-01 11:52:44,515 INFO  [pool-5-thread-5]: authprovider.HttpClientFactory (HttpClientFactory.java:<init>(67)) - Setting default max connections per route (100)
2016-02-01 11:52:44,799 INFO  [pool-5-thread-5]: authprovider.HttpClientFactory (OAuth2AuthenticationProviderImpl.java:Authenticate(65)) - Doing request: GET https://account.lab.fiware.org/user?access_token=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx HTTP/1.1
2016-02-01 11:52:44,800 INFO  [pool-5-thread-5]: authprovider.HttpClientFactory (OAuth2AuthenticationProviderImpl.java:Authenticate(76)) - Response received: {"organizations": [], "displayName": "frb", "roles": [{"name": "provider", "id": "106"}], "app_id": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", "email": "[email protected]", "id": "frb"}
2016-02-01 11:52:44,801 INFO  [pool-5-thread-5]: authprovider.HttpClientFactory (OAuth2AuthenticationProviderImpl.java:Authenticate(104)) - User frb authenticated
2016-02-01 11:52:44,868 INFO  [pool-5-thread-5]: thrift.ThriftCLIService (ThriftCLIService.java:OpenSession(188)) - Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V6
2016-02-01 11:52:44,871 INFO  [pool-5-thread-5]: session.SessionState (SessionState.java:start(358)) - No Tez session required at this point. hive.execution.engine=mr.
2016-02-01 11:52:44,873 INFO  [pool-5-thread-5]: session.SessionState (SessionState.java:start(358)) - No Tez session required at this point. hive.execution.engine=mr.

The problem is that the following error then appears recurrently:

2016-02-01 11:52:48,227 ERROR [pool-5-thread-4]: server.TThreadPoolServer (TThreadPoolServer.java:run(215)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException
    at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:189)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
    at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
    at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
    at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
    at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
    at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
    ... 4 more
2016-02-01 11:53:18,323 ERROR [pool-5-thread-5]: server.TThreadPoolServer (TThreadPoolServer.java:run(215)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException
    at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:189)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
    at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
    at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
    at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
    at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
    at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
    ... 4 more

Why? I've seen in several other questions that this occurs when hive.server2.authentication has its default value, i.e. SASL, and the client is not doing the handshake. But in my case the value of that property is CUSTOM. I cannot understand it, and any help would be really appreciated.

EDIT 1

I've found there are periodic requests to the HiveServer2... from the HiveServer2 machine itself! These are the requests that result in the Thrift SASL errors:

$ sudo tcpdump -i lo port 10000
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on lo, link-type EN10MB (Ethernet), capture size 65535 bytes
...
...
10:18:48.183469 IP dev-fiwr-bignode-11.hi.inet.ndmp > dev-fiwr-bignode-11.hi.inet.55758: Flags [.], ack 7, win 512, options [nop,nop,TS val 1034162147 ecr 1034162107], length 0
^C
21 packets captured
42 packets received by filter
0 packets dropped by kernel
[fiware-portal@dev-fiwr-bignode-11 ~]$ sudo netstat -nap | grep 55758
tcp        0      0 10.95.76.91:10000           10.95.76.91:55758           CLOSE_WAIT  7190/java           
tcp        0      0 10.95.76.91:55758           10.95.76.91:10000           FIN_WAIT2   -                   
[fiware-portal@dev-fiwr-bignode-11 ~]$ ps -ef | grep 7190
hive      7190     1  1 10:10 ?        00:00:10 /usr/java/jdk1.7.0_71//bin/java -Xmx1024m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/lib/hadoop/lib/native/Linux-amd64-64:/usr/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx1024m -Xmx4096m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/lib/hive/lib/hive-service-0.13.0.2.1.7.0-784.jar org.apache.hive.service.server.HiveServer2 -hiveconf hive.metastore.uris=" " -hiveconf hive.log.file=hiveserver2.log -hiveconf hive.log.dir=/var/log/hive
1011     14158 12305  0 10:19 pts/1    00:00:00 grep 7190

Any idea?

EDIT 2

More research about the connections sent from the HiveServer2 machine to HiveServer2. The data packets always carry the same 5 bytes (in hexadecimal): 22 41 30 30 31, which decode to the ASCII characters "A001 (a double quote followed by A001) and thus do not look like a Thrift SASL handshake at all.
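
The payload can be dumped with a command along these lines (a hypothetical invocation; -X prints each packet in hex and ASCII, -s 0 captures the full packet):

sudo tcpdump -i lo -X -s 0 port 10000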

Any idea about these connections?

Bobble asked 1/2, 2016 at 11:17

I finally "fixed" this. Since the message was sent by the Ambari agent running on the HiveServer2 machine (some kind of weird ping), I simply added an iptables rule blocking all connections to TCP port 10000 on the loopback interface:

iptables -A INPUT -i lo -p tcp --dport 10000 -j DROP

Of course, Ambari now warns that HiveServer2 is not alive (the pings are dropped). And the above rule must be removed if I want to restart the server from Ambari (there is another aliveness check in the start script); after the restart I can enable the rule again. Well, I can live with that.
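
In practice that means toggling the rule around every restart triggered from Ambari, along these lines (a sketch using the same rule as above):

# before restarting HiveServer2 from Ambari: remove the rule so the aliveness check passes
iptables -D INPUT -i lo -p tcp --dport 10000 -j DROP
# ... restart HiveServer2 from Ambari ...
# after the restart: put the rule back
iptables -A INPUT -i lo -p tcp --dport 10000 -j DROP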

Bobble answered 14/4, 2016 at 5:55
