Great find. I agree with upgrading storm-hive to a newer hive. Maybe even overdue. Would be great if you can provide the PRs. Thanks
Roshan 

On Tuesday, June 12, 2018, 3:47 AM, Abhishek Raj <[EMAIL PROTECTED]> wrote:

Circling back to this, I was able to figure out what the problem was. storm-hive's dependencies are compiled against hive version 0.14.0, which is very old. We are using hive 2.3.2, so obviously there were a lot of differences between the two versions. The fix was to override storm-hive's hive dependencies in our pom with a newer version. To be more verbose, we copied the following into our pom:

    <dependency>
      <groupId>org.apache.hive.hcatalog</groupId>
      <artifactId>hive-hcatalog-streaming</artifactId>
      <version>2.3.2</version>
      <exclusions>
        <exclusion>
          <groupId>org.slf4j</groupId>
          <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
        <exclusion>
          <groupId>org.apache.calcite</groupId>
          <artifactId>calcite-core</artifactId>
        </exclusion>
        <exclusion>
          <groupId>org.apache.calcite</groupId>
          <artifactId>calcite-avatica</artifactId>
        </exclusion>
      </exclusions>
    </dependency>

    <dependency>
      <groupId>org.apache.hive.hcatalog</groupId>
      <artifactId>hive-hcatalog-core</artifactId>
      <version>2.3.2</version>
      <exclusions>
        <exclusion>
          <groupId>org.slf4j</groupId>
          <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
        <exclusion>
          <groupId>org.apache.calcite</groupId>
          <artifactId>calcite-avatica</artifactId>
        </exclusion>
        <exclusion>
          <groupId>org.apache.calcite</groupId>
          <artifactId>calcite-core</artifactId>
        </exclusion>
      </exclusions>
    </dependency>

    <dependency>
      <groupId>org.apache.hive</groupId>
      <artifactId>hive-cli</artifactId>
      <version>2.3.2</version>
      <exclusions>
        <exclusion>
          <groupId>org.slf4j</groupId>
          <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
        <exclusion>
          <groupId>org.apache.calcite</groupId>
          <artifactId>calcite-core</artifactId>
        </exclusion>
        <exclusion>
          <groupId>org.apache.calcite</groupId>
          <artifactId>calcite-avatica</artifactId>
        </exclusion>
      </exclusions>
    </dependency>

To go a little deeper into the problem, this is what changed between the two hive versions, giving us the "Unexpected DataOperationType: UNSET" error. createLockRequest in 2.3.2 explicitly passes an operation type of "INSERT" while acquiring a lock, but in 0.14.0 no operation type is passed. So the operation type ends up defaulting to UNSET and the metastore throws an error. This is the commit where the change occurred.
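To make the behavior change concrete, here is a minimal, self-contained sketch of the mechanism. Note these are illustrative names, not the actual Hive source: the metastore-side lock handler rejects any lock component whose operation type was never set, so a 0.14.0-era client that omits the type fails against a 2.x metastore.

```java
// Illustrative model of the lock behavior change (hypothetical names, not Hive source).
public class LockTypeDemo {
    enum DataOperationType { UNSET, SELECT, INSERT, UPDATE, DELETE }

    static class LockComponent {
        // The default is UNSET, which is what an old client effectively sends.
        DataOperationType operationType = DataOperationType.UNSET;

        LockComponent withOperationType(DataOperationType t) {
            this.operationType = t;
            return this;
        }
    }

    // Mimics the metastore-side check that produced the error in this thread.
    static String enqueueLock(LockComponent c) {
        if (c.operationType == DataOperationType.UNSET) {
            throw new IllegalStateException("Unexpected DataOperationType: UNSET");
        }
        return "LOCK_ACQUIRED";
    }

    public static void main(String[] args) {
        // Old client behavior (hive 0.14.0): type never set, so the lock is rejected.
        try {
            enqueueLock(new LockComponent());
        } catch (IllegalStateException e) {
            System.out.println("old client: " + e.getMessage());
        }
        // New client behavior (hive 2.3.2): INSERT is set explicitly, so it succeeds.
        System.out.println("new client: "
                + enqueueLock(new LockComponent().withOperationType(DataOperationType.INSERT)));
    }
}
```

Running the sketch reproduces the shape of the failure: the unset component throws the same "Unexpected DataOperationType: UNSET" message, while the component with INSERT set acquires the lock.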
Given that there are several different threads with users facing the same issue, imho storm-hive's hive dependencies should be updated to newer hive releases, and there should be a way for users to explicitly specify which hive release they want to use storm-hive with. The storm-hive documentation should also be updated to reflect this requirement.
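As a sketch of what "explicitly specify which hive release" could look like on the user side, the three overrides above could be driven by a single Maven property. The property name here is illustrative, not something storm-hive currently defines:

```xml
<properties>
  <!-- Illustrative property; set it to the Hive release on your cluster. -->
  <hive.version>2.3.2</hive.version>
</properties>

<dependency>
  <groupId>org.apache.hive.hcatalog</groupId>
  <artifactId>hive-hcatalog-streaming</artifactId>
  <version>${hive.version}</version>
</dependency>
```

The same `${hive.version}` would then be repeated for hive-hcatalog-core and hive-cli, so all three stay on one release.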
Happy to provide PRs if that sounds like a good idea.
Thanks.
On Fri, Jun 8, 2018 at 3:21 PM, Abhishek Raj <[EMAIL PROTECTED]> wrote:

Hi. We faced a similar problem earlier when trying HiveBolt in Storm with Hive on EMR. We were seeing

java.lang.IllegalStateException: Unexpected DataOperationType: UNSET agentInfo=Unknown txnid:130551

in hive logs. Any help here would be appreciated. 

On Fri, Jun 8, 2018 at 10:26 AM, Milind Vaidya <[EMAIL PROTECTED]> wrote:

Here are some details from the metastore logs:
2018-06-08T03:34:20,634 ERROR [pool-13-thread-197([])]: metastore.RetryingHMSHandler (RetryingHMSHandler.java:invokeInternal(204)) - java.lang.IllegalStateException: Unexpected DataOperationType: UNSET agentInfo=Unknown txnid:130551
    at org.apache.hadoop.hive.metastore.txn.TxnHandler.enqueueLockWithRetry(TxnHandler.java:1000)
    at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:872)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.lock(HiveMetaStore.java:6366)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
    at com.sun.proxy.$Proxy32.lock(Unknown Source)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$lock.getResult(ThriftHiveMetastore.java:14155)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$lock.getResult(ThriftHiveMetastore.java:14139)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
    at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
    at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Here are some details about the environment
Source :
Storm Topo