Hive execution failure: org.apache.hadoop.hdfs.BlockMissingException

2024/5/20 1:29:53 · Tags: hive, hadoop, hdfs

Today a Hive job failed during execution with the following error:

Caused by: org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-2040810143-192.168.144.145-1612269795515:blk_1077591653_3851069 file=/hbase/data/default/cycle_middle_data/c4cc4a321e4779c75d810ba0698079c3/info/395d0f3895cf407b88921f9cbc8a54de
	at org.apache.hadoop.hdfs.DFSInputStream.refetchLocations(DFSInputStream.java:880)
	at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:863)
	at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:842)
	at org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:998)
	at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1361)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1325)
	at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:93)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock.positionalReadWithExtra(HFileBlock.java:808)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1568)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1772)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1597)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
	at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:340)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:852)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.reseekTo(HFileReaderImpl.java:833)
	at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:347)
	at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:256)
	... 19 more

	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
	at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
	at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
	at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:677)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:612)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:529)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:491)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	... 1 more


FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

This exception means the client could not obtain a data block from HDFS, so I checked the affected file with fsck:

hdfs fsck /hbase/data/default/cycle_middle_data/c4cc4a321e4779c75d810ba0698079c3/info/395d0f3895cf407b88921f9cbc8a54de

fsck reported the file as corrupt; the blocks were probably damaged when additional disks were installed on the cluster.
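
If you need more detail about which blocks are missing and where their replicas should live, fsck can be run with its standard verbose flags (a diagnostic sketch; substitute your own path):

# List every file with missing or corrupt blocks under /hbase
hdfs fsck /hbase -list-corruptfileblocks

# Show block IDs, sizes, and DataNode locations for the affected file
hdfs fsck /hbase/data/default/cycle_middle_data/c4cc4a321e4779c75d810ba0698079c3/info/395d0f3895cf407b88921f9cbc8a54de -files -blocks -locations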


I then deleted the corrupted data with fsck:

hdfs fsck /hbase/data/default/cycle_middle_data/c4cc4a321e4779c75d810ba0698079c3/info/395d0f3895cf407b88921f9cbc8a54de -delete 
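
Note that -delete permanently removes every corrupt file under the given path. If you would rather keep the damaged files around for inspection, fsck's -move option sidelines them to /lost+found instead (both are standard fsck options). Either way, re-running fsck afterwards should report the path as healthy:

# Alternative to -delete: move corrupt files to /lost+found instead of removing them
hdfs fsck /hbase/data/default/cycle_middle_data/c4cc4a321e4779c75d810ba0698079c3/info -move

# Confirm the filesystem is clean again; the summary should end with "is HEALTHY"
hdfs fsck /hbase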

Running the Hive job again still failed, this time because the underlying HBase store file could no longer be found:

Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File does not exist: /hbase/data/default/cycle_middle_data/c4cc4a321e4779c75d810ba0698079c3/info/395d0f3895cf407b88921f9cbc8a54de
	at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:85)
	at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:75)
	at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:152)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1909)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:735)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:415)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
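
This is expected: fsck -delete removed the store file (HFile) from HDFS, but the HBase region metadata still references it, so every read through the Hive-HBase mapping now fails. A quick check with the path from the error above confirms the file is gone:

hdfs dfs -ls /hbase/data/default/cycle_middle_data/c4cc4a321e4779c75d810ba0698079c3/info/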

Truncating the affected HBase table (from the HBase shell) finally fixed the problem:

truncate 'table_name'
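
For reference, a minimal HBase shell session for this case (the table name cycle_middle_data is taken from the HDFS path in the errors above). Be aware that truncate disables, drops, and recreates the table with the same schema, discarding all remaining rows:

# Run inside the HBase shell; truncate = disable + drop + recreate
hbase shell
truncate 'cycle_middle_data'

After the truncate, the region no longer references the deleted HFile, so the Hive job runs again (against an empty table until it is repopulated).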
