Fixing a Problem Caused by Inconsistency Between Hive Metadata and the Data on HDFS
Preface
Hi everyone, this is Ming Ge!
This post is part of my "Big Data Troubleshooting" series. It walks through the diagnosis and fix of a Hive SQL job failure caused by an inconsistency between the metadata in Hive and the actual data on HDFS.
On to the details.
Symptom
The client reported the following error:
- Unable to move source xxx to destination xxx
Analysis
The client-side error message does not reveal the full picture. Logging in to the node where HiveServer2 runs and checking the HiveServer2 log, we find the following:
- 2021-09-01 11:47:46,795 INFO org.apache.hadoop.hive.ql.exec.Task: [HiveServer2-Background-Pool: Thread-1105]: Loading data to table hs_ods.ods_ses_acct_assure_scale partition (part_date=20210118) from hdfs://hs01:8020/user/hundsun/dap/hive/hs_ods/ods_ses_acct_assure_scale/part_date=20210118/.hive-staging_hive_2021-09-01_11-47-31_216_694180642957006705-35/-ext-10000
- 2021-09-01 11:47:46,795 INFO hive.metastore: [HiveServer2-Background-Pool: Thread-1105]: HMS client filtering is enabled.
- 2021-09-01 11:47:46,795 INFO hive.metastore: [HiveServer2-Background-Pool: Thread-1105]: Trying to connect to metastore with URI thrift://hs01:9083
- 2021-09-01 11:47:46,795 INFO hive.metastore: [HiveServer2-Background-Pool: Thread-1105]: Opened a connection to metastore, current connections: 54
- 2021-09-01 11:47:46,796 INFO hive.metastore: [HiveServer2-Background-Pool: Thread-1105]: Connected to metastore.
- 2021-09-01 11:47:46,928 INFO org.apache.hadoop.hive.ql.exec.MoveTask: [HiveServer2-Background-Pool: Thread-1105]: Partition is: {part_date=20210118}
- 2021-09-01 11:47:46,945 INFO org.apache.hadoop.hive.common.FileUtils: [HiveServer2-Background-Pool: Thread-1105]: Creating directory if it doesn't exist: hdfs://hs01:8020/user/hundsun/dap/hive/hs_ods/ods_ses_acct_assure_scale/part_date=20210118
- 2021-09-01 11:47:46,947 ERROR hive.ql.metadata.Hive: [HiveServer2-Background-Pool: Thread-1105]: Failed to move: {}
- 2021-09-01 11:47:46,947 ERROR org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-1105]: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. Unable to move source hdfs://hs01:8020/user/hundsun/dap/hive/hs_ods/ods_ses_acct_assure_scale/part_date=20210118/.hive-staging_hive_2021-09-01_11-47-31_216_694180642957006705-35/-ext-10000 to destination hdfs://hs01:8020/user/hundsun/dap/hive/hs_ods/ods_ses_acct_assure_scale/part_date=20210118
- 2021-09-01 11:47:46,948 INFO org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-1105]: Completed executing command(queryId=hive_20210901114731_d7a78302-fb2a-4b45-9472-db6a9787f710); Time taken: 15.489 seconds
- 2021-09-01 11:47:46,957 ERROR org.apache.hive.service.cli.operation.Operation: [HiveServer2-Background-Pool: Thread-1105]: Error running hive query:
- org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. Unable to move source hdfs://hs01:8020/user/hundsun/dap/hive/hs_ods/ods_ses_acct_assure_scale/part_date=20210118/.hive-staging_hive_2021-09-01_11-47-31_216_694180642957006705-35/-ext-10000 to destination hdfs://hs01:8020/user/hundsun/dap/hive/hs_ods/ods_ses_acct_assure_scale/part_date=20210118
- at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:329) ~[hive-service-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:258) ~[hive-service-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hive.service.cli.operation.SQLOperation.access$600(SQLOperation.java:92) ~[hive-service-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:345) [hive-service-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_271]
- at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_271]
- at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) [hadoop-common-3.0.0-cdh6.3.2.jar:?]
- at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:357) [hive-service-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_271]
- at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_271]
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_271]
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_271]
- at java.lang.Thread.run(Thread.java:748) [?:1.8.0_271]
- Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move source hdfs://hs01:8020/user/hundsun/dap/hive/hs_ods/ods_ses_acct_assure_scale/part_date=20210118/.hive-staging_hive_2021-09-01_11-47-31_216_694180642957006705-35/-ext-10000 to destination hdfs://hs01:8020/user/hundsun/dap/hive/hs_ods/ods_ses_acct_assure_scale/part_date=20210118
- at org.apache.hadoop.hive.ql.metadata.Hive.getHiveException(Hive.java:3449) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.metadata.Hive.getHiveException(Hive.java:3405) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:3400) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:3697) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.metadata.Hive.loadPartitionInternal(Hive.java:1614) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1525) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1489) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:501) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2200) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1843) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1563) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1339) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1334) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:256) ~[hive-service-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- ... 11 more
- Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move source hdfs://hs01:8020/user/hundsun/dap/hive/hs_ods/ods_ses_acct_assure_scale/part_date=20210118/.hive-staging_hive_2021-09-01_11-47-31_216_694180642957006705-35/-ext-10000/000000_0 to destination hdfs://hs01:8020/user/hundsun/dap/hive/hs_ods/ods_ses_acct_assure_scale/part_date=20210118/000000_0
- at org.apache.hadoop.hive.ql.metadata.Hive.handlePoolException(Hive.java:3422) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:3367) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:3697) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.metadata.Hive.loadPartitionInternal(Hive.java:1614) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1525) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1489) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:501) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2200) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1843) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1563) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1339) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1334) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:256) ~[hive-service-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- ... 11 more
- Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move source hdfs://hs01:8020/user/hundsun/dap/hive/hs_ods/ods_ses_acct_assure_scale/part_date=20210118/.hive-staging_hive_2021-09-01_11-47-31_216_694180642957006705-35/-ext-10000/000000_0 to destination hdfs://hs01:8020/user/hundsun/dap/hive/hs_ods/ods_ses_acct_assure_scale/part_date=20210118/000000_0
- at org.apache.hadoop.hive.ql.metadata.Hive.getHiveException(Hive.java:3449) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.metadata.Hive.getHiveException(Hive.java:3405) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.metadata.Hive.access$200(Hive.java:175) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.metadata.Hive$3.call(Hive.java:3350) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.metadata.Hive$3.call(Hive.java:3335) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- ... 4 more
- Caused by: java.io.IOException: rename for src path: hdfs://hs01:8020/user/hundsun/dap/hive/hs_ods/ods_ses_acct_assure_scale/part_date=20210118/.hive-staging_hive_2021-09-01_11-47-31_216_694180642957006705-35/-ext-10000/000000_0 to dest path:hdfs://hs01:8020/user/hundsun/dap/hive/hs_ods/ods_ses_acct_assure_scale/part_date=20210118/000000_0 returned false
- at org.apache.hadoop.hive.ql.metadata.Hive$3.call(Hive.java:3347) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- at org.apache.hadoop.hive.ql.metadata.Hive$3.call(Hive.java:3335) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
- ... 4 more
- 2021-09-01 11:47:46,962 INFO org.apache.hadoop.hive.conf.HiveConf: [HiveServer2-Handler-Pool: Thread-474]: Using the default value passed in for log id: d9b4646b-3c22-4589-8912-a68b92efcca7
- 2021-09-01 11:47:46,962 INFO org.apache.hadoop.hive.ql.session.SessionState: [HiveServer2-Handler-Pool: Thread-474]: Updating thread name to d9b4646b-3c22-4589-8912-a68b92efcca7 HiveServer2-Handler-Pool: Thread-474
- 2021-09-01 11:47:46,962 INFO org.apache.hive.service.cli.operation.OperationManager: [d9b4646b-3c22-4589-8912-a68b92efcca7 HiveServer2-Handler-Pool: Thread-474]: Closing operation: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=29e01dfe-8b9b-427a-a610-1d58c526faca]
- 2021-09-01 11:47:46,973 INFO org.apache.hadoop.hive.conf.HiveConf: [d9b4646b-3c22-4589-8912-a68b92efcca7 HiveServer2-Handler-Pool: Thread-474]: Using the default value passed in for log id: d9b4646b-3c22-4589-8912-a68b92efcca7
- 2021-09-01 11:47:46,973 INFO org.apache.hadoop.hive.ql.session.SessionState: [d9b4646b-3c22-4589-8912-a68b92efcca7 HiveServer2-Handler-Pool: Thread-474]: Resetting thread name to HiveServer2-Handler-Pool: Thread-474
- 2021-09-01 11:47:47,025 INFO org.apache.hive.service.cli.thrift.ThriftCLIService: [HiveServer2-Handler-Pool: Thread-474]: Session disconnected without closing properly.
- 2021-09-01 11:47:47,025 INFO org.apache.hive.service.cli.thrift.ThriftCLIService: [HiveServer2-Handler-Pool: Thread-474]: Closing the session: SessionHandle [d9b4646b-3c22-4589-8912-a68b92efcca7]
- 2021-09-01 11:47:47,025 INFO org.apache.hive.service.CompositeService: [HiveServer2-Handler-Pool: Thread-474]: Session closed, SessionHandle [d9b4646b-3c22-4589-8912-a68b92efcca7], current sessions:0
From the log above we can see that the Hive SQL job ran on the Spark execution engine and the Spark job itself completed successfully, but the job failed while HiveServer2 was doing the final housekeeping of moving the temporary files into the target directory. The lowest-level error is: rename for src path xxx to dest path xxx returned false.
Following that trail, we next looked at HDFS. An hdfs dfs -ls on the destination showed that the target file already existed there, and its timestamp showed it had been created several days earlier, i.e. it had nothing to do with the current SQL job:
(screenshot: hdfs dfs -ls output for the destination path)
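For reference, that check boils down to something like the sketch below; the path is copied from the HiveServer2 log above, and the exact output will of course differ per environment.

```bash
# List the destination partition directory that MoveTask tried to rename into
# (path taken from the HiveServer2 log above).
hdfs dfs -ls hdfs://hs01:8020/user/hundsun/dap/hive/hs_ods/ods_ses_acct_assure_scale/part_date=20210118/

# A pre-existing 000000_0 whose modification time is several days old confirms
# that the file was not written by the current job.
```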
Root Cause
With the analysis above, the direct cause is clear: the Spark job underlying the Hive SQL statement had completed successfully and the data had been computed, but when the temporary result files were moved into the final target directory, the rename could not be performed because the target directory already existed on HDFS and contained a file with the same name, so the job failed.
Looking back at our SQL, it is essentially an INSERT OVERWRITE into one partition of a partitioned table. In principle this should overwrite the data files under the target partition's directory (i.e. delete the old data files first, then create the new ones), so why was the delete never carried out?
The reason is that the table's metadata in Hive was inconsistent with the data on HDFS. show create table and show partitions revealed that the Hive metastore knew about only one partition of this partitioned table, while HDFS contained directories and files belonging to other partitions of the same table:
(screenshots: show create table and show partitions output)
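A minimal sketch of that comparison, assuming beeline can reach HiveServer2 on hs01 at the default port 10000 (the JDBC URL is only illustrative):

```bash
# Partitions the Hive metastore knows about (only one showed up in our case):
beeline -u "jdbc:hive2://hs01:10000" \
  -e "SHOW PARTITIONS hs_ods.ods_ses_acct_assure_scale;"

# Partition directories actually present on HDFS (several showed up); the
# table location can be read from the SHOW CREATE TABLE output:
hdfs dfs -ls hdfs://hs01:8020/user/hundsun/dap/hive/hs_ods/ods_ses_acct_assure_scale/

# Any part_date=... directory listed on HDFS but absent from SHOW PARTITIONS
# exists only on disk, not in the metastore.
```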
So the root cause is: the table's metadata in Hive was inconsistent with the actual data on HDFS. When the INSERT OVERWRITE ran, Hive consulted the metadata stored in the metastore, concluded that the target partition did not exist, and therefore never attempted to delete that partition's directory on HDFS; since the directory and its files did in fact exist on HDFS, the job's underlying rename operation failed.
Fix
Knowing both the direct cause and the root cause, the fix follows naturally: bring the Hive metadata back in line with the actual data on HDFS.
The table's metadata can be repaired with the command msck repair table xxx:
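A sketch of the repair plus a follow-up check, again via beeline (the JDBC URL is only illustrative):

```bash
# Register the partition directories that exist on HDFS but are missing from
# the metastore:
beeline -u "jdbc:hive2://hs01:10000" \
  -e "MSCK REPAIR TABLE hs_ods.ods_ses_acct_assure_scale;"

# Verify that the previously missing partitions are now visible:
beeline -u "jdbc:hive2://hs01:10000" \
  -e "SHOW PARTITIONS hs_ods.ods_ses_acct_assure_scale;"
```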
Once the metadata was repaired, show partitions confirmed that the previously missing partitions were visible in Hive again. Re-running the INSERT OVERWRITE statement above then succeeded.
Summary
- When the metadata in Hive is inconsistent with the actual data on HDFS, otherwise normal Hive SQL operations may fail.
- There are many ways Hive metadata can end up inconsistent with the data on HDFS. Common causes include:
  - Hive external tables: by design, dropping an external table or some of its partitions in Hive leaves the corresponding directories and files on HDFS, which produces exactly this kind of inconsistency (this is what happened in our case; see the sketch after this list);
  - The Hive metadata and the underlying HDFS data were synchronized from another cluster, but the synchronization went wrong, e.g. the two snapshots were taken at different times and their states do not match (poorly handled cross-cluster synchronization can cause this);
  - The metadata in Hive or the data on HDFS has been corrupted or lost (sloppy cluster operations and maintenance can cause this).
- The table metadata can be repaired with msck repair table xxx; the full syntax is MSCK [REPAIR] TABLE table_name [ADD/DROP/SYNC PARTITIONS];
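As an illustration of the external-table cause above, here is a small sketch; the table demo_db.demo_ext and its location are hypothetical and not part of the original incident:

```bash
# Dropping a partition of an EXTERNAL table only removes the metastore entry:
beeline -u "jdbc:hive2://hs01:10000" \
  -e "ALTER TABLE demo_db.demo_ext DROP PARTITION (part_date='20210118');"

# ...but the partition directory and its files are left behind on HDFS:
hdfs dfs -ls /user/hundsun/demo/demo_ext/part_date=20210118/

# If Hive later writes into that partition (e.g. via INSERT OVERWRITE), the
# metastore believes the partition is new, skips the cleanup of old files,
# and the leftover files can trigger the same rename failure seen above.
```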