Problem: An Un-wiped Disk Added to an ASM Disk Group Triggers Block Corruption

By 高达 | 数据和云 | March 26

Introduction: When an ASM disk group in a production system is about to run out of space, adding a disk is one of the most common ways to expand capacity; virtually every professional DBA has done it. But consider this: what happens if the disk added to the ASM disk group was not wiped beforehand? This article shares a recent customer case in which an un-wiped disk added to a disk group triggered block corruption ("Read datafile mirror" errors), as a reminder to be careful.


Problem Description

The system maintenance team reported that the Oracle database software directory had suddenly filled up and needed immediate cleanup. After logging in to the environment, we found the alert log being flooded with new entries, all reporting that corrupt blocks had been detected.

Excerpts from the alert log:

Reading datafile '+xx01/xxx85' for corruption at rdba: 0x1c4b3afc (file x3, block 474692348)
Read datafile mirror 'xxx02' (file x3, block 47xx48) found same corrupt data (no logical check)
Read datafile mirror 'xxx53' (file x3, block 47xx48) found valid data
Hex dump of (file x3, block 47xx48) in trace file /xxx130931.trc
Repaired corruption at (file x3, block 47xxx48)
Hex dump of (file x3, block 47xxx24) in trace file /xx931.trc
Corrupt block relative dba: 0x1c308c08 (file x3, block 47xx4)
Bad header found during buffer read
Data in bad block:
 type: 0 format: 6 rdba: 0x34363835
 last change scn: 0x3833.35313431 seq: 0x30 flg: 0x37
 spare1: 0x31 spare2: 0x36 spare3: 0xf00
 consistency value in tail: 0x30520300
 check value in block header: 0x36
 computed block checksum: 0x6060
Reading datafile '+xxxx8685' for corruption at rdba: 0x1c308c08 (file x3, block 47xx24)
Read datafile mirror 'xxxx2' (file x3, block 47xx24) found same corrupt data (no logical check)
Read datafile mirror 'xxx3' (file x3, block 47xx24) found valid data
Hex dump of (file x3, block 47xxx24) in trace file /xxx0931.trc
Sat Nov 09 12:48:17 2019
Hex dump of (file x3, block 14xxx7) in trace file /xxx22.trc
Corrupt block relative dba: 0x1ed647db (file x3, block 14xxx7)
Bad header found during buffer read
Data in bad block:
 type: 73 format: 6 rdba: 0x5454415f
 last change scn: 0x0e00.00440052 seq: 0x0 flg: 0x00
 spare1: 0x53 spare2: 0x54 spare3: 0x0
 consistency value in tail: 0x01006541
 check value in block header: 0xa00
 block checksum disabled
Reading datafile '+xxxx17527' for corruption at rdba: 0x1ed647db (file x3, block 14xxx7)
Read datafile mirror 'xx002' (file x3, block 14xxx7) found same corrupt data (no logical check)
Read datafile mirror 'xxx0' (file x3, block 14xxx7) found valid data
Hex dump of (file x3, block 14xx7) in trace file /xxx2.trc
Repaired corruption at (file x3, block 14xxx7)


Problem Analysis

Using the information in the alert log, we looked up the affected blocks and found that the segments involved include both tables and indexes.

select relative_fno, owner, segment_name, segment_type
  from dba_extents
 where file_id = x3
   and 35xxxx9 between block_id and block_id + blocks - 1;

RELATIVE_FNO  OWNER   SEGMENT_NAME  SEGMENT_TYPE
------------  ------  ------------  ------------
1024          IxxxL   PxxxT         INDEX
 124          IxxxM   OxxxT         TABLE


Verification with DBV:

……
Page 278199 is marked corrupt
Corrupt block relative dba: 0x21843eb7 (file x4, block 2xx9)
Bad header found during dbv:
Data in bad block:
 type: 0 format: 4 rdba: 0x0000ffff
 last change scn: 0x0000.00000000 seq: 0x0 flg: 0x1d
 spare1: 0x0 spare2: 0xa spare3: 0x0
 consistency value in tail: 0x31040000
 check value in block header: 0x1500
 computed block checksum: 0xe403

Page 278200 is marked corrupt
Corrupt block relative dba: 0x21843eb8 (file x4, block 2xx0)
Bad header found during dbv:
Data in bad block:
 type: 48 format: 0 rdba: 0x000a0018
 last change scn: 0x3031.31060000 seq: 0x30 flg: 0x30
 spare1: 0x30 spare2: 0x0 spare3: 0x19
 consistency value in tail: 0x000b0000
 check value in block header: 0x31
 block checksum disabled
……  (n more lines omitted)
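As a cross-check (a sketch, not part of the original case), RMAN can run the same block validation and record the results in a dynamic performance view; the file number 3 below is a placeholder:

```sql
-- Run from RMAN first; this populates the view queried below:
--   RMAN> validate datafile 3;

-- Then list the corrupt block ranges RMAN recorded:
select file#, block#, blocks, corruption_type
  from v$database_block_corruption;
```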

Entries in the related trace files:

Corrupt block relative dba: 0x2180ba80 (file x4, block 4xx4)
Bad header found during user buffer read
Data in bad block:
 type: 82 format: 0 rdba: 0x534e4901
 last change scn: 0x4546.464f2e54 seq: 0x52 flg: 0x5f
 spare1: 0x0 spare2: 0x0 spare3: 0x5453
 consistency value in tail: 0x0908bdf2
 check value in block header: 0x4e49
 computed block checksum: 0x66c6
Reading datafile '+xxx05' for corruption at rdba: 0x2180ba80 (file x4, block 4xx4)
ksfdrfms:Mirror Read file=+xxx905 fob=0x246076cb80 bufp=0x7f9a07619c00 blkno=47744 nbytes=8192
ksfdrfms: Read success from mirror side=1 logical extent number=0 disk=xxx2 path=/dev/axxx1
Mirror I/O done from ASM disk /dev/axxx1
Read datafile mirror 'xxx02' (file x4, block 4xx4) found same corrupt data (no logical check)
ksfdrnms:Mirror Read file=+xxx7905 fob=0x246076cb80 bufp=0x7f9a07619c00 nbytes=8192
ksfdrnms: Read success from mirror side=2 logical extent number=1 disk=xxx3 path=/dev/axxx4
Mirror I/O done from ASM disk /dev/axxx4
Read datafile mirror 'xxx3' (file x4, block 4xx4) found valid data
Hex dump of (file x4, block 4xx4)

A closer look shows that every corruption report is very similar, for example:

Read datafile mirror 'xxx2' (file x3, block 47xxx48) found same corrupt data (no logical check)

Reading further through the logs, a common pattern emerged: in almost every report, the disk named xxx2 held the same corrupt copy of a block that other disks also held. The valid copies of these blocks were always on other disks, while the invalid, corrupt copies were all on /dev/axxx1 (disk name xxx2). We therefore suspected that some operation on this particular disk was involved. Further investigation revealed that this disk had originally been a member of disk group xxx1, had for some reason been removed from the group, and had been re-added the day before the errors began. The truth then surfaced: the old data on /dev/axxx1 had never been wiped, so after the disk was re-added, its stale blocks conflicted with the new blocks, the database raised corruption errors, and the resulting flood of trace files filled the software directory.

Disk group xxx1 uses NORMAL redundancy. As a brief refresher, Oracle classifies disk group redundancy into three levels according to the number of mirror copies:

1) External redundancy: data is not mirrored by ASM. This suits systems where the underlying storage layer already mirrors the data.

2) Normal redundancy: one mirror copy (two-way mirroring). This level suits most systems.

3) High redundancy: two mirror copies (three-way mirroring). This level suits a system's most important data, though it naturally consumes more space.
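For illustration only (the disk group names and device paths below are hypothetical, not from this case), the three redundancy levels map directly onto the CREATE DISKGROUP syntax:

```sql
-- External redundancy: no ASM mirroring, one failure group is enough
create diskgroup dg_ext external redundancy
  disk '/dev/mapper/diskA';

-- Normal redundancy: one mirror copy, so at least two failure groups
create diskgroup dg_norm normal redundancy
  failgroup fg1 disk '/dev/mapper/diskB'
  failgroup fg2 disk '/dev/mapper/diskC';

-- High redundancy: two mirror copies, so at least three failure groups
create diskgroup dg_high high redundancy
  failgroup fg1 disk '/dev/mapper/diskD'
  failgroup fg2 disk '/dev/mapper/diskE'
  failgroup fg3 disk '/dev/mapper/diskF';
```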

Oracle implements data mirroring through failure groups. Since disk group xxx1 uses NORMAL redundancy, Oracle keeps one mirror copy of each extent and guarantees that an extent and its mirror are never stored in the same failure group, ensuring no data is lost even if one or more disks in a failure group, or an entire failure group, is lost. When disk /dev/axxx1 was re-added to the disk group, ASM's rebalance feature redistributed file extents evenly across all disks in the group; this work is handled by the RBAL background process. Wherever the redistributed mirror copies conflicted with the stale data on /dev/axxx1, errors were raised.
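To confirm a disk group's redundancy level and see which failure group each disk belongs to, the standard ASM dynamic views can be queried (a generic sketch; the actual output depends on the environment):

```sql
-- Redundancy level of each disk group (TYPE: EXTERN / NORMAL / HIGH)
select name, type, total_mb, free_mb
  from v$asm_diskgroup;

-- Failure group and OS path of each member disk
select group_number, name, failgroup, path, mode_status
  from v$asm_disk
 order by group_number, failgroup;
```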

Resolution

Drop the problem disk from the disk group, wipe its old data with dd, then add the disk back. This resolved the problem and restored the system.

alter diskgroup xxx1  drop disk 'Oxxxx2';
dd if=/dev/zero of=/dev/asxxx1 bs=1M count=256
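Dropping the disk triggers a rebalance across the remaining disks; only once that completes (and the device has been zeroed with dd) should the disk be added back. A sketch of the remaining steps, using the masked names from this case:

```sql
-- No rows returned means no rebalance is currently running
select group_number, operation, state, est_minutes
  from v$asm_operation;

-- Once the device has been wiped, re-add it to the disk group
alter diskgroup xxx1 add disk '/dev/asxxx1';
```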


Source: the "云和恩墨技术通讯" column on 墨天轮 (https://www.modb.pro/topic/5927).
