Google Compute instance won't mount persistent disk, maintains ~100% CPU


During routine use of the web server (saving posts via WordPress), the instance jumped to 400% CPU usage and wouldn't come back down below 100%. Restarting, and stopping/starting the instance, didn't change anything.

Looking at the last bit of the serial output:

[    0.678602] md: Waiting for all devices to be available before autodetect
[    0.679518] md: If you don't use raid, use raid=noautodetect
[    0.680548] md: Autodetecting RAID arrays.
[    0.681284] md: Scanned 0 and added 0 devices.
[    0.682173] md: autorun ...
[    0.682765] md: ... autorun DONE.
[    0.683716] VFS: Cannot open root device "sda1" or unknown-block(0,0): error -6
[    0.685298] Please append a correct "root=" boot option; here are the available partitions:
[    0.686676] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
[    0.688489] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.19.0-30-generic #34~14.04.1-Ubuntu
[    0.689287] Hardware name: Google Google, BIOS Google 01/01/2011
[    0.689287]  ffffea00008ae400 ffff880024ee7db8 ffffffff817af477 000000000000111e
[    0.689287]  ffffffff81a7c6c0 ffff880024ee7e38 ffffffff817a9338 ffff880024ee7dd8
[    0.689287]  ffffffff00000010 ffff880024ee7e48 ffff880024ee7de8 ffff880024ee7e38
[    0.689287] Call Trace:
[    0.689287]  [<ffffffff817af477>] dump_stack+0x45/0x57
[    0.689287]  [<ffffffff817a9338>] panic+0xc1/0x1f5
[    0.689287]  [<ffffffff81d3e5f3>] mount_block_root+0x210/0x2a9
[    0.689287]  [<ffffffff81d3e822>] mount_root+0x54/0x58
[    0.689287]  [<ffffffff81d3e993>] prepare_namespace+0x16d/0x1a6
[    0.689287]  [<ffffffff81d3e304>] kernel_init_freeable+0x1f6/0x20b
[    0.689287]  [<ffffffff81d3d9a7>] ? initcall_blacklist+0xc0/0xc0
[    0.689287]  [<ffffffff8179fab0>] ? rest_init+0x80/0x80
[    0.689287]  [<ffffffff8179fabe>] kernel_init+0xe/0xf0
[    0.689287]  [<ffffffff817b6d98>] ret_from_fork+0x58/0x90
[    0.689287]  [<ffffffff8179fab0>] ? rest_init+0x80/0x80
[    0.689287] Kernel Offset: 0x0 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
[    0.689287] ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)

(Not sure if it's obvious, but I'm using the standard Ubuntu 14.04 image.)

I've tried taking snapshots and mounting them on new instances, and I've deleted the instance and mounted the disk on a new one, but I still get the same issue and the same serial output.
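For anyone reproducing this, snapshots and test disks can be created with the gcloud CLI along these lines (the disk, snapshot, instance, and zone names below are placeholders, not my actual resource names):

# snapshot the broken boot disk
$ gcloud compute disks snapshot wordpress-disk --snapshot-names wordpress-disk-snap --zone us-central1-f

# create a new disk from the snapshot and attach it to a test instance
$ gcloud compute disks create wordpress-disk-copy --source-snapshot wordpress-disk-snap --zone us-central1-f
$ gcloud compute instances attach-disk test-instance --disk wordpress-disk-copy --zone us-central1-f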

I hope the data has not been hopelessly corrupted. Does anyone have suggestions on recovering the data from the persistent disk?

Note: the accepted answer for "Google Compute Engine VM instance: VFS: Unable to mount root fs on unknown-block" did not work for me.

I posted this on another question, but this question is worded better, so I'll re-post it here.

What causes this?

That is the million dollar question. After inspecting my GCE VM, I found out there were 14 different kernels installed, taking up several hundred MB of space. Some of the kernels didn't have a corresponding initrd.img file, and were therefore not bootable (including 3.19.0-39-generic).
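To check this for yourself, something along these lines (assuming a standard Ubuntu /boot layout) shows which installed kernels are missing a matching initrd.img:

# list kernel images and initrd files side by side
$ ls -lh /boot/vmlinuz-* /boot/initrd.img-*

# print kernel versions that have no matching initrd.img (and are therefore not bootable)
$ ls /boot/vmlinuz-* | sed 's|.*vmlinuz-||' | sort > /tmp/kernels
$ ls /boot/initrd.img-* | sed 's|.*initrd.img-||' | sort > /tmp/initrds
$ comm -23 /tmp/kernels /tmp/initrds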

I never went around trying to install random kernels, and once removed, they no longer appear as available upgrades, so I'm not sure what happened. Seriously, what happened?

Edit: new response from Google Cloud Support.

I received a somewhat disconcerting response. It may explain the additional, errant kernels.

"on rare occasions, vm needs migrated 1 physical host another. in such case, kernel upgrade , security patches might applied google."

How to recover your instance...

After several back-and-forth emails, I received a response from support that allowed me to resolve the issue. Be mindful that you will have to change things to match your unique VM.

  1. Take a snapshot of the disk first in case you need to roll back any of the changes below.

  2. Edit the properties of the broken instance to disable the option: "Delete boot disk when instance is deleted".

  3. Delete the broken instance.

    Important: ensure you do not select the option to delete the boot disk. Otherwise, the disk will get removed permanently!!

  4. Start a new temporary instance.

  5. Attach the broken disk (this will appear as /dev/sdb1) to the temporary instance.

  6. When the temporary instance has booted up, do the following:

In the temporary instance:

# run fsck to fix any disk corruption issues
$ sudo fsck.ext4 -a /dev/sdb1

# mount the disk from the broken VM
$ sudo mkdir /mnt/sdb
$ sudo mount /dev/sdb1 /mnt/sdb/ -t ext4

# find out the UUID of the broken disk. In my case, the UUID of sdb1 is d9cae47b-328f-482a-a202-d0ba41926661
$ ls -alt /dev/disk/by-uuid/
lrwxrwxrwx. 1 root root 10 Jan  6 07:43 d9cae47b-328f-482a-a202-d0ba41926661 -> ../../sdb1
lrwxrwxrwx. 1 root root 10 Jan  6 05:39 a8cf6ab7-92fb-42c6-b95f-d437f94aaf98 -> ../../sda1

# update the UUID in grub.cfg (if necessary)
$ sudo vim /mnt/sdb/boot/grub/grub.cfg

Note: ^^^ this is where I deviated from the support instructions.

Instead of modifying the boot entries to set root=UUID=[uuid character string], I looked for entries that set root=/dev/sda1 and deleted them. I also deleted every entry that didn't set an initrd.img file. The top boot entry with the correct parameters in my case ended up being 3.19.0-31-generic. Yours may be different.
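For illustration only (the version string and UUID below are examples, not values copied verbatim from my VM), a bootable entry in grub.cfg has both a linux line with the right root= parameter and a matching initrd line, roughly like this:

menuentry 'Ubuntu, with Linux 3.19.0-31-generic' {
        recordfail
        insmod ext2
        # ... module and search lines omitted ...
        linux   /boot/vmlinuz-3.19.0-31-generic root=UUID=d9cae47b-328f-482a-a202-d0ba41926661 ro
        initrd  /boot/initrd.img-3.19.0-31-generic
}

Entries that pointed at root=/dev/sda1, or that had no initrd line at all, were the ones I removed.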

# flush the changes to disk
$ sudo sync

# shut down the temporary instance
$ sudo shutdown -h now

Finally, detach the HDD from the temporary instance, and create a new instance based off of the fixed disk. It should boot.
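If you prefer the gcloud CLI to the web console for these last steps, something along these lines should work (instance, disk, and zone names are placeholders):

# detach the repaired disk from the temporary instance
$ gcloud compute instances detach-disk temp-instance --disk wordpress-disk --zone us-central1-f

# create a new instance that boots from the repaired disk
$ gcloud compute instances create wordpress-new --disk name=wordpress-disk,boot=yes --zone us-central1-f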

Assuming it boots, you still have a lot of work to do. If you have half as many unused kernels as me, you might want to purge the unused ones (especially those missing a corresponding initrd.img file).

I used the second answer (the terminal-based one) in this AskUbuntu question to purge the other kernels.

Note: make sure you don't purge the kernel you booted in with!
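As a rough sketch of that terminal-based approach (not the exact commands from the linked answer; double-check every package name before purging):

# the kernel you are currently running -- do NOT purge this one
$ uname -r

# list the installed kernel packages
$ dpkg -l 'linux-image-*' | grep ^ii

# purge one unused kernel version at a time (3.19.0-39-generic is just an example)
$ sudo apt-get purge linux-image-3.19.0-39-generic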

