A wrong setting in the sshd configuration file can stop the SSH server, and the server will not start again until the file is fixed and the service restarted. Making those changes, however, requires access to the machine.
By default, direct root login to the machine is disabled. We usually log in as a sudo user and use sudo privileges to become root. If the sudoers file is damaged, or a wrong entry is made in it, root access is lost.

These are some of the common situations in which users lose access, or superuser privileges, on a VM instance. Many users simply terminate and abandon the instance at that point.

These instances can be recovered, and it takes only a few simple steps.

1. Delete your instance WITHOUT deleting its boot disk (it is a good idea to take a snapshot of the disk before deleting the instance so that you have a backup to recover from).
2. Create a temporary instance and attach the boot disk in question to it as a secondary disk.
3. Connect to the temporary instance and run sudo lsblk to list the block devices and identify the attached disk (for example, /dev/sdb). A gcloud sketch of these three steps follows.
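A rough gcloud sketch of steps 1 through 3. All names here (broken-instance, broken-boot-disk, rescue-instance, and so on) are placeholders, and your default project and zone are assumed to be configured:

# Optional but recommended: snapshot the boot disk as a backup first
gcloud compute disks snapshot broken-boot-disk --snapshot-names=pre-recovery-backup
# Delete the broken instance but keep its boot disk
gcloud compute instances delete broken-instance --keep-disks=boot
# Create a temporary rescue instance
gcloud compute instances create rescue-instance
# Attach the orphaned boot disk as a secondary (non-boot) disk
gcloud compute instances attach-disk rescue-instance --disk=broken-boot-disk
# From inside the rescue instance, list block devices to find the attached disk
gcloud compute ssh rescue-instance --command='sudo lsblk'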

First of all, you mount partitions, not disks. So mount /dev/sdb won't work, but mount /dev/sdb1 will (assuming you want the first partition of sdb). To be able to access the drive with cd /name, you need to mount it at /name. To do that, first create the mount point:
sudo mkdir /name
sudo chmod 755 /name
Open /etc/fstab in the vi editor:
sudo vi /etc/fstab
Then add the line below (substituting the partition's actual filesystem type if it is not ext3):
/dev/sdb1 /name ext3 defaults 0 1
Save the file and run
sudo mount /name
which mounts the partition at the /name directory.
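Editing /etc/fstab is optional on a throwaway rescue instance; a one-off mount works just as well. As a small sketch, where /dev/sdb1 is an assumed device name you should confirm first:

# Check the partition's filesystem type (device name is an example)
sudo blkid /dev/sdb1
# Mount it directly for this session only, with no fstab entry
sudo mount /dev/sdb1 /name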
4. Run cd /name and you will see the directories and files of the attached disk.

5. Browse to the corrupted file, which now lives under the mount point (for example, /name/etc/ssh/sshd_config or /name/etc/sudoers), repair or restore it, save it, and shut down the instance.
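Before shutting down, it is worth checking that the repaired files parse cleanly; the /name/... paths below assume the mount point used above:

# Syntax-check the repaired sshd configuration
sudo sshd -t -f /name/etc/ssh/sshd_config
# Syntax-check the repaired sudoers file
sudo visudo -c -f /name/etc/sudoers
# Unmount the disk cleanly before shutting the instance down
sudo umount /name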

6. Detach the repaired disk, then delete the temporary instance along with its own boot disk.
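In gcloud this step might look like the following, again with placeholder names:

# Detach the repaired disk from the temporary rescue instance
gcloud compute instances detach-disk rescue-instance --disk=broken-boot-disk
# Delete the temporary instance (its own boot disk is removed with it by default)
gcloud compute instances delete rescue-instance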

7. From the detached disk, create a snapshot and launch a VM instance of the same configuration.
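One possible gcloud sequence for this step; the names are placeholders, and machine-type and zone flags are omitted, so your defaults are assumed:

# Snapshot the repaired disk
gcloud compute disks snapshot broken-boot-disk --snapshot-names=repaired-boot-snap
# Create a fresh boot disk from that snapshot
gcloud compute disks create recovered-boot-disk --source-snapshot=repaired-boot-snap
# Launch a new instance that boots from the recovered disk
gcloud compute instances create recovered-instance --disk=name=recovered-boot-disk,boot=yes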

8. Connect to the instance; it should run just fine now.
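For example, using the placeholder name from above:

gcloud compute ssh recovered-instance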