ESXTOP HANDY TOOL

Metrics and Thresholds

Display | Metric | Threshold | Explanation
CPU | %RDY | 10 | Overprovisioning of vCPUs, excessive usage of vSMP, or a limit has been set (check %MLMTD). See Jason's explanation for vSMP VMs.
CPU | %CSTP | 3 | Excessive usage of vSMP. Decrease the number of vCPUs for this particular VM; this should lead to increased scheduling opportunities.
CPU | %SYS | 20 | The percentage of time spent by system services on behalf of the world. Most likely caused by a high-IO VM. Check other metrics and the VM for the possible root cause.
CPU | %MLMTD | 0 | The percentage of time the vCPU was ready to run but was deliberately not scheduled because scheduling it would violate the "CPU limit" setting. If larger than 0, the world is being throttled by the CPU limit.
CPU | %SWPWT | 5 | The VM is waiting on swapped pages to be read from disk. Possible cause: memory overcommitment.
MEM | MCTLSZ | 1 | If larger than 0, the host is forcing VMs to inflate the balloon driver to reclaim memory because the host is overcommitted.
MEM | SWCUR | 1 | If larger than 0, the host has swapped memory pages in the past. Possible cause: overcommitment.
MEM | SWR/s | 1 | If larger than 0, the host is actively reading from swap (.vswp). Possible cause: excessive memory overcommitment.
MEM | SWW/s | 1 | If larger than 0, the host is actively writing to swap (.vswp). Possible cause: excessive memory overcommitment.
MEM | CACHEUSD | 0 | If larger than 0, the host has compressed memory. Possible cause: memory overcommitment.
MEM | ZIP/s | 0 | If larger than 0, the host is actively compressing memory. Possible cause: memory overcommitment.
MEM | UNZIP/s | 0 | If larger than 0, the host is accessing compressed memory. Possible cause: the host was previously overcommitted on memory.
MEM | N%L | 80 | If less than 80, the VM experiences poor NUMA locality. If a VM's memory size is greater than the amount of memory local to each processor, the ESX scheduler does not attempt NUMA optimizations for that VM and uses memory "remotely" via the interconnect. Check GST_ND(X) to find out which NUMA nodes are used.
NETWORK | %DRPTX | 1 | Dropped transmit packets; the hardware is overworked. Possible cause: very high network utilization.
NETWORK | %DRPRX | 1 | Dropped receive packets; the hardware is overworked. Possible cause: very high network utilization.
DISK | GAVG | 25 | Guest latency. Look at DAVG and KAVG, as GAVG is the sum of both.
DISK | DAVG | 25 | Disk latency most likely caused by the array.
DISK | KAVG | 2 | Disk latency caused by the VMkernel; high KAVG usually means queuing. Check QUED.
DISK | QUED | 1 | Queue maxed out. Possibly the queue depth is set too low. Check with the array vendor for the optimal queue depth value.
DISK | ABRTS/s | 1 | Aborts issued by the guest (VM) because storage is not responding. For Windows VMs this happens after 60 seconds by default. Can be caused, for instance, by failed paths or an array that is not accepting IO.
DISK | RESETS/s | 1 | The number of commands reset per second.
DISK | CONS/s | 20 | SCSI reservation conflicts per second. If many SCSI reservation conflicts occur, performance can be degraded due to the lock on the VMFS.

Conversion of a Dynamic Disk to a Basic Disk Using the VMware Converter Tool

1. Power off the source virtual machine and open the VMware Converter tool.

2. Select the "Powered off" radio button for the source type and choose "VMware Infrastructure Virtual Machine" in the drop-down list.

3. Enter the source vCenter/host name to connect to it.

4. Select the location of the server in the left pane and the source server name in the right pane, then click Next.

5. Select the destination vCenter/host where the new VM needs to be presented and click Next.

6. In the destination virtual machine wizard, enter a new machine name (it should be different from the original one) and click Next.

7. Change the hardware version if you want and click Next.

8. Click Edit in the "Data to Copy" section.

9. Choose "Select Volumes to Copy" in the "Data Copy Type" drop-down list.

10. You can convert your virtual disk from thick to thin here; by default the thick type is selected. Click Next.

11. Review the wizard and finish it.

12. Once the conversion completes, power on the new VM.

13. You will see that the new VM has all basic disks.

 

Cannot add VMFS datastore to ESXi host

ISSUE

"Keep existing signature" and "Assign a new signature" are grayed out, and the only available option is "Format the disk", when adding a VMFS datastore to an ESXi host in a cluster.

CAUSE

This issue is related to a volume that is mounted non-persistently across hosts in the cluster.

SOLUTION

First, find the volumes that are mounted non-persistently by running the command below on the ESXi host.

#esxcfg-volume -l

This lists the volumes and their names.

To persistently mount a volume, use "esxcfg-volume -M" followed by either the UUID or the name of the volume:

#esxcfg-volume -M TEMPDATASTORE

After running the above commands, rescan storage on the hosts and you will see the datastore listed.

Find VM Snapshots and the Space They Are Consuming, with Creation Date

Snapshots are one of the best features of virtualizing a server. Snapshots have saved my bacon countless times. It’s easy to get used to simply right-clicking on a VM, creating a snapshot and forgetting about it.
However, this can cause a problem by consuming lots of storage if not kept in check.
When a snapshot is created, it does a couple of things: it creates a delta (differencing) disk that begins to accumulate all changes since the snapshot time and, if the snapshot is taken while the VM is running, it commits the memory of that VM to disk.
Let's find out how we can use PowerShell to build a report showing just how much space each snapshot is consuming.
Luckily, finding out how much space a snapshot is consuming is pretty easy. It's just a property that's returned whenever you run Get-Snapshot. All you need to do is run the commands below in PowerCLI to enumerate all of the VMs, pass them to Get-Snapshot and, in this case, select the VM name, the snapshot size in GB (rounded to two decimal places), and the snapshot creation date.

Connect-VIServer "vCenter Server Name" -User "User Name" -Password "password"
Get-VM | Get-Snapshot | Select VM,@{Name='SizeGB';Expression={[math]::Round($_.SizeGB,2)}},Created | Export-Csv d:\temp\list.csv
Disconnect-VIServer -Confirm:$false

Exit

Backup & Restore of ESXi Configuration

Using the vSphere CLI

To back up the configuration data for an ESXi host using the vSphere CLI, run this command:

vicfg-cfgbackup --server=ESXi_host_IP_address --username=root -s output_file_name


If you are using vSphere CLI for Windows, run this command:

vicfg-cfgbackup.pl --server=ESXi_host_IP_address --username=root -s output_file_name

Where ESXi_host_IP_address is the IP address of the ESXi host and output_file_name is the name of the backup file you create.

Note: From vSphere CLI for Windows, ensure you are executing the command from C:\Program Files\VMware\VMware vSphere CLI\bin

For example:
vSphere CLI:

vicfg-cfgbackup --server=10.0.0.1 --username=root -s ESXi_test1_backup.tgz

vSphere CLI for Windows:

vicfg-cfgbackup.pl --server=10.0.0.1 --username=root -s ESXi_test1_backup.tgz

Note: Use the --password=root_password option (where root_password is the root password for the host) to avoid being prompted for the root user password when you run the script.
A backup text file is saved in the current working directory where you run the vicfg-cfgbackup script. You can also specify a full output path for the file.

Using the vSphere PowerCLI

To back up the configuration data for an ESXi host using the vSphere PowerCLI, run this command:

Get-VMHostFirmware -VMHost ESXi_host_IP_address -BackupConfiguration -DestinationPath output_directory

Where ESXi_host_IP_address is the IP address of the ESXi host and output_directory is the name of the directory where the output file will be created.

For example:

Get-VMHostFirmware -VMHost 10.0.0.1 -BackupConfiguration -DestinationPath C:\Downloads

Note: A backup file is saved in the directory specified with the -DestinationPath option.

Using the ESXi Command Line

To synchronize configuration changes with persistent storage, run this command:

vim-cmd hostsvc/firmware/sync_config

To backup the configuration data for an ESXi host, run this command:
vim-cmd hostsvc/firmware/backup_config
Note: The command outputs a URL from which a web browser can download the file. The backup file is located in the /scratch/downloads directory as configBundle-HostFQDN.tgz.
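The bundle may not remain in /scratch/downloads indefinitely, so it is worth copying it somewhere persistent right away. A minimal sketch, assuming a datastore named datastore1 (a hypothetical name; substitute your own):

#find /scratch/downloads -name 'configBundle-*.tgz' -exec cp {} /vmfs/volumes/datastore1/ \;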

Restoring ESXi host configuration data

Using the vSphere CLI

Note: When restoring configuration data, the build number of the host must match the build number of the host that created the backup file. Use the -f option (force) to override this requirement.

To restore the configuration data for an ESXi host using the vSphere CLI:

  1. Power off all virtual machines that are running on the host that you want to restore.
  2. Log in to a server where the vCLI is installed.
  3. Run the vicfg-cfgbackup script with the -l flag to load the host configuration from the specified backup file:

    vSphere CLI:

    vicfg-cfgbackup --server=ESXi_host_IP_address --username=root -l backup_file

    vSphere CLI for Windows:

    vicfg-cfgbackup.pl --server=ESXi_host_IP_address --username=root -l backup_file

    Where ESXi_host_IP_address is the IP address of the ESXi host and backup_file is the name of the backup file to use for the restore.

    For example:

    vicfg-cfgbackup --server=10.0.0.1 --username=root -l ESXi_test1_backup.tgz

    vSphere CLI for Windows:

    vicfg-cfgbackup.pl --server=10.0.0.1 --username=root -l ESXi_test1_backup.tgz

    Notes:

    • When you run this command, you are prompted for confirmation before proceeding. You can override this safety feature using the -q option.
    • Use the --password=root_password option (where root_password is the root password for the host) to avoid being prompted for the root user password when you run the script.
To restore an ESXi host to the stock configuration settings, run the command:

vicfg-cfgbackup --server=ESXi_host_IP_address --username=root -r

For example:

vicfg-cfgbackup --server=10.0.0.1 --username=root -r

Using the vSphere PowerCLI

Note: When restoring configuration data, the build number of the host must match the build number of the host that created the backup file. Use the -force option to override this requirement.

  1. Put the host into maintenance mode by running the command:

    Set-VMHost -VMHost ESXi_host_IP_address -State 'Maintenance'

    Where ESXi_host_IP_address is the IP address of the ESXi host.

  2. Restore the configuration from the backup bundle by running the command:

    Set-VMHostFirmware -VMHost ESXi_host_IP_address -Restore -SourcePath backup_file -HostUser username -HostPassword password

    Where ESXi_host_IP_address is the IP address of the ESXi host, backup_file is the name of the backup bundle to use for the restore, and username and password are the credentials to use when authenticating with the host.

    For example:

    Set-VMHostFirmware -VMHost 10.0.0.1 -Restore -SourcePath c:\bundleToRestore.tgz -HostUser root -HostPassword exampleRootPassword

Using the ESXi Command Line:

Note: When restoring configuration data, the build number of the host must match the build number of the host that created the backup file.

  1. Put the host into maintenance mode by running the command:

    vim-cmd hostsvc/maintenance_mode_enter

  2. Copy the backup configuration file to a location accessible by the host and run the command:

    In this case, the configuration file was copied to the host’s /tmp directory. For more information, see Using SCP to copy files to or from an ESX host (1918).

vim-cmd hostsvc/firmware/restore_config /tmp/configBundle.tgz
 
Note: Executing this command will initiate an automatic reboot of the host after command completion.

Some Useful Storage Commands

Command to view all LUNs presented to a host

#esxcfg-scsidevs -c

And to check a specific LUN:

#esxcfg-scsidevs -c | grep naa.id

To find the unique identifier of a LUN, you may run this command:

# esxcfg-scsidevs -m

To find the associated datastore using a LUN id:

#esxcfg-scsidevs -m | grep naa.id

To get a list of RDM disks, you may run the following command:

#find /vmfs/volumes/ -type f -name '*.vmdk' -size -1024k -exec grep -l '^createType=.*RawDeviceMap' {} \; > /tmp/rdmsluns.txt

This command saves the list of all RDM pointer disks to the text file /tmp/rdmsluns.txt.

Now run the following command to find the associated LUNs:
#for i in `cat /tmp/rdmsluns.txt`; do vmkfstools -q $i; done
This command gives you the vml.id of the RDM LUNs.

Now use the following command to map a vml.id to its naa.id:

#ls -luth /dev/disks/ | grep vml.id

In the output of this command you will get the LUN id/naa.id.

To mark an RDM device as perennially reserved:

#esxcli storage core device setconfig -d naa.id --perennially-reserved=true

You may create a script to mark all RDMs as perennially reserved in one go, as in the sketch below.
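A minimal sketch of such a script, assuming /tmp/rdmsluns.txt was produced by the find command shown earlier and that the host's BusyBox grep supports -o; treat it as a starting point rather than a hardened implementation:

for vmdk in `cat /tmp/rdmsluns.txt`; do
  # Extract the vml.id this RDM pointer maps to
  vml=`vmkfstools -q "$vmdk" | grep -o 'vml\.[0-9a-f]*' | head -1`
  [ -z "$vml" ] && continue
  # Resolve the vml.id to its naa.id via the /dev/disks symlinks
  naa=`ls -l /dev/disks/ | grep "$vml" | grep -o 'naa\.[0-9a-f]*' | head -1`
  [ -z "$naa" ] && continue
  # Mark the device as perennially reserved
  esxcli storage core device setconfig -d "$naa" --perennially-reserved=true
done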

Confirm that the correct devices are marked as perennially reserved by running this command on the host:

#esxcli storage core device list |less

To verify a specific LUN/device, run this command:

#esxcli storage core device list -d naa.id

The configuration is permanently stored with the ESXi host and persists across restarts.

To remove the perennially reserved flag, run this command:

#esxcli storage core device setconfig -d naa.id --perennially-reserved=false

To obtain LUN multipathing information from the ESXi host command line:

To get detailed information regarding the paths.

#esxcli storage core path list

Or, to list detailed information about the corresponding paths for a specific device:

#esxcli storage core path list -d naa.ID

To determine whether the device is managed by VMware's Native Multipathing Plugin (NMP) or by a third-party plugin:

#esxcli storage nmp device list -d naa.id
This command not only confirms that the device is managed by NMP, but will also display the Storage Array Type Plugin (SATP) for path failover and the Path Selection Policy (PSP) for load balancing.

To list LUN multipathing information,

#esxcli storage nmp device list

To list the available SATPs and their default path selection policies:

#esxcli storage nmp satp list

To change the multipathing policy:
#esxcli storage nmp device set --device naa_id --psp path_policy

(VMW_PSP_MRU or VMW_PSP_FIXED or VMW_PSP_RR)
Note: These pathing policies apply to VMware’s Native Multipathing (NMP) Path Selection Plug-ins (PSP). Third-party PSPs have their own restrictions

To generate a list of all LUN paths currently connected to the ESXi host.

#esxcli storage core path list

For detailed path information on a specific device:

#esxcli storage core path list -d naa.id

To generate a list of extents for each volume and the mapping from device name to UUID:

#esxcli storage vmfs extent list

Or, to generate a compact list of the LUNs currently connected to the ESXi host, including the VMFS version:

#esxcli storage filesystem list

To list the possible targets for certain storage operations,

#ls -alh /vmfs/devices/disks

To rescan all HBA adapters:
#esxcli storage core adapter rescan --all

To rescan a specific HBA:
#esxcli storage core adapter rescan --adapter <vmkernel SCSI adapter name>
Where <vmkernel SCSI adapter name> is the vmhba# to be rescanned.
Note: A rescan may not produce any output if there are no changes.

To get a list of all HBA adapters:
#esxcli storage core adapter list

To search for new VMFS datastores, run this command:
#vmkfstools -V

To check which VAAI primitives are supported:

#esxcli storage core device vaai status get -d naa.id

The esxcli storage san namespace has some very useful commands. In the case of Fibre Channel, you can get information about which adapters are used for FC, and display the WWNN (node name) and WWPN (port name) information, speed, and port state:

#esxcli storage san fc list

To display FC event information:

# esxcli storage san fc events get

VML ID

For example: vml.02000b0000600508b4000f57fa0000400002270000485356333630
Breaking apart the VML ID for a closer understanding: the first 4 digits are VMware-specific, and the next 2 digits are the LUN identifier in hexadecimal.

In the preceding example, the LUN is mapped to LUN ID 11 (hex 0b).
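If you want to do the hex-to-decimal conversion from the shell, a trivial one-liner (0x0b is the hex LUN identifier from the example above) prints 11:

#printf '%d\n' 0x0b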

NAA id

NAA stands for Network Addressing Authority identifier. EUI stands for Extended Unique Identifier. The number is guaranteed to be unique to that LUN.

The NAA or EUI identifier is the preferred method of identifying LUNs and the number is generated by the storage device. Since the NAA or EUI is unique to the LUN, if the LUN is presented the same way across all ESXi hosts, the NAA or EUI identifier remains the same.

Path Identifier: vmhba<Adapter>:C<Channel>:T<Target>:L<LUN>

This identifier is now used exclusively to identify a path to the LUN. When ESXi detects the paths associated with a LUN, each path is assigned this path identifier. The LUN also inherits the same name as the first path, but it is then used as a runtime name and is not used as readily as the identifiers mentioned above, because it may differ depending on the host you are using. This identifier is generally used for operations with utilities such as vmkfstools.
Example: vmhba1:C0:T0:L0 = Adapter 1, Channel 0, Target 0, and LUN 0.

To determine the firmware for a QLogic HBA on an ESXi/ESX 5.1 host

(QLogic)

To determine the firmware for a QLogic fibre adapter, run these commands on the ESXi/ESX host:

Go to /proc/scsi/qla####.

Where #### is the model of the QLogic HBA.

Run the ls command to see all of the adapters in the directory.

The output appears similar to:

1 2 HbaApiNode

Run the command:

head -2 #

Where # is the HBA number.

You see output similar to:

QLogic PCI to Fibre Channel Host Adapter for QLA2340 :

Firmware version: 3.03.19, Driver version 7.07.04.2vmw

To determine the firmware for a QLogic iSCSI hardware initiator on an ESXi/ESX host:

Go to /proc/scsi/qla####.

Where #### is the model of the QLogic HBA.

Run the ls command to see all of the adapters in the directory.

You see output similar to:

1 2 HbaApiNode

Run the command:

head -4 #

Where # is the HBA number.

You see output similar to:

QLogic iSCSI Adapter for ISP 4022:

Driver version 3.24

Code starts at address = 0x82a314

Firmware version 2.00.00.45

(Emulex)

To determine the firmware for an Emulex HBA on an ESXi/ESX 5.1 host

Go to /proc/scsi/lpfc.

Note: lpfc may have the model number appended. For example, /proc/scsi/lpfc820

Run the ls command to see all of the adapters in the directory.

You see output similar to:

1 2

Run the command:

head -5 #

where # is the HBA number.

You see output similar to:

Emulex LightPulse FC SCSI 7.3.2_vmw2

Emulex LP10000DC 2Gb 2-port PCI-X Fibre Channel Adapter on PCI bus 42 device 08 irq 42

SerialNum: BG51909398

Firmware Version: 1.91A5 (T2D1.91A5)


To determine the firmware for an Emulex HBA on an ESXi/ESX 5.5 host

In ESXi 5.5, you do not see native drivers in the /proc nodes. To view native driver details, run the command:

/usr/lib/vmware/vmkmgmt_keyval/vmkmgmt_keyval -a

To Get Hardware Details

To dump the host's hardware information:

# esxcfg-info | less -I

Identify the SCSI shared storage devices with the following command:

For ESX/ESXi 4.x, ESXi 5.x and 6.0, run the command:

# esxcfg-scsidevs -l | egrep -i 'display name|vendor'

Run this command to find additional peripherals and devices:

# lspci -vvv

Installation of a VIB

#esxcli software vib install -d /vmfs/volumes/datastore_name/driver_file_name.zip

Removing a VIB

#esxcli software vib remove -n vib_name (add -f to force removal)

ESX Monitoring Steps

Configure SNMP Communities

esxcli system snmp set --communities public

Configure the SNMP Agent to Send SNMP v1 or v2c Traps

If the SNMP agent is not enabled, enable it by typing

esxcli system snmp set --enable true

esxcli system snmp set --targets target.example.com@162/public

Send a test trap to verify that the agent is configured correctly by typing

esxcli system snmp test

The agent sends a warmStart trap to the configured target.
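To review the resulting SNMP agent configuration at any time, you can also run (a quick sanity check):

esxcli system snmp get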

Creating ESX Logs from Command Line

vm-support

This creates /var/tmp/esx-(Hostname)-(date-time).tgz. Copy it to a datastore to preserve it, for example:

cp  /var/tmp/esx-Z2T3GBGLPLM26-2014-12-17–11.24.tgz /vmfs/volumes/glplx94_vmdata_iso_01/ESX_Logs

Rename the CTK.VMDK

Go to the datastore.

Go to the machine folder.

Rename the file: mv xyz-ctk.vmdk xyz-ctk_old.vmdk

Then power on the machine.

Install VMware tools without Reboot

setup.exe /s /v "/qn ADDLOCAL=ALL REBOOT=ReallySuppress"

To Read a File in ESX

vi <filename>

  • Press Esc, then type /<search keyword> to search
  • Press n to jump to the next instance of the search
  • To exit the file, press Esc and type :q!

cat <filename> | grep -i <keyword>

cat <filename> | grep -e <keyword> -e <keyword>

less <filename>

Shift + G (To go to End)

To read the last 100 lines of a file:

tail -n 100 <filename>

To get VM Snapshot Details

get-vm | get-snapshot | format-list vm,name,SizeMB,Created,IsCurrent | out-file c:\a.txt

To Get Array Details from ESXi 5.1

esxcli hpssacli cmd -q "controller all show status"

To Get VM Created Date

Get-VIEvent -MaxSamples 10000 -Start (Get-Date).AddDays(-60) | Where {$_.GetType().Name -eq "VmCreatedEvent" -or $_.GetType().Name -eq "VmBeingClonedEvent" -or $_.GetType().Name -eq "VmBeingDeployedEvent"} | Sort CreatedTime -Descending | Select CreatedTime, UserName, FullFormattedMessage | Format-Table -AutoSize

Find AMS Version

 #esxcli software vib list | grep ams

Configure a SATP Claim Rule to Change the Path Selection Policy According to the Storage Vendor

1. First, find out what path selection policy a specific LUN is currently using.

#esxcli storage nmp device list -d naa.60060e80132892005020289200001001

Look in the result for the Storage Array Type and Path Selection Policy.

2. The next step is to find out which storage array vendor and model this LUN is coming from, because we need this information to create a new SATP claim rule.

#esxcli storage core device list -d naa.60060e80132892005020289200001001

Look in the result for the Vendor Name and Model Name.

3. Now check the current SATP rules configured on the ESXi host.

#esxcli storage nmp satp rule list

RESULT:

Name | Vendor | Model | Rule Group | Claim Options | Default PSP
VMW_SATP_DEFAULT_AA | HITACHI | | system | inq_data[128]={0x44 0x46 0x30 0x30} | VMW_PSP_RR
VMW_SATP_DEFAULT_AA | HITACHI | | system | |

The first line tells ESXi that if it finds storage from vendor HITACHI with the specific claim options "inq_data[128]={0x44 0x46 0x30 0x30}" (which I don't fully understand), the VMW_PSP_RR policy should be used.
The second line says to apply the system default PSP configured for VMW_SATP_DEFAULT_AA to all other HITACHI arrays.

4. Let's check what default PSP is configured for VMW_SATP_DEFAULT_AA.

#esxcli storage nmp satp list

RESULT:

Name | Default PSP | Description
VMW_SATP_DEFAULT_AA | VMW_PSP_FIXED | Supports non-specific active/active arrays

So the result above states that this default SATP rule applies the PSP "VMW_PSP_FIXED".

5. Now we tell ESXi to use VMW_SATP_DEFAULT_AA with a PSP of VMW_PSP_RR when the vendor and model match our specification:

#esxcli storage nmp satp rule add -V HITACHI -M "OPEN-V" -P VMW_PSP_RR -s VMW_SATP_DEFAULT_AA
#esxcli storage core claimrule load

6. To check how this worked out, check the satp rule list again:

#esxcli storage nmp satp rule list

Name | Vendor | Model | Rule Group | Claim Options | Default PSP
VMW_SATP_DEFAULT_AA | HITACHI | | system | inq_data[128]={0x44 0x46 0x30 0x30} | VMW_PSP_RR
VMW_SATP_DEFAULT_AA | HITACHI | OPEN-V | user | | VMW_PSP_RR
VMW_SATP_DEFAULT_AA | HITACHI | | system | |

7. Wait five minutes for the claim rule to be picked up automatically, or reboot the host to force detection.

8. To check if this changed the way the policy was applied to the LUNs, run the command below.

#esxcli storage nmp device list -d naa.60060e80132892005020289200001001

Look in the result for the changes we wanted:

Storage Array Type: VMW_SATP_DEFAULT_AA
Path Selection Policy: VMW_PSP_RR
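If you later need to undo the custom claim rule, the matching remove subcommand takes the same arguments as the add shown in step 5 (a sketch; verify the options against your esxcli version):

#esxcli storage nmp satp rule remove -V HITACHI -M "OPEN-V" -P VMW_PSP_RR -s VMW_SATP_DEFAULT_AA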

SLOT SIZE CALCULATION

First we will get the total number of slots in the cluster.

Total No of CPU Slots = Total available CPU Resources in a Cluster / CPU Slot Size

(Total available CPU Resource = Total CPU Resource – CPU used)

 

Total No of Memory Slots = Total available Mem Resources in a Cluster / Mem Slot Size

(Total available Mem Resource = Total Mem Resource – Mem used)

The most restrictive number between the CPU and memory slots determines the number of slots in the cluster.

(Screenshot: HA slot information from the cluster summary)

In the screenshot above:

Total Slots in Cluster = 234

Used Slots in Cluster = 6

Available Slots in Cluster = 150. Why?

Answer:

Available Slots = (Total Slots in Cluster – Used Slots in Cluster) – Slots Configured for Failover Capacity

Now, how do we calculate the slots configured for failover capacity? (Assuming the cluster is configured for one host failure.)

Total Available Slots Per ESX Host (78) = Total Slots in Cluster (234) / No of ESX Hosts (3)

Available Slots (150) = (Total Slots (234) - Used Slots (6)) - Slots Reserved for Failover (78)
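The same arithmetic as a quick shell sketch, using the numbers from the example above:

totalSlots=234; usedSlots=6; hosts=3
failoverSlots=$((totalSlots / hosts))                  # slots reserved for one host failure = 78
available=$((totalSlots - usedSlots - failoverSlots))  # (234 - 6) - 78 = 150
echo $available                                        # prints 150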

Windows Server: How to Repair the Boot Files in Windows Server 2008 or 2008 R2 if the Server Won’t Boot

There are a number of possible causes for the failure of a server to boot into Windows. This article deals with a problem in the boot files and demonstrates how to repair them.

Solution

IMPORTANT: Drive Letters Change in WinRE

When booting to the Windows Recovery Environment (WinRE), drive letters are assigned on a first-come, first-served basis. For example, the C: drive in Windows will often have a different letter in WinRE. The DiskPart utility can be used to keep track of the drives and what is stored on them.

Restoring Boot Files

Note: If there is no System Reserved partition, it is okay to select the drive containing the Windows folder. The example below assumes the following layout:

  • First Partition: 100 MB System Reserved (No drive letter)
  • Second Partition: 60 GB (C:) OS
  • Third Partition: 1.5 TB (D:) Data
  • DVD Drive: E:

 Note: The DVD drive’s letter changes from E: to F: in WinRE.

  1. Boot to the Windows Server DVD.
  2. Open the command prompt:

    Server 2008 R2: If no driver is needed, press Shift+F10 to open the command prompt, then continue with step 3.

    Server 2008 (or 2008 R2 if a driver is required):

    i. Click Next at the first screen.
    ii. Click Repair your computer.
    iii. If no driver is needed, click Next and proceed to step vii below.
    iv. If a driver is needed, click Load Drivers.
    v. Insert the media containing the needed driver. (Note: The media can be a CD, DVD, or USB storage device.)
    vi. Navigate to the folder containing the driver, select it, and click Open.
    vii. Click Command Prompt.

    The command prompt appears.

Set the partition as active

  3. Type DiskPart at the command prompt.
  4. Type List vol at the DiskPart prompt.
  5. Write down the drive letter of the DVD drive. In this example, it is F.
  6. Write down the drive letter of the System Reserved drive. In this example, it is C.
  7. Type Select vol 1 (assuming volume 1 is the System Reserved volume, as it is here).
  8. Type active. This sets the selected volume as active.
  9. Type exit to return to the command line.

Copy Boot Manager

IMPORTANT: Replace the following example drive letters with the actual drive letters obtained in steps 5 and 6 above.

  10. Type Copy f:\BootMgr c:\ at the command prompt. One of two things will happen:
    • If the file Bootmgr already exists on C:, type N to avoid overwriting it.
    • If the file Bootmgr doesn't already exist on C:, it will be copied.

BootRec /FixMBR

  11. Type Bootrec /Fixmbr at the command prompt.

BootRec /FixBoot

  12. Type Bootrec /Fixboot at the command prompt.

BootRec /RebuildBCD

  13. Type Bootrec /RebuildBCD at the command prompt.

  14. If no OS is found, one of the following is true:

    • The boot configuration database (BCD) already exists.
    • The OS is not there.
    • The OS is damaged beyond the ability of BootRec to recognize it.

  15. If Bootrec /RebuildBCD succeeds, it will list any installations of Windows that it found. Press Y to accept and add them to the BCD.

The server is now configured to boot from the proper partition. Close the command prompt and reboot the system into normal mode.

The above method can also be performed using the MS DaRT CD/ISO.

  • DaRT 8.1 – Windows 8.1, Windows Server 2012 R2
  • DaRT 8.0 – Windows 8, Windows Server 2012
  • DaRT 7.0 – Windows 7, Windows Server 2008 R2
  • DaRT 6.5 – Windows 7, Windows Server 2008 R2
  • DaRT 6.0 – Windows Vista, Windows Server 2008

How to Root vSphere ESXi 5.x

Boot the ESXi host from Ubuntu/Linux live media and choose to try the OS without installing it
Open a terminal prompt
#sudo passwd root
Enter New Password
Retype New Password
#su –
#ls
#cd /
#fdisk -l
#mount /dev/sda5 /mnt
#ls -l /mnt (look for the state.tgz file on the mounted drive)
#cd /tmp
#cp /mnt/state.tgz /tmp (copy the state.tgz file to the /tmp directory; you are already in /tmp from the earlier cd)
#tar xzf state.tgz
#tar xzf local.tgz
#cd etc
#ls
#nano shadow
(on the root line, delete the password hash between the first and second colons, then save the file)
Now exit the etc folder:
#cd ..
#tar czf local.tgz etc
#tar czf state.tgz local.tgz
#cp state.tgz /mnt
#umount /dev/sda5
Halt the machine
Remove the Linux ISO
Restart ESXi and log in with a blank root password