Step by step how to install Nutanix CE nested on VMware ESXi

I know that it is not ideal to run Nutanix CE in a nested configuration, as performance will suffer, but if you don’t have separate compatible hardware available you can still install it and play with it. Since version ce-2015.06.08-beta, Nutanix Community Edition can run nested on various VMware products. This post describes how to install Nutanix CE nested on ESXi.

ESXi host configuration

2 x 8-core Intel Xeon E5-2680
1024GB HDD
VMware ESXi 5.5 U2

Nutanix CE VM configuration

12 vCPUs (2 sockets x 6 cores)
400GB vmdk on SSD
900GB vmdk on HDD
8GB Nutanix CE image as vmdk
1 x Intel e1000 virtual NIC


I’m assuming that registration and downloading of Nutanix CE are already done.

Step 1 – prepare the boot disk

  1. Extract ce-2015.06.08-beta.img from the archive.
  2. Rename ce-2015.06.08-beta.img to ce-flat.vmdk.
  3. Create a disk descriptor file, or download one from Joep Piscaer’s blog.
  4. Rename the descriptor file ce.txt to ce.vmdk.
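The steps above can be sketched as a few shell commands. The descriptor below is an illustrative minimal sketch (the geometry/ddb entries a real descriptor carries are trimmed, and the sector count is computed from the file size); the stand-in 8 MB file only exists so the sketch is self-contained — on the datastore you would point this at the real ~8 GB image.

```shell
# Step 1 as shell commands. Assumes the image has already been extracted, e.g.:
#   gunzip ce-2015.06.08-beta.img.gz && mv ce-2015.06.08-beta.img ce-flat.vmdk
FLAT=ce-flat.vmdk
# create a small stand-in file if you are trying this outside the datastore
[ -f "$FLAT" ] || dd if=/dev/zero of="$FLAT" bs=1M count=8 2>/dev/null

# a VMDK extent is 512-byte sectors, so derive the count from the file size
SIZE=$(stat -c%s "$FLAT" 2>/dev/null || stat -f%z "$FLAT")
SECTORS=$(( SIZE / 512 ))

# write a minimal descriptor (ce.vmdk) that points at the flat image
cat > ce.vmdk <<EOF
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfs"

# Extent description
RW $SECTORS VMFS "ce-flat.vmdk"

# The Disk Data Base
ddb.adapterType = "lsilogic"
ddb.virtualHWVersion = "10"
EOF
```

The descriptor from Joep Piscaer’s blog is the safer option; this sketch mainly shows why the flat file has to be named exactly ce-flat.vmdk — the extent line references it by name.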

Step 2 – prepare the VM

  1. Create a VM
    1. Create a new CentOS 4/5/6/7 (64-bit) VM.
    2. 12 vCPUs (minimum of 4 vCPUs)
    3. 128GB RAM (minimum of 16GB RAM)
    4. 900GB vmdk for HDD – map as SCSI0:0 (the HDD has to be at least 500GB)
    5. 350GB vmdk for SSD – map as SCSI0:1 (the SSD has to be at least 200GB)
    6. Intel e1000 NIC
  2. Browse the datastore where you stored the VM and upload ce-flat.vmdk and ce.vmdk.
  3. Edit the VM
    1. Enable virtualization for the VM – select CPU -> Hardware virtualization -> “Expose hardware assisted virtualization to the guest OS”.
    2. Add a new SATA controller.
    3. Attach the existing disk ce.vmdk as a SATA disk.
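For reference, the edits above end up as entries like the following in the VM’s .vmx file. Treat this as an illustrative fragment under the configuration described in this post, not a complete .vmx:

```
guestOS = "centos64Guest"
numvcpus = "12"
cpuid.coresPerSocket = "6"
memSize = "131072"
vhv.enable = "TRUE"              # "Expose hardware assisted virtualization"
sata0.present = "TRUE"           # the new SATA controller
sata0:0.present = "TRUE"
sata0:0.fileName = "ce.vmdk"     # boot disk (descriptor + ce-flat.vmdk)
ethernet0.virtualDev = "e1000"
```

The key line is vhv.enable = "TRUE" — without it the nested AHV hypervisor cannot start its own VMs (including the Controller VM).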

Step 3 – installation

  1. Start the VM -> press F2 to enter the BIOS -> set the SATA disk as the default boot disk.
  2. Boot the server from the SATA disk.
  3. Optional: modify the SSD IOPS requirements
    1. Log in as root (password nutanix/4u).
    2. Edit /home/install/phx_iso/phoenix/ and change the lines “SSD_rdIOPS_thresh = 5000” and “SSD_wrIOPS_thresh = 5000”.
  4. Optional: modify the Controller VM (CVM) memory
    1. Log in as root (password nutanix/4u).
    2. Edit /home/install/phx_iso/phoenix/ and change the line “SVM_GB_RAM = 16”.
  5. To start the installation, log in as user “install”.

Step 4 – enable “Promiscuous mode”

  1. Enable “Promiscuous mode” on the vSwitch or virtual machine network to which the Nutanix CE VM is connected; otherwise the Controller VM is not accessible.
  2. Update 03.08.2016 – you may also need to enable “Forged transmits”. Thanks to Thiago for pointing this out.
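Both settings can also be applied from the ESXi shell instead of the vSphere client. A sketch, assuming a standard vSwitch named vSwitch0 (adjust the name to wherever the CE VM is attached); the command is built into a variable and echoed here so the flags are visible — on the ESXi host, run the esxcli line directly:

```shell
# assumption: vSwitch0 is the vSwitch carrying the Nutanix CE VM
VSWITCH=vSwitch0
CMD="esxcli network vswitch standard policy security set --vswitch-name=$VSWITCH --allow-promiscuous=true --allow-forged-transmits=true"
echo "$CMD"   # on the ESXi shell, execute this command rather than echoing it
```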


I used these steps to successfully install Nutanix CE on top of VMware ESXi 5.5 U2 running on an HP server that is not on the Nutanix HCL.

More information

Joep Piscaer describes how to install Nutanix CE on VMware Fusion, which also works for VMware Workstation.


24 thoughts on “Step by step how to install Nutanix CE nested on VMware ESXi”

      • I tried several times with the same result… now I have to connect to the CVM and it does not work with that username/password. For that reason I’m asking if I’m missing something (the password is correct, but there is no login)!?

      • The root user/pass works perfectly… but the other one does not! I can see that many people don’t have problems, but after your step 3 described here, how do I connect further to the CVM and the rest? I’m stuck at the CVM login!

      • SSH can be used to log in to the host and the CVM. HTTP is the usual way to log in to the Nutanix web-based management interface after install. You need internet access to log in to the web interface.

  1. Kalle,

    Thanks for adding this post. I tried your steps above but am not able to see my ce-flat.vmdk and ce.vmdk when I browse the datastore while adding hardware.

    Any suggestions?

  2. Hello,

    the CVM is not getting an IP. I can’t ping it or access Prism at the web URL:
    The VM is set up accordingly, and promiscuous mode is enabled on the VSS. I am using VMware ESXi 5.5 U2.

    Could you please help me further in this case?

    nutanix@NTNX-404bb37d-A-CVM:$ cluster status
    2015-11-13 00:47:48 INFO cluster:1807 Executing action status on SVMs
    The state of the cluster: start
    Lockdown mode: Disabled

    CVM: Up, ZeusLeader
    Zeus UP [5771, 5793, 5794, 5796, 5836, 5849]
    Scavenger UP [8019, 8044, 8045, 8189]
    SSLTerminator UP [8451, 8529, 8530, 8651]
    Hyperint UP [8481, 8539, 8540, 9043]
    Medusa UP [8898, 8923, 8924, 8988, 9719]
    DynamicRingChanger UP [14371, 14401, 14402, 14633]
    Pithos UP [14385, 14422, 14423, 14603]
    Stargate UP [14414, 14472, 14473, 14754, 14757]
    Cerebro UP [14425, 14467, 14468, 14845]
    Chronos UP [14475, 14500, 14501, 14720]
    Curator UP [14495, 14522, 14523, 14898]
    Prism UP [14530, 14582, 14584, 14707, 15279, 15333]
    CIM UP [14568, 14630, 14632, 14702]
    AlertManager UP [14618, 14668, 14669, 14860]
    Arithmos UP [14796, 14832, 14833, 15124]
    SysStatCollector UP [14818, 14863, 14864, 15256]
    Tunnel UP [14838, 14933, 14934]
    ClusterHealth UP [14892, 14972, 14975, 15275, 15326, 15327, 15729, 15731]
    Janus UP [14930, 15074, 15075, 15346]
    NutanixGuestTools UP [14959, 15104, 15105]
    2015-11-13 00:47:49 INFO cluster:1868 Success!

    nutanix@NTNX-404bb37d-A-CVM:$ ifconfig
    eth0 Link encap:Ethernet HWaddr 52:54:00:C9:3E:B8
    inet addr: Bcast: Mask:
    inet6 addr: fe80::5054:ff:fec9:3eb8/64 Scope:Link
    RX packets:17248 errors:0 dropped:0 overruns:0 frame:0
    TX packets:8115 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:2819050 (2.6 MiB) TX bytes:1623503 (1.5 MiB)

    eth1 Link encap:Ethernet HWaddr 52:54:00:A5:68:FD
    inet addr: Bcast: Mask:
    inet6 addr: fe80::5054:ff:fea5:68fd/64 Scope:Link
    RX packets:4984 errors:0 dropped:0 overruns:0 frame:0
    TX packets:4177 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:6302189 (6.0 MiB) TX bytes:755053 (737.3 KiB)

    eth1:1 Link encap:Ethernet HWaddr 52:54:00:A5:68:FD
    inet addr: Bcast: Mask:

    lo Link encap:Local Loopback
    inet addr: Mask:
    inet6 addr: ::1/128 Scope:Host
    UP LOOPBACK RUNNING MTU:16436 Metric:1
    RX packets:209213 errors:0 dropped:0 overruns:0 frame:0
    TX packets:209213 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:30916753 (29.4 MiB) TX bytes:30916753 (29.4 MiB)

  3. Hi,

    I used the CE edition ce-2015.11.05-stable.img.gz and followed all the steps, but the CVM is not starting. Can someone please guide me through the post-installation steps?

  4. Pingback: Nutanix Community Edition nested on ESXi |
  5. Pingback: Install Nutanix on ESXi Links | JNewmaster
  6. One thing I’ve found is that you have to make sure that the CE VM’s are on different vSwitches if you run more than one on the same ESXi host. They cannot share the same physical NIC, apparently.

  7. Kalle,
    Thanks and good effort. Precise and perfect guide to setup Nutanix CE.
    Configured Cluster with ce-2016.08.27-stable release.

  8. Pingback: Nested Nutanix Community Edition Home Lab Cluster - CCIE44938
  9. Pingback: Nutanix CE auf VMWARE ESXi gehostet – Ein deutschsprachiger IT Blog
  10. I have my cluster up and running, but after some time I am not able to ping the CVM IP. When I log in to the host and check the cluster status, it says “Can’t open status”.

  11. I used the latest source for Nutanix CE (ce-2017.02.23-stable.img.gz).
    Unfortunately I’m not able to boot up AHV. I followed every instruction in the article above.
    My VM gets stuck while booting the OS:
    [ ****] A start job is running for dev-disk-by\x2duuid-4a29f354\x2d…….

    Is anyone else seeing the same issue?
