Installing VMware vGPU on vSphere 6.0

vGPU

vGPU is a concept where a user of a virtual desktop can utilise the power of a GPU for running rich 3D applications, such as AutoCAD or other drawing applications, rendering workloads or simply full HD YouTube. In this concept, the virtual desktop gets a direct or shared line to the GPU for 3D processing, on any device.

NVIDIA vGPU Profiles

NVIDIA is the catalyst behind this technology, and the hypervisor layer has to support it for you to be able to take advantage of this powerful tech. Citrix has supported it for a while and Microsoft has RemoteFX, but VMware only joined this club very recently with vSphere 6.0.

vGPU is supported on the NVIDIA GRID K1 and K2 cards, with four vGPU profiles each. These vGPU profiles are resource profiles which you can assign to virtual desktops. Below is an overview of these profiles, which includes the resources and intended users:

NVIDIA vGPU Profiles

It is important to note that vGPU profiles have to be planned properly, because a single physical GPU can only host a single type of vGPU profile. For example, a single GPU can host four virtual desktops with profile K240Q (1GB GPU memory), but not three K240Qs and two K220Qs (512MB GPU memory):

NVIDIA vGPU Profiles

Proof of Concept

On to the specifics. I did a proof-of-concept project at a large municipality that uses 3D imaging for the entire blueprint of their streets, parcels of land, plumbing, power lines and fibre; everything in and on the ground. They already use a combination of vSGA and vDGA for this, but the vGPU technology would allow them to consolidate more users on a single server and be more flexible with resource deployments.

This proof of concept was running with Cisco UCS C240 servers with dual Intel Xeon 2.60GHz E5-2670 processors, 384GB of RAM, a single Emulex OCe11102-FX 10Gbit CNA and dual NVIDIA GRID K2 cards. We used two profiles during these tests, one for heavy users (K260Q) and one for moderate users (K220Q).

In this setup, we could place 8 heavy users per server (instead of 4 with vDGA, twice the amount) or 32 moderate users on a single server. The rest of the server's memory and CPU was to be filled with normal virtual desktops.
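As a quick sanity check on those numbers (assuming the usual GRID K2 layout of two physical GPUs with 4GB of memory each per card): dual K2 cards give 4 physical GPUs per server. A K260Q profile uses 2GB, so 4GB / 2GB = 2 desktops per GPU = 8 heavy users per server. A K220Q profile uses 512MB, so 4GB / 512MB = 8 desktops per GPU = 32 moderate users per server.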

Installation

For a complete step-by-step installation, please refer to the vGPU Deployment Guide (link to be added when it goes GA). I'll be highlighting the most important vGPU-related steps below; other installation steps still have to be taken as usual. The platform was based on the following components:

  • VMware vCenter 6.0 RC
  • VMware ESXi 6.0 RC
  • VMware Horizon View 6.1 EA
  • NVIDIA GRID vGPU Manager EA

Before continuing: the next part assumes you have a working platform with the components named above and that you're ready to configure the vGPU part.

BIOS Settings

For vGPU to work properly, you need to configure a few BIOS settings, and these differ per server vendor. The settings for Cisco, Dell, HP, Supermicro and others can be found in the vGPU Deployment Guide; below are the ones for Cisco (as this POC was based on Cisco):

  • CPU Performance: Set to either Enterprise or High Throughput.
  • Energy Performance: Set to Performance.
  • PCI Configuration: Set MMCFG BASE to 2 GB and Memory Mapped I/O above 4GB to Disabled.
  • VGA Priority: Set to Onboard VGA Primary.

NVIDIA vGPU Manager

The vGPU Manager is the software on the ESXi host that makes the whole thing possible: it creates the link between the GPU and the virtual desktop and is packaged as a VIB. To install it, copy the VIB to the ESXi server via the datastore browser or SCP and install it:

[root@ESX60-01:~] esxcli software vib install --no-sig-check -v /vmfs/volumes/ISO/vib/NVIDIA-vgx-VMware-346.27-1OEM.600.0.0.2159203.x86_64.vib
Installation Result:
   Message: Operation finished successfully.
   Reboot Required: false
   VIBs Installed: NVIDIA_bootbank_NVIDIA-vgx-VMware_vSphere_6_Host_Driver_346.27-1OEM.600.0.0.2159203
   VIBs Removed:
   VIBs Skipped:
[root@IUESX60-01:~] esxcli software vib list | grep -i nvidia
NVIDIA-vgx-VMware_ESXi_6.0_Host_Driver  346.27-1OEM.600.0.0.2159203           NVIDIA  VMwareAccepted    2015-02-06
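
With the VIB installed (a host reboot does not hurt, even though the installer reports none is required), a quick sanity check from the ESXi shell confirms the vGPU Manager is active. This is a minimal sketch; the vGPU Manager VIB ships nvidia-smi, so both commands should work on a prepared host:

[root@ESX60-01:~] vmkload_mod -l | grep nvidia    # the NVIDIA kernel module should be loaded
[root@ESX60-01:~] nvidia-smi                      # should list every physical GPU on the GRID cards

If nvidia-smi does not find any devices, re-check the BIOS settings above and the VIB installation before moving on.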

Creating vGPU Virtual Machines

When the ESXi hosts are prepared with the vGPU Manager, you can create the virtual desktop machines for your users. There are two ways to deploy virtual desktops in Horizon View later on: manual creation or cloning from a template. It's up to you which option to choose; in our case I used manual cloning due to the small number of virtual desktops (not hundreds).

There are a few requirements that will shape your virtual desktop. First, you need to choose virtual hardware that is compatible with ESXi 6.0 and above. The vCPU and vRAM can differ per user type:

  • Entry Level Engineer / Design Reviewer: 4 GB RAM, 2 vCPUs
  • Mid Level Engineer: 8 GB RAM, 2-4 vCPUs
  • Advanced Engineer: 16 GB RAM, 4-8 vCPUs

The rest of the virtual desktop specifications are up to you; the specifications above are needed for optimal GPU performance. When creating the virtual desktop, there are three things you need to do to enable it for vGPU. First, you need to add a Shared PCI Device:

Add Shared PCI Device

After adding the Shared PCI Device, select NVIDIA GRID vGPU as the type of PCI device. After that, select the vGPU profile, which determines how much GPU memory is allocated to this VM. Lastly, you need to reserve all memory for these virtual desktops. There's an easy button for this:

Select vGPU Profile
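
For reference, these three settings end up as plain entries in the virtual machine's .vmx file, which can be handy when verifying (or scripting) which profile a desktop actually received. A rough sketch of what to expect is below; the profile name (grid_k260q) and the reservation value (16384 MB, matching a 16 GB heavy-user desktop) are illustrative and depend on your own profile and VM size:

pciPassthru0.present = "TRUE"
pciPassthru0.vgpu = "grid_k260q"
sched.mem.min = "16384"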

Install your preferred Windows version (7 or later) in the virtual desktop and install the NVIDIA GRID driver inside it. This installation can be a next-next-finish, but if for some reason you need to reinstall it, check the “Perform a clean installation” box to reset any customised settings.

Install NVIDIA Driver

After the first virtual desktop is installed, either convert it to a template for a floating desktop pool or clone it a few times for use in a dedicated desktop pool.

Horizon View Pool Configuration

Once the virtual desktop configuration is done, you can start setting up VMware Horizon View. As mentioned before, I'm not going to go into every step, just the vGPU bits. By now you should have a working View Connection Server, coupled to your vCenter with your ESXi hosts.

To add a desktop pool with vGPU capabilities, you do not have to do much differently than with a normal desktop pool. The only differences are three options in the Desktop Pool Settings:

  • Default display protocol: PCoIP
  • Allow users to choose protocol: No
  • 3D Renderer: NVIDIA GRID VGPU

Desktop Pool Settings

After adding the pool, grant your users or groups entitlements on the pool and Bob should be your uncle. Log in with the View Client and select the pool you just created to get to your desktop. Browse to the configuration screen and open the NVIDIA Control Panel to verify the vGPU configuration.

NVIDIA Control Panel
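
Besides the NVIDIA Control Panel, you can also check from a command prompt inside the desktop, assuming the GRID guest driver installed nvidia-smi in its default location (the path below is the usual one, adjust if needed):

C:\> "C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe"

The output should list the assigned vGPU (for example a GRID K260Q) as the GPU of the virtual desktop.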

Conclusion

vGPU is a bit easier to configure than the previous vDGA and vSGA, while delivering more performance and a higher consolidation of users per server. The preliminary results of the proof of concept are amazing, boosting performance by 150% to 200% and the number of users per server by 100%. I'm looking forward to finalising the results in a few weeks.

If you would like more detail on installing and configuring vGPU, check the vGPU Deployment Guide for a step-by-step installation of all components.




Comments

  1. Great article! Is vMotion/DRS/HA supported with vGPU-enabled VMs?

    • Martijn

      March 8, 2015 at 14:07

      Hi Michael, thanks! Unfortunately, vMotion and HA are not supported on vGPU-enabled VMs. I'm hoping that'll change with later 6.x versions.

  2. Hello,

    where do I get VMware Horizon View 6.1 EA and NVIDIA GRID vGPU Manager EA?
    Is there a beta program?

    Thanks for your answer.
    Georg
