In this article, I will describe the basic setup for my lab running VMware vSAN on Mac Pro 6,1. There is nothing tricky about the setup and VMware makes it quite easy. The obvious downside is the cost of Apple hardware.

If you are just looking to virtualize macOS, try VMware Fusion on your Mac.

About vSAN Support
The lab example shown here is close, but it is not a supported vSAN configuration. This is mostly because the RAID controller (ATTO H680) is not on the HCL. For true support you would also want HCL-listed disks.

Supported Solution
The recommendation is to use the ATTO FC solution, which has recently been certified by VMware. William Lam has more info about the announcement at:

About Version

I have not upgraded the ESXi hosts beyond version 6.0 Update 2. To go higher, I would need to update the Mac Pro 6,1 firmware to the level recommended by the VMware HCL. That means installing macOS (the only way to get the firmware updates), then flipping back to ESXi. I've done my share of those swaps, and it is a lot of work.

The benefit of periodically (say, quarterly) installing macOS directly onto the Mac Pro is current firmware, which opens up support for the latest ESXi and the latest virtualized macOS.

Swipe In

Let's hit the lab. Time to look at this unsupported vSAN setup. This deployment is probably old enough to still be called VSAN (uppercase V). The cluster has been up for four years, and the hosts are rebooted only once every year or two.

Kit: Per Host

There are four (4) Apple Mac Pro 6,1 desktops in this setup. The following shows the detail of a single host.

Apple Mac Pro 6,1
1 Processor Socket, 4 Cores
Sonnet xMac Pro Rack Kit
Sonnet Storage Expansion x8 Edition
ATTO H680 RAID Card
6 x Magnetic/Spinning (capacity tier)
2 x SSD (caching tier)
2 x Thunderbolt to Ethernet (GOS Networking)
1 x Thunderbolt to PCIe Intel x540 10G (2 ports for vmkernel)

Note: The Apple on-board SSD can be used for the installation of ESXi, or you can leave that disk as is, preserving the macOS installation. You can also use the disk as a local VMFS.


Let's take a visual look at the setup using the vSphere web client. We start with the Summary page. This simple 4 node cluster delivers ~21 TB of highly available storage.

The lab is lightly loaded right now, but latency remains stable even under high load. Above we can see Apple Labs enjoying < 5ms latency.

The Network

All networking is handled by vSphere Distributed Switches. This design has one DVS for guest operating system consumption and one for vmkernel features such as vSAN, FT, vMotion, etc. We take the defaults and let the DVS handle QoS.
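If you want to verify that plumbing from the host side, the vSAN vmkernel tagging and DVS membership can be checked from the ESXi shell. A quick sketch (output is host-specific):

```shell
# Show which vmkernel interface is tagged for vSAN traffic on this host
esxcli vsan network list

# List the distributed switches this host participates in,
# including uplinks and MTU
esxcli network vswitch dvs vmware list
```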

You can easily add more uplinks or more nodes to the cluster. This is a cattle design, and each node should be expected to fail. This is only a four-node cluster, but the design is no different at 32 nodes.

Gigabit GOS

Using Thunderbolt to Ethernet adapters on each host

10 GbE VMKernel Services

Using the Intel x540 10 GbE (the Sonnet xMac Pro Server supports both PCIe and Thunderbolt).

Physical NICs

Beyond the regular gigabit NICs that come stock on the Mac Pro, we can also add external Thunderbolt or PCIe 10 GbE cards. I use a mix of the following card types, and they all enumerate as Intel x540 (10 GbE). These NICs are all on the VMware HCL.
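To confirm how the adapters enumerate, the standard NIC listing works on these hosts the same as anywhere else (a sketch; output is host-specific):

```shell
# List physical NICs with driver, link state, and speed;
# the Thunderbolt and PCIe adapters all show up as Intel x540 ports
esxcli network nic list
```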

Pictured are the Sonnet Twin 10G and the Promise SANLink2 (both Thunderbolt 2 to 10 GbE). Both are Intel x540.

In the above we can also see the 10 GbE NetGear switch (used exclusively for vmkernel services).

Intel x540 PCIe Card

This real PCIe card can go right into the Sonnet xMac Pro Server chassis!

The Disks

If you have some old 1 TB spinning disks from Mac Minis lying around, they fit perfectly; I use them for capacity, then add one SSD per disk group for vSAN caching. When I don't have spares, I just pick up cheap disks from Best Buy. For production, of course, you should only use HCL disks.


List Disks from esxcli

When grabbing disks to throw in, you can just use the GUI. However, you can also use the command line if desired. Here we use esxcli to list the existing disks.

[root@esx01:~] esxcli storage core device list | grep -i vmfs
   Devfs Path: /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0
   Devfs Path: /vmfs/devices/disks/t10.ATA_____APPLE_SSD_SM1024G_______________________S218NYAG901426______
   Devfs Path: /vmfs/devices/disks/naa.500108600090cae8
   Devfs Path: /vmfs/devices/disks/naa.500108600090cae9
   Devfs Path: /vmfs/devices/disks/naa.500108600090caea
   Devfs Path: /vmfs/devices/disks/naa.500108600090caeb
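To feed those device names into later commands, you can trim the listing down to just the NAA identifiers. A small sketch built from the same esxcli output:

```shell
# Extract just the naa.* device identifiers from the esxcli listing,
# ready to pass to partedUtil or esxcli vsan storage commands
esxcli storage core device list \
  | grep -i 'Devfs Path' \
  | awk -F/ '/naa\./ {print $NF}'
```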

Show Disk Detail with partedUtil

Using the output you gathered from esxcli, you can then use partedUtil to see the partition table information for a particular disk.

[root@esx01:~] partedUtil getptbl "/vmfs/devices/disks/naa.500108600090cae8"
121601 255 63 1953525168
1 2048 1953523711 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

In the above, the first field is the partition number (1, just before the 2048). The vmfs at the end of the line indicates the partition is formatted as a VMFS datastore. This is one I would wipe out and repurpose for vSAN.
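Wiping such a disk is also a one-liner with partedUtil. Be careful: this destroys the VMFS datastore on the device. A sketch, using the example device from above:

```shell
DISK="/vmfs/devices/disks/naa.500108600090cae8"

# Double-check what is on the disk before destroying anything
partedUtil getptbl "$DISK"

# Remove partition 1 (the VMFS partition), freeing the disk for vSAN
partedUtil delete "$DISK" 1
```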

Disk Group Design

I run VMware vSAN in a hybrid configuration, and it has been up for four years. The setup is unsupported because of the controller I use (the H680), but it works great for a lab.

Per Host:

  • 2 Disk Groups
  • Each Disk Group is 4 disks (3 spinning, 1 SSD)
  • Each vertical column makes up a disk group
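Disk groups can be built from the command line as well as from the web client. A hedged sketch with esxcli (the device IDs are placeholders; substitute your own SSD for -s and spinning disks for -d):

```shell
# Build one hybrid disk group: one SSD cache device (-s)
# plus three magnetic capacity devices (-d)
esxcli vsan storage add \
  -s naa.50015178f3f2a111 \
  -d naa.500108600090cae8 \
  -d naa.500108600090cae9 \
  -d naa.500108600090caea
```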

VMware vSAN

Below we can see the actual disk groups in vSAN, as viewed from the vSphere Web Client. One disk group is highlighted, and we can see its detail at the bottom.

vSphere HA

When configuring vSphere HA, I manually add the advanced setting das.isolationAddress0. I set it to an IPv4 address (e.g., some gateway) that the ESXi management interfaces can reach. This is important when the vSAN network runs on a network that cannot reach the ESXi management interfaces. I also use crafted NFS volumes for heartbeat datastores.

Storage Types

My finished setup serves up NFS, block (local VMFS), and vSAN storage. I create NFS volumes by deploying a virtual machine on vSAN that runs the NFS service (e.g., EMC vVNX). For block storage I use local VMFS, though we could use iSCSI served up by vSAN. Anything on local storage is not protected, so real workloads should stay on vSAN.

Collecting Stats

My use case is unique in that I must have several types of storage to capture disk I/O metrics (NFS, block, and vSAN). Each of these is captured slightly differently with VMware automation, so having a kit like this with all storage types is great.

Finished Product

Here we can see the four node Mac Pro 6,1 cluster running in the rack. Also pictured (but not needed) is a Mac Mini Edge Cluster. Finally at the bottom is a stand-alone node.


In this article we described an example lab setup using VMware vSAN for the Mac Pro 6,1. We used a storage kit from Sonnet Tech that supports up to 8 disks per host. Finally, we put it all together to create a highly available lab environment.