Deploy vSphere Replication using OVF Tool

The web client's Client Integration Plugin can be such a pain to get working, especially when you have to rebuild one of the vR appliances.

In this blog post, I will show you an easier way to deploy the vR OVF to vCenter.

To start off, you will need a copy of ovftool. You can download it from the my.vmware portal: https://my.vmware.com/web/vmware/details?productId=614&downloadGroup=OVFTOOL420

I would recommend version 4.2.0 or above to avoid running into deployment bugs.

Installation: 

From an elevated command prompt, change to the OVF Tool installation directory:

 cd "\Program Files\VMware\VMware OVF Tool\"
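
You can confirm that ovftool is installed and check the version (4.2.0 or above, as recommended earlier) before deploying:

 ovftool --version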

Use the syntax below to deploy the vR appliance (replace the placeholder values with the details of your environment):

ovftool --acceptAllEulas -ds="DATASTORE_NAME" -n="SPECIFY VRMS NAME" --net:"Management Network"="PORT GROUP NAME" --prop:"password"="VRMS ROOT PASSWORD" --prop:"ntpserver"="NTP SERVER IP OR FQDN" --prop:"vami.ip0.vSphere_Replication_Appliance"="SPECIFY VRMS SERVER IP" --vService:installation=com.vmware.vim.vsm:extension_vservice <PATH>\vSphere_Replication_OVF10.ovf "vi://administrator@vsphere.local:VCENTER_PASSWORD@VCENTER_IP/?ip=HOST_IP"
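
For example, a completed command might look like the one below. All of the values here (datastore, appliance name, port group, IP addresses, and passwords) are hypothetical placeholders for illustration only:

 ovftool --acceptAllEulas -ds="Datastore01" -n="vR-Appliance01" --net:"Management Network"="VM-Mgmt-PG" --prop:"password"="SomeRootPassword" --prop:"ntpserver"="ntp.example.com" --prop:"vami.ip0.vSphere_Replication_Appliance"="192.168.10.50" --vService:installation=com.vmware.vim.vsm:extension_vservice C:\Temp\vSphere_Replication_OVF10.ovf "vi://administrator@vsphere.local:SomeVCPassword@192.168.10.10/?ip=192.168.10.21"

The ?ip= portion at the end of the vi:// locator tells ovftool which ESXi host in the vCenter inventory to deploy the appliance to.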

Isolating vSphere Replication traffic

On the ESXi host:
* Create a new VMkernel port group on both the source and destination ESXi hosts/clusters (a CLI sketch follows below).
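
If you prefer to do this from the command line, here is a minimal esxcli sketch. The vSwitch name, vmk number, port group name, and IP addressing below are assumptions; adjust them for your environment. The traffic tags apply to vSphere 6.x: the source host's VMkernel interface is tagged for vSphere Replication traffic and the destination host's for vSphere Replication NFC traffic.

 esxcli network vswitch standard portgroup add -p "vR-Replication" -v vSwitch1
 esxcli network ip interface add -i vmk2 -p "vR-Replication"
 esxcli network ip interface ipv4 set -i vmk2 -I 192.168.50.11 -N 255.255.255.0 -t static
 esxcli network ip interface tag add -i vmk2 -t vSphereReplication      # on the source host
 esxcli network ip interface tag add -i vmk2 -t vSphereReplicationNFC   # on the destination host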

Sample networking on the host:

* Add a second NIC to the vR appliance and reboot the appliance.

* Log into the VAMI page of the vR appliance (default URL: https://<appliance IP>:5480)
* Go to Network > Address
* Under the eth1 section, set a static IP address

* Now go back to VR > Configuration
* Fill in "IP address for Incoming Storage Traffic" with the IP address of eth1 and click "Apply Network Settings"

* Validate network and port connectivity (from the source ESXi host to the destination vR appliance):
* Network: vmkping -I vmkX REMOTE_vR_IP   (where X is the VMkernel interface on the host used for replication)
* Port: nc -z vR_IP 31031

* Validate network and port connectivity (from the destination vR appliance to the destination ESXi host); a combined example with sample addresses follows this list:
* curl -v telnet://Destination_ESXi_IP:902
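
Putting the checks together with hypothetical addresses (vmk2 is the replication VMkernel interface from the sketch above; the 192.168.50.x addresses are assumptions):

 # From the source ESXi host to the destination vR appliance
 vmkping -I vmk2 192.168.50.20
 nc -z 192.168.50.20 31031

 # From the destination vR appliance to the destination ESXi host
 curl -v telnet://192.168.50.30:902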


Sample networking configuration (for replication traffic, one-way replication):

DIY Home Server Build

Tech Specs:
Chassis: ANTEC TITAN – EATX TOWER SERVER CHASSIS
Chassis addon: ICY DOCK FLEX-FIT Quattro MB344SP 4 x 2.5″ HDD / SSD Bracket for External 5.25″ Bay
Motherboard: Asus Z10PE-D16 WS
Processor: 2x Intel Xeon E5-2620V3
Memory: 80GB (Kingston KVR 16 GB DDR4 RAM – KVR21R15D4/16 Modules)
Heatsink: Intel Thermal Solution Cooling Fan for E5-2600 Processors BXSTS200C (replaced with 2x DEEPCOOL NEPTWIN V2 to reduce the noise)
PSU: Corsair CXM Series CX750M – 750 Watt ATX Power Supply
HDD: 8x 1TB WD Blue drives (WD10EZEX)
SSD: 2x Samsung and 2x WD Blue SSD
NVMe: Toshiba XG4 1TB NVMe
RAID Controller: 2x HP P410 with 256MB cache
NIC (ad hoc): Broadcom 5709 4-port Gigabit Ethernet adapter
NAS: Netgear ReadyNAS 516 series (6-bay)

I opted for this motherboard mainly for its VMware certification, since VMware support/engineering is really picky when it comes to hardware and certification, and for the onboard BMC.
The motherboard is also scalable (it supports up to 1024GB of DDR4 memory!).

Detailed Specifications of the motherboard can be found here
Motherboard Certifications (link): VMware HCL |  Asus
Best buy: 40k to 45K INR

Special thanks to Bytescale Technologies (Alpesh P., Kushal Shah, and the backend team) for stepping up and sending the 2x4GB modules for testing when the modules ordered from a local retailer did not work. Thanks a ton! You guys are real lifesavers!

Processor: Strictly on a budget; I went for whatever offered the highest core count and clock speed for the money: the Intel Xeon E5-2620 v3.

Memory: Mostly shipped from the US as the local dealers were quoting a whopping 25k for a 16GB module.
Best buy: 6-9K INR (amazon global buy!)

Heatsink: The Intel Thermal Solution Cooling Fan for E5-2600 Processors (BXSTS200C) gets the job done for a 2U chassis. However, it is a noisy beast (it literally sounds like a jet plane firing up its engines).
Since the chassis form factor was not a concern (tower), I swapped them out for DEEPCOOL NEPTWIN V2 coolers with one silent fan on each CPU.

Chassis: Not many were available for an SSI-EEB form-factor motherboard back then. The ANTEC TITAN EATX was the cheapest I could get my hands on that supported 7x 3.5″ drives and was spacious enough for upgrades.
Note that the cable management on this chassis is terrible (there is no space for routing cables behind the motherboard tray).

PSU: I chose the Corsair CXM Series CX750M 750W modular power supply to reduce the cable mess. However, I ended up modding the SATA power connectors to reduce the footprint and the extra-long cables that were being routed.

As you can see from the dates of purchase, these parts were procured over a period of time to build up to the current specification (more upgrades to come).

Testbed:

Drives: 


Tweaking the RAID controller cache for performance:
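
For reference, this kind of cache tuning can also be done from HP's Smart Storage Administrator CLI (hpssacli). This is only a sketch; the slot number, logical drive number, and cache ratio below are assumptions for illustration:

 hpssacli ctrl all show config detail                                 # confirm the controller slot and cache module
 hpssacli ctrl slot=0 modify cacheratio=25/75                         # bias the 256MB cache towards writes
 hpssacli ctrl slot=0 logicaldrive 1 modify arrayaccelerator=enable   # enable controller cache for the logical drive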

Moved them to the Titan chassis:

Update: removed the GPU; 6 drives used for testing the NAS.

The NAS box (iSCSI SAN):

The NVMe:

Conclusion: Was it worth it?

At the time of building: definitely yes! However, since mid-2017, when GST came into effect, and due to other factors, the cost of memory has skyrocketed. The 16GB modules that I had bought for ~6 to 9K are currently at 16-20K per module.

The LGA2011 series has always been aimed at PC enthusiasts. The cost of the processors has not changed much; however, comparing the consumer and server baselines, you pay a lot more for the compute.

The management functionality (BMC) on this board does a fair job (it cannot be compared to Dell's DRAC or HP's iLO). It still needs Java 7 to work; for some reason, even with the firmware upgrade, I have trouble getting the latest version of Java working with the console.


Is it still worth building?

Writing this in late 2017: it depends on the use case. If it is a whitebox that you are after, you might get better compute and memory for a lower cost.

I would still say no if you are on a budget, considering that the cost of memory has not gone down for about nine months now.