Archive for the ‘Disaster Recovery – Fault Tolerance – High Availability’ Category

Boot from SAN / physical server RecoverPoint replica to VM

Wednesday, January 4th, 2017

2017-01-04 Initial Post

A few months ago I looked into this and could not find much information about it. In theory it seemed like it would work. I even asked two different consultants about it and neither had any experience with this. After testing multiple times, I can confirm that this works.

Goal:

Use existing EMC RecoverPoint (hardware appliance) to replicate entire physical server (OS and data drives) to DR site and then mount the replicated LUNs as RDMs to a VM at the DR site.

Currently the physical server uses a local direct-attached disk for its OS partition. It does utilize SAN LUNs for data so it already has connectivity to the SAN. One of the major tasks is to move the OS partition from the local disk to the SAN and then enable boot from SAN.

Result:

This works pretty much exactly as I expected. The VM boots up into Windows fine. DNS updates the AD DC/DNS server in the DR site and clients are able to access the VM. Failback also works, so any changes made while in “VM mode” will be seen by the physical server.

Why would you want to do this?

If you only have a handful of physical servers and you already use RecoverPoint and SRM to replicate VMs, there’s little justification for bringing in something like Double-Take or PlateSpin, since those cost thousands of dollars and add more complexity and steps to your DR plan. Yes, with my method you have to manually attach the RDMs to the VMs, but you can prep some of this ahead of time with a placeholder VM, so during an actual failover it really only takes a few minutes per VM. And you could attempt to automate some of this to make it even quicker.

Hardware and software used during test:

  • Server: HP Blade BL460c Gen8 with QLogic QMH2572 HBA and Dell PowerEdge R620 with Emulex LPE1150 HBA
  • OS: Windows Server 2012 R2
  • Storage: EMC VNX5600 and VNX5200
  • Replication: RecoverPoint 4.1.2.3
  • VMware vSphere/ESXi: 5.5U2
  • Imaging: Macrium Reflect 6.1.1366

High-level steps:

  • Configure proper FC zoning, VNX LUNs, and RP CGs in both production and DR sites.
  • Install Macrium and use it to image the local C: drive onto a new SAN-based LUN, which becomes the new C: drive.
  • Reboot the server and configure the HBA to boot from SAN (this is one of the trickier parts because each HBA vendor does it a different way).

To test, fail over the RP CG, then attach the replicated LUNs to the DR VM.

I don’t have time to detail every little step, but any competent storage/server/VM admin will be able to figure them out. The point of this post is to make it known that this does work and is a viable option for DR.
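To give a feel for what automating the RDM attachment might involve, here's a rough sketch. This is not the exact procedure from my testing: the VM names, LUN NAA IDs, and datastore name below are made up, and a real script would feed these parameters into something like a pyVmomi reconfigure call or PowerCLI. The sketch just assembles the per-VM parameters a failover runbook would need.

```python
# Sketch: describe the parameters needed to attach a replicated LUN
# to a DR placeholder VM as a physical-mode RDM. All names and IDs
# here are hypothetical; a real script would pass these values to a
# vSphere API call (e.g. pyVmomi ReconfigVM_Task) or PowerCLI.

def build_rdm_attach_spec(vm_name, lun_naa_id, datastore):
    """Describe one RDM attachment for a DR failover runbook."""
    return {
        "vm": vm_name,
        # Physical-mode RDM passes SCSI commands straight through,
        # which matches how the physical server saw the LUN.
        "compatibility_mode": "physicalMode",
        "device_name": f"/vmfs/devices/disks/naa.{lun_naa_id}",
        # The RDM mapping file itself lives on a regular datastore.
        "mapping_datastore": datastore,
    }

# Example: the replicated OS LUN and one data LUN for one server.
specs = [
    build_rdm_attach_spec("DR-SRV01", "600601604f2031002a", "DR-DS01"),
    build_rdm_attach_spec("DR-SRV01", "600601604f2031002b", "DR-DS01"),
]
for s in specs:
    print(s["vm"], s["device_name"])
```

Driving the attach from a list like this is what would cut the per-VM failover time down further than the placeholder-VM prep alone.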

Backup Notes – Windows 7, VSS, TrueCrypt

Sunday, May 19th, 2013

2013-05-19 Updated

2012-01-01 Initial Post

Windows 7 Ultimate SP1 (x86 and x64)

I started using this a few months ago and it works really well. When I was using XP, I had made up my own backup batch file using XCopy and other commands. That batch file didn't work well with Windows 7, so that's when I started using Windows 7 Backup. And I'm glad I did. Sometimes things don't work and then I find something better, like when my old printer broke and I ended up getting a Canon PIXMA MX350 wireless all-in-one device, which has worked out so much better. (more…)

Offline Files in Windows XP and Vista/Windows 7, Misc Notes and How to Move the CSC Folders

Sunday, April 29th, 2012

2012-04-29 Updated

I researched this a while back before Windows 7 even came out, but it looks like Windows 7's offline files feature behaves pretty much the same way as in Vista. (more…)

Install Windows and Exchange Hotfixes on Exchange Server 2003 Cluster, Simplified How-To

Monday, January 10th, 2011

2011-01-10 Updated

2008-12-11 Original post

Tested on Microsoft Cluster Server - Windows Server 2003 R2 Enterprise Edition, SP2 and Exchange Server 2003 Enterprise Edition, SP2 for both Windows and Exchange hotfixes.

These are my simplified instructions, based on the instructions from How to apply Exchange service packs and hotfixes. I’ve only used this for hotfixes, since neither Windows Server 2003 nor Exchange Server 2003 has had a service pack released after SP2.

I’ve used the steps below to update a two-node active/passive cluster in production and test environments with no issues. I used http://update.microsoft.com and selected the custom option. Change the node names to suit your configuration.

Prior to any updates to a cluster, you should fail over the active node to verify that failover works properly. There isn’t strictly a technical requirement to do this, but it ensures that the passive node is working properly before you run any updates. This is included in the steps below.

  1. Check the event logs on both nodes for errors and ensure proper system operation.
  2. Make system state backups of both nodes and make a full or incremental backup of all Exchange stores.
  3. In Cluster Administrator, right-click on NODE-01 --> click on Stop Cluster Service. This will automatically start the failover of all cluster groups/resources over to NODE-02.
  4. Check the event logs on both nodes for errors and ensure proper failover and system operation before continuing.
  5. In the services applet on NODE-01, set Cluster Service to disabled.
  6. Install Windows and Exchange updates on NODE-01 and then reboot the node as necessary. Repeat this step after the reboot until all updates are installed; this is necessary because some updates require that others be installed first. You can install updates directly from a file or from http://update.microsoft.com. Make a note of exactly which updates you install so that only those same exact updates are later installed on the other node: copy and paste the list of updates from the Microsoft Update screen into a text file, then use the FC command to compare the file from the first updated node to the file from the last updated node. Here's an example command to compare two files: C:\>FC X:\File1.txt X:\File2.txt. Microsoft Update will show the Exchange updates with the word “Cluster” at the end, showing that it understands that the server is part of a cluster. You’ll get a lot of prompts for the Exchange updates, so don’t walk off during the updates.
  7. Check the event log on NODE-01 for errors. If you find any errors, troubleshoot them before continuing.
  8. In the services applet on NODE-01, set Cluster Service to automatic.
  9. Reboot NODE-01 one last time.
  10. In Cluster Administrator, right-click on NODE-02 --> click on Stop Cluster Service. This will automatically start the failover of all groups/resources over to NODE-01.
  11. Check the event logs on both nodes for errors and ensure proper failover and system operation before continuing.
  12. In the services applet on NODE-02, set Cluster Service to disabled.
  13. Install Windows and Exchange updates on NODE-02 and then reboot the node as necessary. Repeat this step after the reboot until all updates are installed.
  14. Check the event log on NODE-02 for errors. If you find any errors, troubleshoot them before continuing.
  15. In the services applet on NODE-02, set Cluster Service to automatic.
  16. Reboot NODE-02 one last time.
  17. Check the event logs on both nodes for errors and ensure proper system operation. You’re done.
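The FC comparison in step 6 can also be done with a short script, which reports the differences directly instead of a raw diff. This is a generic sketch, not part of the original procedure; the KB numbers below are made-up examples, and with real saved lists you'd load each node's text file instead of using inline data.

```python
# Sketch: compare the per-node update lists from step 6.
# Similar in spirit to `FC X:\File1.txt X:\File2.txt`, but it
# reports which updates are present on one node and not the other.

def read_updates(path):
    """Return the set of non-blank lines (one update per line) in a file."""
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}

def diff_updates(node1_updates, node2_updates):
    """Return (only on node 1, only on node 2) as sorted lists."""
    a, b = set(node1_updates), set(node2_updates)
    return sorted(a - b), sorted(b - a)

# Example with inline, made-up KB numbers; with real files you'd
# call read_updates("node01-updates.txt") etc.
node01 = {"KB0000001", "KB0000002", "KB0000003"}
node02 = {"KB0000001", "KB0000003"}
only1, only2 = diff_updates(node01, node02)
print("Only on NODE-01:", only1)  # updates still missing from NODE-02
print("Only on NODE-02:", only2)
```

An empty result in both directions confirms the two nodes received the same exact updates.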

Disaster-proof External Hard Drive – ioSafe Solo

Thursday, April 29th, 2010

2010-04-29 Initial Post

I've never had the need for something like this, so I didn't even know that a product like this existed for consumers. The specs and reviews look decent. The only big issue I see from the specs is that the interface is USB 2.0, which is SLOW. It'd take several hours (probably most of a day) to back up 1 TB over USB 2.0. I haven't checked to see if there are similar products that have a faster interface.
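The back-of-the-envelope math: USB 2.0 signals at 480 Mbit/s, but sustained real-world throughput for bulk storage transfers is typically much lower — the ~33 MB/s figure below is an assumption, not a measured number for this drive.

```python
# Rough estimate: time to copy 1 TB over USB 2.0.
# 480 Mbit/s is the USB 2.0 signaling rate; ~33 MB/s is an assumed
# typical sustained throughput for bulk storage transfers.

size_mb = 1_000_000          # 1 TB in decimal megabytes
throughput_mb_s = 33         # assumed sustained real-world rate

hours = size_mb / throughput_mb_s / 3600
print(f"~{hours:.1f} hours to copy 1 TB")
```

At that assumed rate it works out to roughly eight and a half hours, which lines up with the "several hours, probably most of a day" estimate.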

(more…)