Mark Minasi's Tech Forum
pdania

New Friend (or an Old Friend who Built a New Account)
Registered:
Posts: 11
#1

Hi Gents,

I've been using Hyper-V Core exclusively in my home lab since I started working for this company. I use 2 x HP Elite 8300 boxes as Hyper-V hosts, each booting from a removable USB pen drive. Each box has 32 GB of RAM and enough disk space to run VMs on local disks.

Recently, I decided to play with clustering, so I set up a third box (same hardware as above) packed with enough disks to add up to 4 TB. It runs Windows Server 2016 Datacenter edition in eval mode, configured as an iSCSI Target Server. I set up a 400 GB disk as the quorum disk and carved off 2 TB of the rest for shared storage; both Hyper-V hosts can see the shared storage.

Both Hyper-V hosts are now successfully clustered and have access to the shared storage described above.

I want to move the VMs that are currently sitting on each Hyper-V host's local storage to the shared storage, but I'm not able to do this. Is it not achievable? Am I trying to do something that isn't possible?
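
In case it's useful to anyone reading later, the target-server side described above can be scripted roughly like this. A sketch only, not my exact commands; the paths, target name, portal address and initiator IQNs are placeholders:

# Sketch: Windows Server 2016 as an iSCSI target for the two Hyper-V hosts
Install-WindowsFeature FS-iSCSITarget-Server -IncludeManagementTools

# One target that both hosts are allowed to connect to (IQNs are placeholders)
New-IscsiServerTarget -TargetName "HVCluster" -InitiatorIds "IQN:iqn.1991-05.com.microsoft:hv-node1","IQN:iqn.1991-05.com.microsoft:hv-node2"

# A 400 GB witness/quorum disk and a 2 TB data disk, both VHDX-backed
New-IscsiVirtualDisk -Path "D:\iSCSIVirtualDisks\Quorum.vhdx" -SizeBytes 400GB
New-IscsiVirtualDisk -Path "D:\iSCSIVirtualDisks\Shared01.vhdx" -SizeBytes 2TB
Add-IscsiVirtualDiskTargetMapping -TargetName "HVCluster" -Path "D:\iSCSIVirtualDisks\Quorum.vhdx"
Add-IscsiVirtualDiskTargetMapping -TargetName "HVCluster" -Path "D:\iSCSIVirtualDisks\Shared01.vhdx"

# On each Hyper-V host, start the initiator service, point it at the target box and connect persistently
Set-Service MSiSCSI -StartupType Automatic
Start-Service MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress "192.168.1.50"
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true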

cj_berlin


Senior Member
Registered:
Posts: 406
#2
Hi,
what kind of errors are you getting? This should be easily doable.

__________________
Evgenij Smirnov

My personal blog (German): http://www.it-pro-berlin.de/
My stuff on PSGallery: https://www.powershellgallery.com/profiles/it-pro-berlin.de/
pdania

New Friend (or an Old Friend who Built a New Account)
Registered:
Posts: 11
#3
I'm trying to accomplish this by migrating the VM storage to the cluster volume, but I can only do it from one of the nodes in the cluster. For some reason, the other node can't 'see' the same volume; it shows up as 'Reserved' and offline in Disk Management. When I try to bring it online, it comes up with the message: "The specified disk or volume is managed by the Microsoft Failover Clustering component. The disk must be in cluster maintenance mode and the cluster resource status must be online to perform this operation."

Obviously I shut down my lab machines at night; perhaps I'm not turning them on in the right order? Should it be cluster nodes first, then storage, or the other way round?
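
For reference, this is a quick way I've found to see which node currently owns each cluster disk (a sketch; run it on either node, and the resource names will be whatever your cluster called them):

# Show each physical-disk resource, its state and which node currently owns it
Get-ClusterResource | Where-Object { $_.ResourceType -like "Physical Disk" } |
    Format-Table Name, State, OwnerNode, OwnerGroup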
cj_berlin


Senior Member
Registered:
Posts: 406
#4
What you are seeing is correct. Only one host can actively access a shared disk at a time; the other nodes use redirected I/O. You need to use Failover Cluster Manager, not Hyper-V Manager, to do the move.
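
From PowerShell the same move looks roughly like this (a sketch; the VM name and path are placeholders, and it assumes the shared disk has been added as a Cluster Shared Volume):

# Move the VM's configuration, disks and checkpoints onto the cluster volume
Move-VMStorage -VMName "TestVM01" -DestinationStoragePath "C:\ClusterStorage\Volume1\TestVM01"

# Then register the VM as a clustered role so it can fail over between nodes
Add-ClusterVirtualMachineRole -VMName "TestVM01"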
__________________
Evgenij Smirnov

My personal blog (German): http://www.it-pro-berlin.de/
My stuff on PSGallery: https://www.powershellgallery.com/profiles/it-pro-berlin.de/
pdania

New Friend (or an Old Friend who Built a New Account)
Registered:
Posts: 11
#5
Thanks, cj_berlin, for your response; I greatly appreciate it.

I have gone a bit further with my tests and understand the concepts a bit better now. As a result, my home lab now has a permanent two-node cluster connected to a Server 2016 iSCSI shared storage box. I've configured a Cluster Shared Volume on the shared storage (allowing both nodes to access it at the same time), and that actually saved me this weekend: I lost one of the nodes to a power supply failure, and the VM it was hosting failed over cleanly to the working node.

Interestingly, my Hyper-V failover cluster still has all the default configuration, as I'm yet to delve deep into that side of things, but the fact that the VM failed over cleanly has really piqued my interest in perhaps using this solution for hosting the VMs in our datacenter dev estate, where we have over 500 VMs spread across more than 10 Hyper-V hosts. Looking at our environment, if a Hyper-V host fails, all 50 to 60 of its VMs are completely inaccessible until we fix it!
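
For anyone wanting to reproduce the lab side of this, the CSV setup boils down to something like the following (a sketch; the disk, VM and node names are placeholders):

# Convert the shared cluster disk into a Cluster Shared Volume (appears on every node under C:\ClusterStorage)
Add-ClusterSharedVolume -Name "Cluster Disk 2"
Get-ClusterSharedVolume        # both nodes should report the volume online

# A planned failover can also be rehearsed without waiting for a power supply to die
Move-ClusterVirtualMachineRole -Name "TestVM01" -Node "HV-NODE2"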

We spend thousands of pounds on very high-spec machines (we use Hyper-V Core), fill each one with VMs, and then buy another. I think this is wasteful and not cost-effective in the long run.

Using Fibre Channel for shared storage is a no-go due to the cost, but I'm not sure whether iSCSI shared storage is suitable for VMs in production. I'm also assuming it'll be much cheaper than FC, so if anyone has some real-world experience and is willing to point me in the right direction, I'll be very grateful.

The majority of our dev servers don't require much in terms of resources, except for the database servers. I can provide more info about the types of VMs we deploy if required.

Thanks for your assistance in advance.

dennis-360ict

New Friend (or an Old Friend who Built a New Account)
Registered:
Posts: 91
#6
We've tried Fibre Channel and iSCSI for the same goals as you. In the end we went back to direct-attached storage, i.e. SAS adapters, just as we did in the '90s. This seems like a step back, but it works really well for us:
- With iSCSI you have to have your network stack really well under control (which we mostly did, but in some cases it gave us headaches knowing where to look and required our network monitoring to be fully on top of things).
- SAS storage devices are cheap compared to equally performant iSCSI kit (at least at Dell, where we shop).
- SAS storage devices like the MD and SC series can connect to 8 servers, or 4 servers with redundant connections, which is enough for our case. Because of the outage when a cluster fails (for whatever reason, be it hardware or a Windows update), we don't want a larger cluster than that, so we have about 3 clusters running.
- The MD is not very performant, but it's very cheap, so it's suitable for dev/test.
- The MD/SC can be extended easily with more disks/sub-chassis, so MD1xxx enclosures can be added to an MD3xxx.
- The MD is EOL, so the SC is preferred, although it's a bit more expensive. But it has lots of software options, so it's really flexible and allows for multi-datacenter deployments, etc.

So, just my 2 cents.

__________________
-----
Home is where I sleep
360ict.nl/blog
thegood.cloud
cj_berlin


Senior Member
Registered:
Posts: 406
#7
Well,

iSCSI sort of got a bad rap initially, mainly for two reasons:
  1. People who built it weren't prepared to treat the iSCSI network the way they would an FC network (separate switches, no routing, careful configuration, capacity planning), so they ended up running iSCSI over the same network as database and VoIP traffic, and storage traffic got deprioritized and therefore slow and flaky.
  2. There are lots of cheap iSCSI boxes with slow SATA disks and shitty NIC drivers, but you're not very likely to find a cheap FC box. So iSCSI kind of became 'the protocol for cheap boxes'.

A well-specced and well-built iSCSI SAN will cost you as much as FC and deliver the same performance and path resilience. If you start taking shortcuts to save money, with FC you'll have to give up or tap into the refurbished market really quickly; with iSCSI you can go all the way down in price - AND in performance and stability.

That all said, if the VMs you run are Windows Servers, you've got to have Datacenter licensing attached to your Hyper-V hosts anyway, and seeing as they're apparently full of disks now, you could build a Storage Spaces Direct hyperconverged cluster out of them. That way you would combine locally attached disks with VM mobility and cluster resilience. You might have to add some SSDs in there, though. The network plays an important role here as well; read up on RDMA, iWARP and RoCE.
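
Roughly, a hyperconverged S2D build looks like this (a sketch only; the node, cluster and volume names are placeholders, and every node needs Datacenter):

# Validate and build the cluster without claiming any shared storage
Test-Cluster -Node HV01,HV02,HV03 -Include "Storage Spaces Direct","Inventory","Network","System Configuration"
New-Cluster -Name HVC01 -Node HV01,HV02,HV03 -NoStorage

# Pool every eligible local disk on every node; SSDs/NVMe become the cache tier if present
Enable-ClusterStorageSpacesDirect

# Carve a CSV-backed volume for the VMs out of the pool
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "VMs01" -FileSystem CSVFS_ReFS -Size 2TB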


__________________
Evgenij Smirnov

My personal blog (German): http://www.it-pro-berlin.de/
My stuff on PSGallery: https://www.powershellgallery.com/profiles/it-pro-berlin.de/
pdania

New Friend (or an Old Friend who Built a New Account)
Registered:
Posts: 11
#8

Hi cj_berlin, thanks for your reply. I suppose Storage Spaces Direct will be a consideration in the future, but at the moment most of our servers are running Hyper-V Core 2012 R2 booting off pen drives. We only have 2 x Dell PowerEdge R540 as single hosts with local storage; they are both running Hyper-V Core 2019 Standard on a mirrored pair of SSDs (we're moving away from booting off USB pen drives), which also rules out Storage Spaces Direct, as I believe you can only do it with the Datacenter edition of Server 2016.

However, we're planning to purchase new kit; that's what my quest is about. I want us to buy wisely, hence the question. Tomorrow we have a call with a Dell presales representative, so we'll see.

dennis-360ict

New Friend (or an Old Friend who Built a New Account)
Registered:
Posts: 91
#9
We tried Storage Spaces Direct (S2D, since SSD was already taken) and hired hardware for our PoC. These weren't cheap boxes: NVMe drives at 5k a piece (eight of them for 4 nodes), mixed-use SSDs and SAS as the capacity tier, dedicated 10Gb switches, a 40Gb RDMA network, etc. The PoC succeeded, but in production it was HELL. Under the heavy load of our Btrieve databases, we had a lot of unexpected and unexplainable results. It's just too big a black box, so it's guesswork what the system is doing when you run into problems. We've had MVPs look at it and they came to the same conclusion.

We also run Ceph as an open-source storage platform; it supports the same techniques as S2D, but the Ceph people advise against using them because of the unexpected results you can get. And the Ceph people haven't even heard of S2D.

I think S2D is fine for general deployments, but it's too big a black box when you have heavy load. So I would urge caution with S2D.

We blogged about our experience here: https://www.360ict.nl/blog/category/s2d/

__________________
-----
Home is where I sleep
360ict.nl/blog
thegood.cloud
pdania

New Friend (or an Old Friend who Built a New Account)
Registered:
Posts: 11
#10
Many thanks. I've just had a meeting with the presales guy from our supplier. From our discussion, I think we're going to settle on a Dell PowerVault storage unit with 4 x external SAS connectors. This will serve as shared storage for 2 Hyper-V nodes to start with; in future we can add another 1 or 2 Hyper-V nodes to the cluster depending on requirements. We've started using Azure now, so the VM provisioning requirement is being shared with our Azure estate. First, I need to use Live Optics to produce some stats on our current estate so they have an idea of how to spec up the 2 nodes and the storage unit.

Thanks guys for all your contributions; I'll keep posting as this develops.
dennis-360ict

New Friend (or an Old Friend who Built a New Account)
Registered:
Posts: 91
#11
Is that the MD or the SC PowerVault version? I wouldn't recommend the MD for production with heavy load; we had the MD3420 with an MD1200 behind it, with only SSDs in RAID 10, and we weren't impressed by the performance. The MD is also EOL even though they still sell it.
__________________
-----
Home is where I sleep
360ict.nl/blog
thegood.cloud