Mark Minasi's Tech Forum
JamesNT

Senior Member
Registered:
Posts: 147
#1
Everything is Windows Server 2012 R2.

Two Dell R710 servers with 4-port 1G Broadcom adapters.

Question: Can I use one port for the host and Hyper-V virtual machines, and then team the other three ports together for live migration and other cluster traffic? If so, what kind of team settings should I use?

10G is out of the question for now from a pricing standpoint, but I'm still trying to get live migration to move as quickly as possible.

JamesNT

__________________
I miss Windows NT 4.0 Service Pack 4.
donoli

Senior Member
Registered:
Posts: 598
#2
If the VMs run non-Windows operating systems, there could be a problem. Linux/Unix users dislike Broadcom because of driver compatibility issues.
JamesNT

Senior Member
Registered:
Posts: 147
#3
Guests are all Windows. Keep in mind this is to speed up live migration.

JamesNT

__________________
I miss Windows NT 4.0 Service Pack 4.
dennis-360ict

New Friend (or an Old Friend who Built a New Account)
Registered:
Posts: 94
#4
Your setup is possible, but I would recommend using one NIC for the cluster/host traffic. In theory, though, you can use all four NICs as one big team and just set up QoS. If you like that suggestion, I would recommend reading up on it here: http://www.aidanfinn.com. Although I would describe him as an MS fanboy (one of the few), he's independent and I find his info real-world reliable.

He also has some recommendations on the different Hyper-V team setups. The default setup is switch independent with Dynamic load distribution, but I've had some issues with Dynamic. If you want to play it safe you can use "Hyper-V Port", although that limits load distribution to per-VM, so it's less ideal. In the real world, though, I haven't seen many VMs that would consume a whole NIC (bandwidth and CPU), so the impact is less than you would think.

Anyway, read up, I would say! And I'm curious how you will proceed, so as a courtesy, could you describe the steps you take in this post?
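
As a rough sketch of what that team setup looks like in PowerShell on 2012 R2 (adapter and team names here are placeholders; check Get-NetAdapter for your actual names):

```powershell
# Three-port LBFO team for cluster traffic, using the defaults
# dennis mentions: switch independent + Dynamic load distribution.
New-NetLbfoTeam -Name "ClusterTeam" -TeamMembers "NIC2","NIC3","NIC4" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# If Dynamic misbehaves, fall back to Hyper-V Port (per-VM distribution):
Set-NetLbfoTeam -Name "ClusterTeam" -LoadBalancingAlgorithm HyperVPort
```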


__________________
-----
Home is where is sleep
360ict.nl/blog
thegood.cloud
Wes

Senior Member
Registered:
Posts: 233
#5
James!

I may be totally wrong, but I don't think Windows teaming is going to get your LM going any faster; I never saw better than gigabit speeds doing so. 10gig is cheaper than you think: $300 on fleabay will get you a couple of adapters you can connect directly, since you only have two hosts. I used to do it that way and it worked great.
JamesNT

Senior Member
Registered:
Posts: 147
#6
Wes,

Can you elaborate on connecting directly? Do you mean to tell me I wouldn't need a switch at all, just direct-plug the two NICs to each other with one cable???

JamesNT

__________________
I miss Windows NT 4.0 Service Pack 4.
Wes

Senior Member
Registered:
Posts: 233
#7
That's exactly what I mean. If you get 10GBase-T NICs, all you need is a Cat5e or Cat6 (if you're feeling fancy) patch cable. That's the beauty of two-node clusters.
Wes

Senior Member
Registered:
Posts: 233
#8
Of course you can do it with SFP+ NICs too; the direct-connect cable is just a bit more exotic, but even then not too pricey.
JamesNT

Senior Member
Registered:
Posts: 147
#9
Well, that's just AWESOME-SAUCE! All this time I was thinking I needed a switch, and the switch was the most expensive part: $400 was the cheapest I could find. I take it I just assign the NICs IP addresses as normal (no DNS or default gateway specified) and just keep on trucking?

JamesNT

__________________
I miss Windows NT 4.0 Service Pack 4.
Wes

Senior Member
Registered:
Posts: 233
#10
Yep. You can also make it a virtual switch and carve it up into virtual adapters that you VLAN and address separately, and use it for both LM and CSV traffic if you like.
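
Something like this, as a sketch of that converged setup (switch, adapter, VLAN, and subnet names are all made up; both nodes would need matching VLAN tags on the back-to-back link):

```powershell
# Virtual switch on the 10G adapter; management OS gets vNICs, not the NIC itself.
New-VMSwitch -Name "10G-Switch" -NetAdapterName "10G-NIC" -AllowManagementOS $false

# Carve out host vNICs for live migration and CSV traffic:
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "10G-Switch"
Add-VMNetworkAdapter -ManagementOS -Name "CSV" -SwitchName "10G-Switch"

# Put each vNIC on its own VLAN and address it; no gateway or DNS needed:
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 10
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "CSV" -Access -VlanId 20
New-NetIPAddress -InterfaceAlias "vEthernet (LiveMigration)" -IPAddress 10.0.10.1 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (CSV)" -IPAddress 10.0.20.1 -PrefixLength 24
```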
JamesNT

Senior Member
Registered:
Posts: 147
#11
I knew that OSI model was more trouble than it was worth when I first saw it in college.

Thanks, Wes! I think I'll just stick a 10G adapter in each host and dedicate it to cluster-only traffic. Like I said, I just want live migration of a SQL Server that uses 8G of RAM to take only a few seconds as opposed to almost an entire minute (not to mention the rest of the virtual machines; there are five of them). The 1G adapters the servers come with will suffice for all other traffic.
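
For anyone following along, the last step would be pinning live migration to the new 10G link, roughly like this (the subnet is hypothetical, and in a failover cluster the migration network preference is ultimately governed by the cluster's network settings):

```powershell
# Enable live migration and restrict it to the direct 10G subnet:
Enable-VMMigration
Add-VMMigrationNetwork 10.0.10.0/24
```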

JamesNT

__________________
I miss Windows NT 4.0 Service Pack 4.