Mark Minasi's Tech Forum
JamesNT · Senior Member · #1
Need to rank these in order of reliability/performance/cost. This is what I have.

1. Direct-Attached Storage using a smart array controller. Example: Dell MD3200. Easiest to set up. Reliable. Fast. Only bad mark is that Dell is pissy about giving out the admin password to fix bad problems. No warranty means a $500 tech support call.

2. Fibre Channel. I rank it second because of the damn cost; this could be first. Probably the fastest on the list.

3.  iSCSI.  Example:  Dell MD3200i.  May as well just go DAS.

4. Storage Spaces. Example: Windows Storage Spaces with a DataON DNS-1600 JBOD. Super cheap compared to the others, even with the licensing changes for Server 2016 (SS now in Datacenter only). But the slowest and the hardest to set up (basic setup sketched below). I'm probably going to ditch this DataON JBOD I have and go back to the MD3200.
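
For reference, the "hard to set up" part boils down to something like this in PowerShell once the JBOD's disks show up as poolable. A sketch only; the pool and volume names are made up, and the size is arbitrary.

    # Minimal Storage Spaces sketch, assuming the JBOD's disks enumerate
    # as poolable SAS disks. Names are invented for illustration.
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "JBODPool" `
        -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
    # Two-way mirror for resiliency; 1 TB picked arbitrarily.
    New-Volume -StoragePoolFriendlyName "JBODPool" -FriendlyName "VMData" `
        -ResiliencySettingName Mirror -FileSystem NTFS -Size 1TB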

Would love to hear other opinions.

JamesNT

__________________
I miss Windows NT 4.0 Service Pack 4.
cj_berlin · Senior Member · #2
James,

Not sure if I can *really* help you, but here are some points:

Re 4: Storage Spaces, Scale-Out File Server etc. are still part of Standard licensing. It's Storage Spaces *Direct* that got moved to Datacenter. If you separate storage and compute, S2D will not be the cheapest option because it's only supported on certified SYSTEMS, not on systems built out of certified COMPONENTS. On the other hand, if you go hyperconverged, i.e. build your storage directly out of your Hyper-V hosts, you would have Datacenter licenses on there anyway. Either way, S2D offers more in terms of redundancy than your other options, so you can't really compare them 1:1. Fairly easy to set up, though, if you read TFM.

More Re 4: an SMB3-capable file server built on Storage Spaces (not Direct) is going to be the cheapest and the most flexible option in terms of adding and removing capacity, both disk space and NIC bandwidth. Supported, too. The share side of it is sketched below.
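
(A sketch only: server, path and computer account names are invented, and -ContinuouslyAvailable assumes the share lives on a clustered file server role rather than a standalone box.)

    # Sketch of the SMB3 share side for Hyper-V over SMB.
    New-Item -Path "V:\Shares\VMs" -ItemType Directory
    # Hyper-V over SMB needs the hosts' *computer* accounts to have access.
    New-SmbShare -Name "VMs" -Path "V:\Shares\VMs" `
        -FullAccess "CONTOSO\HV01$", "CONTOSO\HV02$" -ContinuouslyAvailable $true
    # VMs are then created against the UNC path, e.g. \\FS01\VMs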

Re 3 vs. 2: 10Gb iSCSI will give you approximately the performance of 8Gb FC, and at a similar price point if you need to buy the 10G switches and NICs for your hosts. If you have those already, iSCSI will be cheaper. Personally, I've accepted the fact that iSCSI, inferior from the engineer's standpoint, is winning the race against FC.
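
(Part of why iSCSI wins: the initiator side is trivial. A minimal sketch; the portal address is invented.)

    # Minimal iSCSI initiator sketch; the portal IP is invented.
    Start-Service msiscsi
    Set-Service msiscsi -StartupType Automatic
    New-IscsiTargetPortal -TargetPortalAddress "10.0.0.50"
    # Connect every discovered target, persistently, with multipath enabled.
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true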

Re 1: Shared DAS... oh well. Ask your backup vendor whether they'll support that or have you pump every VM over the LAN. Look at your datacenter design. If your cluster is stretched more than, say, 10 meters in cable length, you're stuck. If your cluster is going to grow beyond four hosts, you're stuck. If you expect the slightest bit of multipathing, you're stuck at two hosts.

The decision is still yours ;-)

__________________
Evgenij Smirnov

My personal blog (German): http://www.it-pro-berlin.de/
My stuff on PSGallery: https://www.powershellgallery.com/profiles/it-pro-berlin.de/
JamesNT · Senior Member · #3
Evgenij,

Thank you as always.  You're quite correct about the limitations of DAS, but I have found it to be by far the easiest to set up. 

Of course, it's no surprise that each approach has its fair share of pluses and minuses.

JamesNT

__________________
I miss Windows NT 4.0 Service Pack 4.
wobble_wobble · Associate Troublemaker Apprentice · #4
I have to say that for performance and reliability, FC is number 1. Partly because it's a fibre fabric, and most people stay away from it even if they think they know a little.
Second, or possibly joint second, is 10GbE iSCSI, but see above: some people think networking is just plumbing...
I'd rate DAS and SS as similar. Both appear easy to set up and configure, both have odd limitations, and both can be misconfigured in ways that affect performance.

What do I use at home? USB 3 and local storage, from which I simulate NFS, iSCSI and Storage Spaces depending on requirements. Most site deployments are 16Gb Fibre Channel (3PAR) or 1Gb iSCSI (MSA). Not seeing a lot of 10GbE.
I'm not a fan of an MS storage solution just yet, and no customer here is interested or has been given the option.

Edited, as my tablet's misspelling is now affecting my phone!

__________________
Have you tried turning it off and walking away? The next person can fix it!

New to the forum? Read this
JamesNT · Senior Member · #5
I'll admit that I haven't run across the limitations of DAS, most likely because I'm still in the SMB world. A two-node Hyper-V cluster is awesome for these guys. If I need more storage space on the MD3200, I just go get an MD1200 to add to it. I have yet to see an SMB go over two cluster nodes.
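
For the record, standing one of these two-node clusters up is only a few commands. A sketch; the host names and cluster IP are made up:

    # Two-node cluster build sketch; names and IP are invented.
    Install-WindowsFeature Failover-Clustering -IncludeManagementTools
    Test-Cluster -Node HV01, HV02            # validate first, every time
    New-Cluster -Name HVCL01 -Node HV01, HV02 -StaticAddress 192.168.1.50
    # The shared MD3200 LUNs then come in as cluster disks / CSVs:
    Get-ClusterAvailableDisk | Add-ClusterDisk | Add-ClusterSharedVolume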

Once I hit Enterprise, I could see myself going iSCSI.

JamesNT

__________________
I miss Windows NT 4.0 Service Pack 4.
wobble_wobble · Associate Troublemaker Apprentice · #6
James, these are Veeam throughput numbers that I've captured on a site with Fibre Channel storage in a metro cluster, with the backup server and backup appliance in one of the datacenters.

We've seen bigger, and 800MB/s is common.
The last backup is a full backup of the domain controllers, which are always slow :-)

[Attached image: Veeam Throughput.JPG]


__________________
Have you tried turning it off and walking away? The next person can fix it!

New to the forum? Read this
JamesNT · Senior Member · #7
Very cool! But you do realize SMBs will never pay for Fibre Channel. GE, on the other hand, used it all the time.

JamesNT

__________________
I miss Windows NT 4.0 Service Pack 4.
wobble_wobble · Associate Troublemaker Apprentice · #8
I know.
Sometimes SMEs won't even spring for a decent backup.

__________________
Have you tried turning it off and walking away? The next person can fix it!

New to the forum? Read this
JamesNT · Senior Member · #9
Don't get me started on that. The vast majority don't care about backup at all. I've already heard stories of some local businesses going under after the hurricane because they had no backups.

JamesNT

__________________
I miss Windows NT 4.0 Service Pack 4.
Infradeploy · Senior Member · #10
Has anyone done anything with VSA yet? I was testing it in a POC and it looked a bit shaky to me.

https://www.hpe.com/us/en/storage/storevirtual.html

__________________
Have SpaceSuit, Will Travel

wobble_wobble · Associate Troublemaker Apprentice · #11
Ton, yes.
What issues did you see?
There is a freebie version with 1TB of capacity.
It is dependent on the server hardware you run it on, but we use them as storage resources in smaller environments where the customer wants a DR storage solution but doesn't want to dump old kit.

__________________
Have you tried turning it off and walking away? The next person can fix it!

New to the forum? Read this
Infradeploy · Senior Member · #12
The sync between the disks did seem laggy, but I couldn't know for sure because monitoring of that process was lacking. Disk I/O seemed to be lower for the VMs hosted on the virtual disks.
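
(For anyone in the same spot: one way to at least see when it hurts, since the appliance's own monitoring is thin, is to watch disk latency from the Windows side. A sketch; the ~25 ms threshold is just a rule of thumb, not an official number.)

    # Sample physical-disk latency; flag anything over ~25 ms (arbitrary threshold).
    Get-Counter -Counter "\PhysicalDisk(*)\Avg. Disk sec/Read",
                         "\PhysicalDisk(*)\Avg. Disk sec/Write" `
                -SampleInterval 5 -MaxSamples 12 |
        ForEach-Object { $_.CounterSamples | Where-Object CookedValue -gt 0.025 }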

Immediately got suspicious when the whole thing was so easy to set up [smile]

__________________
Have SpaceSuit, Will Travel

wobble_wobble · Associate Troublemaker Apprentice · #13
Last first.

Yes, it is easy, and the failover is as well.
The solution is built to be easy!

Well, the notion is that the storage is a CentOS image running LeftHand, and it's provisioned against VMDKs/VHDs on your storage, so you will take a performance hit. Its biggest selling point, which you seem to have configured, is Network RAID.
There is a Failover Manager piece you can get that has more management features.
I'll go look and see what we are checking/monitoring.

__________________
Have you tried turning it off and walking away? The next person can fix it!

New to the forum? Read this
Wes · Senior Member · #14
There's some incorrect information here.  Shared SAS does *not* mean you're limited to multipathing on 2 hosts.

The MD3200 James mentioned, for example, can talk to 4 multipathed hosts. I have a number of these in production; despite their quirks they perform great (as long as you configure your disks properly) and there is no tweaking involved... instant 6Gb SAS!

If, unlike me, you have *real* budget, you can do the newer models, MD34xx and up, and get 12Gb SAS with no fuss.

Between a couple of MD3200s, an MD1200 hanging off one of them, and a couple of 10GBASE-T ports in each server connected to a couple of stacked HP switches with 10GBASE-T ports, we have some really nicely performing 2-node clusters.
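
(The multipathing bit is mostly just turning MPIO on for SAS on each node. A sketch; expect a reboot after enabling the feature.)

    # Per-node MPIO setup for shared SAS.
    Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO
    # Claim SAS-attached devices for MPIO:
    Enable-MSDSMAutomaticClaim -BusType SAS
    # Sanity check: each MD3200 LUN should show two paths per host.
    mpclaim -s -d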
cj_berlin · Senior Member · #15
Quote:
Originally Posted by Wes
There's some incorrect information here.  Shared SAS does *not* mean you're limited to multipathing on 2 hosts.


Well, OK, they can have two controllers. So you're limited to 4 hosts if you multipath or to 8 hosts if you don't :-) (four SAS host ports per controller, eight in total, if I remember the MD3200 correctly). Still, you have a hard limit on possible paths per array.

__________________
Evgenij Smirnov

My personal blog (German): http://www.it-pro-berlin.de/
My stuff on PSGallery: https://www.powershellgallery.com/profiles/it-pro-berlin.de/