Mark Minasi's Tech Forum
Michael Pietrzak

New Friend (or an Old Friend who Built a New Account)
Registered:
Posts: 51
#1
Hi folks,

I want to unify my small department's storage and have been looking at some small to mid-size iSCSI SAN products.

From my research, it looks like most implementations involve connecting the SAN's Ethernet connections to a switch. Now, at my place of work I do not have access to the networking equipment, so I cannot configure the switch if needed.

But all of my servers are in close proximity (10 feet or less), so the SAN would be close as well.

The SAN I am looking at has eight Ethernet ports on the back of it. Question: can I directly connect a server to a SAN Ethernet port that has been set to serve a specific virtual disk on the SAN?
wobble_wobble


Associate Troublemaker Apprentice
Registered:
Posts: 810
#2
Michael

A few things first.
Yes, iSCSI uses Ethernet/Cat5e/Cat6 cables.
But I would advise against a direct connection. You need failover, as otherwise you will have "all of your eggs in one basket".

So generally you use 2 different NICs on the Server to connect to 2 different switches, that connect to 2 different management ports on the Storage Array. See the attached picture.
This way you can suffer a NIC failure, switch failure, Storage Array Controller failure, cable failure etc.
You sometimes see Jumbo Frames enabled for storage (9000 bytes in a frame instead of 1500 bytes in a frame), but this is for specific instances.
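To put rough numbers on why jumbo frames help, here's a quick sketch. The 38 bytes of on-wire Ethernet overhead (preamble, header, FCS, inter-frame gap) and 40 bytes of IP/TCP headers are standard figures, not specific to any array:

```python
# Rough payload-efficiency comparison for standard vs. jumbo frames.
ETH_WIRE_OVERHEAD = 38   # preamble + Ethernet header + FCS + inter-frame gap
IP_TCP_HEADERS = 40      # 20 bytes IP + 20 bytes TCP per frame

def payload_efficiency(mtu: int) -> float:
    payload = mtu - IP_TCP_HEADERS          # TCP payload carried per frame
    wire_bytes = mtu + ETH_WIRE_OVERHEAD    # bytes actually on the wire
    return payload / wire_bytes

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {payload_efficiency(mtu):.1%} payload efficiency")
```

That works out to roughly 95% efficiency at 1500 and 99% at 9000 - a modest gain, which is why jumbo frames only matter for sustained storage traffic, and only if every device in the path supports them.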

With regard to the SAN, we need more specific info - make, model, controller type - as there are many, many different SANs.

Generally the 8 ports are probably 4 on one controller and 4 on another controller.
At least 2 of those (1 per controller) will be management ports for the SAN so you can monitor, configure and see what's going on.
2 ports per controller will be for connectivity to the 2 switches I mentioned above.
The other port could be replication/ copy or some other function - need specific info for the Array.

More info for going forward as you move up the IT food chain: a Storage Array or Disk Array is specifically what you're talking about here. A SAN is a Storage Area Network and can include how the systems communicate, the protocols and the connectivity type.

SAN Failover.jpg 




__________________
Have you tried turning it off and walking away? The next person can fix it!

New to the forum? Read this
Michael Pietrzak

New Friend (or an Old Friend who Built a New Account)
Registered:
Posts: 51
#3
Hi Joe,

Thanks for posting! It is wonderfully useful.

I guess I will keep researching to determine if that level of hardware is what we want versus need. The model I am looking at is the Promise Technology Vess R2XXX series.

The model in particular has 2 10Gb SFP+ (?) ports and 4 1Gbps iSCSI ports.

I am looking to unify our storage in the most efficient way possible. Right now it's a bunch of JBODs hung off various servers.

I am using the old StarWind iSCSI software to make some of the storage available to other hosts.

Do you have thoughts as to a more efficient solution?
wobble_wobble


Associate Troublemaker Apprentice
Registered:
Posts: 810
#4
Few questions before you go much further.
How much space are you looking to provide?
How many Disk IOPS do you need?
What RAID level do you want/would like?

Before you decide what to buy, get the answers to the above.
The MS MAP Toolkit will get 2 of those answers - https://www.microsoft.com/en-IE/download/details.aspx?id=7826
The Disk IOPS and the space needed.
The RAID type - well, that's a function of maths, disk capacity, disk count and required redundancy.
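As a sketch of that maths - the disk count and per-disk capacity below are made-up examples, not a recommendation:

```python
# Back-of-the-envelope usable capacity for common RAID levels.
def usable_tb(level: str, disks: int, disk_tb: float) -> float:
    if level == "RAID10":
        return disks // 2 * disk_tb     # mirrored pairs: half the raw space
    if level == "RAID5":
        return (disks - 1) * disk_tb    # one disk's worth of parity
    if level == "RAID6":
        return (disks - 2) * disk_tb    # two disks' worth of parity
    raise ValueError(f"unknown RAID level: {level}")

# Example: 8 x 2TB disks.
for level in ("RAID10", "RAID5", "RAID6"):
    print(level, usable_tb(level, disks=8, disk_tb=2.0), "TB usable")
```

So the same 8 x 2TB shelf gives you 8TB usable as RAID10 but 14TB as RAID5 - which is exactly why you need the IOPS and redundancy answers before picking a level on capacity alone.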

Once you decide on a solution, especially if you're the only one supporting this, test, test and test again.
You won't have the opportunity after it goes into production.

10Gb is expensive; you need 10Gb HBAs, cables and switches. Depending on the switches you may need 10Gb SFPs as well.
Fibre Channel is almost on a par with 10Gb on price, but it's a different protocol set and network config. It tends to be less problematic, because only a few people know the secret sauce to make it work (as opposed to Ethernet) and not a lot of people want to plug anything into a fibre switch.

Home lab discussion on 10Gb networking - https://www.reddit.com/r/homelab/comments/3k545j/cheapest_10gb_network/ 

iSCSI will use Cat5e/Cat6 cables and standard Ethernet switches.
It's relatively cheap in comparison to the other main methods.

You can do some testing on this with the HPE VSA (Virtual Storage Appliance), which comes with a free 1TB 3-year license:
http://www8.hp.com/ie/en/products/data-storage/free-vsa.html
All you need is an old server with disks

I've some experience with the Promise array.
It was a mixed Fibre/iSCSI array.
Overall the device was very cost efficient.
You need to test the device, failed disks and hot swap - this is really important.
Message me directly if you want more info on that deployment.




wobble_wobble


Associate Troublemaker Apprentice
Registered:
Posts: 810
#5
Oh, many thanks to Ultan and Evgenij and all the others who started me on the storage path a few years ago.
They answered all my questions and more - so hopefully I'm repaying their faith.


cj_berlin


Senior Member
Registered:
Posts: 227
#6
Hi,

Late to the party, but a few things may be worth considering.

1. A direct Ethernet connection is always a bad thing because of the way Ethernet works (the CSMA/CD protocol). If we are talking about Gig links, iSCSI will saturate them in most cases, which leads to collisions which, in turn, decrease the very bandwidth there is to saturate. A switch will usually implement some kind of hold-off algorithm that helps minimise collisions to a degree.
2. Seeing as you obviously are in a position to buy a SAN array, you could just buy a switch as well and be done with it. It would be your "SAN switch" then. Place it among your servers and stick a "SAN switch - DO NOT PLUG ANYTHING IN!!!" label on it. You would want to separate your iSCSI traffic from production traffic anyway, even if you were to go through your company's core switches.
3. Aggregating multiple ports on SAN array / switch / servers: In my experience, storage multipathing has proven way more robust and in most cases slightly better performing than LACP or any other form of network trunking.
4. Jumbo Frames will help take the load off switches and NICs, probably the SAN as well. I have measured between 3% and 20% increase in performance, depending on overall load.
5. RAID level. Well, this, of course, is a tricky one. You really REALLY need to know your overall I/O profile before you decide that it is safe to go with a parity based RAID level like 5 or 6. The only RAID that will give you predictable results is RAID10 which, of course, is rather expensive in terms of overhead (your effective capacity is half your disk capacity). But read some of the articles referenced under http://baarf.com/ and decide for yourself.
6. Free iSCSI appliance installable on your own hardware: this one is pretty cool as well and has a 2TB non-expiring license. It's been around for over 10 years, too: http://www.open-e.com/products/data-storage-software-v7-soho/

@Joe: Thanks for mentioning, mate. But that's what we used to do on this forum and what we'll hopefully continue to do now that it's back.

__________________
Evgenij Smirnov

My personal blog (German): http://www.it-pro-berlin.de/
My stuff on PSGallery: https://www.powershellgallery.com/profiles/it-pro-berlin.de/