njbignell

Performance in a Blade/SAN/VMware Environment


I'm about to implement a Blade system to consolidate the majority of our physical servers along with a SAN to overcome our current fragmented DAS based environment.

 

Everything is looking great, but the one thing I can't get my head around is how we're supposed to get decent speed out of a 10GbE iSCSI SAN compared to DAS when the bandwidth is going to be shared between around 15 different servers.

 

We will have a 2-node SQL cluster and an Exchange box with their data on the SAN (both physical), and the blades will be pooled into an ESX cluster with around 15 guests, with all datastores also on the SAN.

 

Am I missing something or is 10Gbps plenty to handle the traffic from all these servers?
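
For a rough sense of the raw numbers, here's a quick back-of-envelope sketch in Python. The 80% figure for iSCSI/TCP overhead is an assumption for illustration, not a measured value:

# Worst case: all 15 servers hammer the single 10GbE iSCSI link at once.
LINK_GBPS = 10            # one 10GbE iSCSI link
EFFICIENCY = 0.8          # assumed usable fraction after TCP/iSCSI overhead
SERVERS = 15

usable_mb_s = LINK_GBPS * 1000 / 8 * EFFICIENCY   # ~1000 MB/s usable
per_server = usable_mb_s / SERVERS                # even split, everyone flat out

print(f"Usable link throughput: ~{usable_mb_s:.0f} MB/s")
print(f"Worst-case even split : ~{per_server:.0f} MB/s per server")

Even in the unrealistic case where every server is flat out at the same moment, each still gets roughly 65-70 MB/s of the link.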


Is your DAS FC? If not, what is it?

 

What vendor are you going with?

By DAS I mean the internal SAS drives in each of the servers, so maybe that's the wrong term? Last time I checked, the theoretical maximum for those is 3Gbps.

 

We're going with HP. The bundle we're looking at can be found in this PDF. It's the Enterprise Virtualization Infrastructure Bundle on the right.
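
To put that 3Gbps SAS figure in context, here's a rough comparison of a single 3Gb/s SAS lane against a shared 10GbE iSCSI link (the 80% iSCSI efficiency factor is an assumption):

# 3Gb/s SAS uses 8b/10b encoding, so ~300 MB/s usable per lane.
sas_lane_mb_s = 3000 / 10
# 10GbE raw is 1250 MB/s; assume ~80% is usable after iSCSI/TCP overhead.
iscsi_mb_s = 10 * 1000 / 8 * 0.8

print(f"One 3Gb/s SAS lane : ~{sas_lane_mb_s:.0f} MB/s")
print(f"Shared 10GbE iSCSI : ~{iscsi_mb_s:.0f} MB/s")
print(f"Ratio              : ~{iscsi_mb_s / sas_lane_mb_s:.1f}x a single local lane")

So the shared link has a few times the bandwidth of a single local SAS lane, and in practice a handful of local spindles rarely saturate that lane anyway.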


Am I missing something or is 10Gbps plenty to handle the traffic from all these servers?

What you're missing is that it's very unlikely all 15+ of your servers will ever hit the storage at full pelt simultaneously.
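
A crude illustration of that point: if each server bursts hard only a small fraction of the time, the expected aggregate demand stays well under the link capacity. The peak rate and duty cycle below are made-up assumptions purely for illustration:

SERVERS = 15
PEAK_MB_S = 200       # assumed per-server burst rate
DUTY_CYCLE = 0.10     # assumed fraction of time each server is bursting
LINK_MB_S = 1000      # usable 10GbE throughput from the earlier sketch

expected_demand = SERVERS * PEAK_MB_S * DUTY_CYCLE
print(f"Expected aggregate demand: ~{expected_demand:.0f} MB/s "
      f"of ~{LINK_MB_S} MB/s available")

With those numbers the steady-state demand is around 300 MB/s, leaving plenty of headroom for the occasional overlapping burst.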


DAS can also mean directly attached external storage, using fibre or the like.

 

You should be fine.

I know a guy who has an Oracle RAC cluster, multiple Windows servers, and multiple Linux servers all using the one EqualLogic setup over the same-sized link with no worries.

 

Like SquallStrife mentioned, you won't max out every server's throughput at once.


Yeah, won't be a problem.

 

We're building high-density data centers: around 1,000 VMs on blade servers per containerised DC, with all storage on 3PAR SANs. I'd originally provisioned for 100Gb/s Ethernet, and have still allowed for it in the future once the standard settles, but 3PAR have a lot of experience with this, and through a series of real-world scenarios and some fairly involved calculations we soon saw that even at our densities FC is more than sufficient.

 

DCs in the cloud are a little different, but 15 servers sharing your SAN will have an easy time of it; it should fly along.

 

Cheers


Seriously, that LeftHand is pretty poor.

 

I mean, it'll work, but don't expect line rate out of those ports.


Seriously, that LeftHand is pretty poor.

Have you got any suggestions? Our budget is around $100K for an entire consolidation platform, including all the licensing and support.


I can provide a contact at a good HP reseller if you want; chances are they could get HP in to help you spec it out.

They managed to get us a 40% discount on some recent kit (blades + EVA + networking), so I can highly recommend them.


I can provide a contact at a good HP reseller if you want; chances are they could get HP in to help you spec it out.

They managed to get us a 40% discount on some recent kit (blades + EVA + networking), so I can highly recommend them.

That would be great! At the moment I'm really just trying to hash out a broad design and get some pricing so I can put together a summary and get initial approval. Then I'll most likely consult with someone on the exact components and modules I'll need.

