Howdy,
What I would be asking is -
What is the underlying requirement in this "mini-datacentre" build?
A - Applications
B - Infrastructure building blocks
C - Connectivity
D - Dependencies
How many hosts / ports / storage?
What do the traffic volumes and application spread look like today and how are they expected to ramp up?
Do you just want a Layer2 fabric with routing in the "core/aggregation" layer?
You could just IRF all of the ToR switches together in a big ring...
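To make that concrete, here's a minimal Comware 7 style IRF sketch for one member of such a ring. The member number, priority and FortyGigE port names are assumptions (check your model's supported IRF ports), and the `#` lines are annotations rather than CLI:

```
# Assumed: this unit is member 1 and should win master election.
irf member 1 priority 32
# Shut the candidate IRF physical ports before binding them.
interface range FortyGigE1/0/53 to FortyGigE1/0/54
 shutdown
# Bind one physical port per IRF port, one towards each ring neighbour.
irf-port 1/1
 port group interface FortyGigE1/0/53
irf-port 1/2
 port group interface FortyGigE1/0/54
# Bring the ports back up, then activate the IRF port config.
interface range FortyGigE1/0/53 to FortyGigE1/0/54
 undo shutdown
irf-port-configuration active
```

Repeat on each ToR with its own member number (renumber and reboot first), and remember MAD (BFD or LACP) so a split ring doesn't leave you with two masters.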
Do you want a fully meshed Layer 3 routed Spine-leaf like topology?
Very popular these days, Sir. Routing right down to the edge, multiple uplinks, equal-cost multi-path (ECMP) OSPF, etc.
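For flavour, a rough Comware sketch of one routed leaf uplink in that style. The process number, router-id, addresses and port name are all made up for illustration, and the ECMP limit varies by platform:

```
# OSPF process with ECMP across the spine uplinks.
ospf 1 router-id 10.0.0.11
 maximum load-balancing 8
 area 0.0.0.0
quit
# One routed point-to-point uplink towards a spine (repeat per spine).
interface FortyGigE1/0/53
 port link-mode route
 ip address 10.1.1.1 30
 ospf network-type p2p
 ospf 1 area 0.0.0.0
```

With one of those per spine uplink, the leaf load-shares over all equal-cost paths and reconverges on a link failure without spanning tree anywhere.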
Does this network need to participate with other networks either in other premises or with virtual infrastructure in a public cloud somewhere?
Will you be running Virtual switches in the servers (that we haven't yet mentioned) in the racks?
Should your server guy be looking at fewer servers, but with 25Gb NICs plugged straight into a bigger two-switch core, and forget about ToR?
Do we need to think about any overlay networks that our compute minded colleagues might need to achieve some layer 2 stretchiness either between racks or between premises?
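If the answer is yes, Comware can do that stretch natively with VXLAN. A heavily hedged sketch of a manual two-VTEP setup (all names, VLAN/VNI numbers and addresses are invented, feature support varies by model/software, and EVPN can automate the tunnel/flood lists that are static here):

```
# Enable L2VPN - the prerequisite for VSIs and VXLAN.
l2vpn enable
# Loopback used as this VTEP's tunnel source.
interface LoopBack0
 ip address 10.0.0.11 32
# Manual VXLAN tunnel to the peer VTEP in the other rack/site.
interface Tunnel1 mode vxlan
 source 10.0.0.11
 destination 10.0.0.21
# Map VNI 10010 into a VSI and attach the tunnel to it.
vsi stretch-vlan10
 vxlan 10010
  tunnel 1
# Classify VLAN 10 on a server-facing port into that VSI.
interface GigabitEthernet1/0/1
 service-instance 10
  encapsulation s-vid 10
  xconnect vsi stretch-vlan10
```

Same VSI/VNI on the far VTEP and VLAN 10 behaves as one broadcast domain across the routed fabric, with no spanning tree between racks.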
Are there security appliances (real or virtual) and/or load-balancing appliances (again, real or virtual) that we need to factor into our design?
Well,
Firstly, I'd look at 5700s in IRF pairs in the racks with a pair of 5940s sat over them. 40Gb BiDi connections to keep the cost down - loads of active-active, non-blocking 40Gb. Think about including a segregated 1Gb network (it can just be a single switch) purely to plug the management ports into and keep them off the "prod" LAN.
The (formerly ProCurve) ArubaOS switches are great in the campus, but IMHO Comware* having IRF, mBGP, VRF capability, fine-grained QoS, storage networking, EVPN, VXLAN VTEP, QinQ, deep debug and front/back airflow makes it the clear winner in a datacentre / server-connecting type role.
* Features vary by model etc. :-) We're talking at least 5700/59x0 here.
Paint us a picture, give us a use case and we'll help best we can.
I am a fan of getting the footprint as small / lean as possible (max perf per watt or max perf per 1U of rack) and then using the least number of biggest bandwidth per $ links to lash it together. You might be able to do it all with 2 good switches :-)
Hope that was useful - plenty to get stuck into.
Ian