[Rocks-Discuss] Best Practice recommendation for 10Gb Ethernet.
24, Oct 2013 17:27
> If the 10Gb card is not natively supported by Rocks 5.3 (which really
> means RHEL/CentOS 5.4) you will need to build a new kernel roll if you
> want to use it as your management ("private") network. I'm surprised
> PXE does not work; if the cards really do not support PXE (check all
> BIOS settings) you should not use these for your management network.
> You may need to try things like disabling all the onboard NICs first
> and see if you can PXE off the 10Gb NIC on a reboot.
I tried disabling the onboard NICs and looking for an option to PXE from the
10Gb card, but to no avail.
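(For what it's worth, a quick way to confirm whether the installed OS even
has a driver for the card is something like the following; the ixgbe module
name is only an example, substitute whatever your SFP+ card actually uses,
e.g. mlx4_en for Mellanox:)

    lspci | grep -i ethernet     # identify the 10Gb controller
    modinfo ixgbe                # is a module for it available to this kernel?
    dmesg | grep -i ixgbe        # was it actually loaded at boot?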
> For most of our 10Gb connected clusters we use 10Gb on the private
> (eth0) side of the Frontend and 1Gb on the private for all the compute
> nodes. Then we add another application network for all the compute
> nodes' 10Gb. This allows us to use standard Gb for management and NFS
> and the high speed network for MPI.
> It's twice the cables, but if your 10Gbit network doesn't PXE, you
> should stick with the commodity Ethernet for management.
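(For reference, that extra application network is usually defined from the
frontend with the Rocks command line, roughly as sketched below. The network
name "fast", the subnet, and the interface name eth2 are only placeholders;
check "rocks list host interface" first to see what the 10Gb ports were
detected as.)

    # define the application network (name and subnet are examples only)
    rocks add network fast subnet=192.168.2.0 netmask=255.255.255.0
    # put each compute node's 10Gb interface on it (repeat per node)
    rocks set host interface subnet compute-0-0 eth2 fast
    rocks set host interface ip compute-0-0 eth2 192.168.2.10
    # push the configuration out
    rocks sync config

On 5.3 you may still need to restart networking on (or reinstall) the nodes
before the new addresses come up.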
If I add another 'fast' network on the back, does MPI automatically know to
use the 10Gb connection, or is there some other configuration I need to do to
enable it? Also, can I add the 'fast' connection to the headnode too?
Secondly, if I wanted to free up the private network afterwards, could I
change the 1Gb private connections to a 100Mb switch and not lose any
performance?
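(On the MPI question: by default Open MPI's TCP transport will use any
interface that is up, so it will not necessarily prefer the 10Gb link; the
usual fix is to pin it to the interface explicitly. A minimal sketch,
assuming Open MPI from the hpc roll and that the 10Gb port shows up as eth2,
both of which are assumptions:)

    # restrict MPI traffic to the 10Gb interface (interface name is a placeholder)
    mpirun --mca btl_tcp_if_include eth2 -np 16 -machinefile machines ./my_mpi_app
    # or, equivalently, exclude loopback and the management interface
    mpirun --mca btl_tcp_if_exclude lo,eth0 -np 16 -machinefile machines ./my_mpi_app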
> Also, what OS are you running? Meaning, what is the output of "rocks list
> roll"?
I am simply using Rocks 5.3
[root@rgserv11 ~]# rocks list roll
NAME        VERSION  ARCH    ENABLED
area51:     5.3      x86_64  yes
base:       5.3      x86_64  yes
bio:        5.3      x86_64  yes
ganglia:    5.3      x86_64  yes
hpc:        5.3      x86_64  yes
kernel:     5.3      x86_64  yes
os:         5.3      x86_64  yes
sge:        5.3      x86_64  yes
web-server: 5.3      x86_64  yes
I'm probably on my way to a reinstall to use all the OS disks
> mason katz
> On Thu, Apr 15, 2010 at 12:14 AM, Marc Beaumont wrote:
> > Hi All,
> > I need a little advice on building my cluster.
> > I am attempting to build using Rocks 5.3 on Dell PowerEdge R610s. They
> > have 4 onboard 1Gb NICs, and I have dual-port 10Gb SFP+ PCI Express
> > cards in the machines.
> > I would like some advice on the best way to set up and configure
> > Rocks on this kind of hardware.
> > When I do an install on the hardware as it stands, it configures the
> > private network on port 1 of the 10Gb card and the public on port 2 of
> > the same card.
> > This is okay, as I can then just set up one of the 1Gb interfaces to be
> > on my public network after I finish the install. I have to install the
> > latest drivers for the 10Gb card, but this is fairly trivial.
> > Then when I come to install my compute nodes, they will not PXE boot off
> > the 10Gb cards, and the standard boot CD does not have the drivers for
> > these cards, so I'm a little stuck.
> > I was wondering if it is better to set it all up using only the 1Gb
> > interfaces and then add my 'fast' network later using the 10Gb cards,
> > thus keeping the 1Gb private connections as well.
> > I'd rather not have to do this, as the 10Gb cards should be ample for
> > the private network, but overall, I need a working cluster setup.
> > Is there an alternative way of installing the compute nodes, then
> > installing the network card drivers, and then adding them to the cluster
> > after they are installed? That would be a suitable alternative.
> > Any and all advice, please.
> > Marc Beaumont
> > Senior IT Support Engineer
> > Aircraft Research Association