This suggests that your chipset is not able to handle full PCI-E
speed on more than 3 ports. This usually depends on the way the PCI-E
links are wired through the ports and on the capacity of the chipset
itself. As an example, we were never able to reach full-speed
performance with Myrinet 10G on IBM e325 nodes because of chipset
limitations; we had to have the nodes changed to solve the issue.
Running several instances of NPtcp simultaneously should show the
bandwidth limit of the PCI-E bus on your machine.
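For example, a test along these lines (a sketch only: the interface addresses `192.168.*.2` are placeholders for the IPs bound to each NIC on the remote node, and the NetPIPE binaries are assumed to be in the current directory):

```shell
# On the remote node, start one NPtcp receiver per planned stream.
# Note: if your NetPIPE build cannot change the listening TCP port,
# simultaneous receivers on one node will collide; in that case
# stagger the pairs or spread them across different node pairs.
#   remote$ ./NPtcp &

# On the local node, drive one transmitter per NIC in parallel,
# pointing each at the address bound to a different interface
# (placeholder IPs):
./NPtcp -h 192.168.1.2 -o np.eth0.out &
./NPtcp -h 192.168.2.2 -o np.eth1.out &
./NPtcp -h 192.168.3.2 -o np.eth2.out &
wait

# If the summed peak bandwidth across the output files plateaus near
# ~2 Gbps instead of ~3 x 890 Mbps, the chipset / PCI-E fabric is the
# likely bottleneck rather than Open MPI.
```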
On Dec 17, 2007, at 21:51, Allan Menezes wrote:
> Hi George,
> The following test peaks at 8392 Mbps on a1:
>   mpirun --prefix /opt/opnmpi124b --host a1,a1 -mca btl tcp,sm,self -np 2 ./NPmpi
> and on a2:
>   mpirun --prefix /opt/opnmpi124b --host a2,a2 -mca btl tcp,sm,self -np 2 ./NPmpi
> gives 8565 Mbps.
> On a1:
>   mpirun --prefix /opt/opnmpi124b --host a1,a1 -np 2 ./NPmpi
> gives 8424 Mbps, and on a2:
>   mpirun --prefix /opt/opnmpi124b --host a2,a2 -np 2 ./NPmpi
> gives 8372 Mbps.
> So there's enough memory and processor bandwidth for 3 PCI Express Ethernet cards, especially between a1 and a2?
> Thank you for your help. Any assistance would be greatly appreciated!
> Regards, Allan Menezes
>
> You should run a shared memory test to see what's the max memory bandwidth you can get.
> Thanks,
> george.
>
> On Dec 2007, at 7:14 AM, Gleb Natapov wrote:
>>> On Sun, Dec 16, 2007 at 06:49:30PM -0500, Allan Menezes wrote:
>>>>> How many PCI-Express Gigabit Ethernet cards does Open MPI
>>>>> support with a corresponding linear increase in bandwidth,
>>>>> measured with NetPIPE's NPmpi under Open MPI's mpirun?
>>>>> With two PCI Express cards of about 892 Mbps each I get a
>>>>> bandwidth of 1.75 Gbps; for three PCI Express cards (one built
>>>>> into the motherboard) I get 1.95 Gbps. They all measure around
>>>>> 890 Mbps individually with NPtcp and NPmpi under Open MPI. For
>>>>> two cards there is an increase in bandwidth, but not for three
>>>>> PCI Express Gigabit Ethernet cards.
>>>>> I have tuned the cards using NetPIPE and the $HOME/.openmpi/
>>>>> mca-params.conf file for latency and percentage bandwidth.
>>>>> Please advise.
>>> What is in your $HOME/.openmpi/mca-params.conf? Maybe you are
>>> hitting a chipset limit here. What is your HW configuration? Can
>>> you try to run NPtcp on each interface simultaneously and see what
>>> BW you get?
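Pulling the figures from this thread together, a quick back-of-the-envelope check of how far the striping is from linear scaling (Python; the numbers are the ones quoted above, and the ~2 Gbps ceiling is an inference from them, not a measurement):

```python
# Per-interface bandwidth as measured individually with NPtcp/NPmpi (Mbps).
per_nic = 890.0

# Aggregate NPmpi bandwidth over striped interfaces (Mbps), as quoted
# in this thread: two cards -> 1.75 Gbps, three cards -> 1.95 Gbps.
measured = {2: 1750.0, 3: 1950.0}

for nics, aggregate in sorted(measured.items()):
    ideal = nics * per_nic              # linear-scaling expectation
    efficiency = aggregate / ideal
    print(f"{nics} NICs: {aggregate:.0f} Mbps vs {ideal:.0f} Mbps ideal "
          f"({efficiency:.0%} of linear)")
# 2 NICs: ~98% of linear; 3 NICs: ~73% -- consistent with an aggregate
# ceiling of roughly 2 Gbps somewhere in the chipset / PCI-E fabric.
```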
Dr. Aurelien Bouteiller, Sr. Research Associate
Innovative Computing Laboratory - MPI group
+1 865 974 6321
1122 Volunteer Boulevard
Claxton Education Building Suite 350
Knoxville, TN 37996