
Open MPI User's Mailing List Archives


From: Götz Waschk (goetz.waschk_at_[hidden])
Date: 2007-05-11 07:19:33


On 4/27/07, Götz Waschk <goetz.waschk_at_[hidden]> wrote:
> I'm testing my new cluster installation with the hpcc benchmark and
> openmpi 1.2.1 on RHEL5 32 bit. I have some trouble with using a
> threaded BLAS implementation. I have tried ATLAS 3.7.30 compiled with
> pthread support. It crashes as reported here:
[...]
> I have a problem with Goto BLAS 1.14 too: the output of hpcc stops
> before the HPL run, and then the hpcc processes appear to hang while
> consuming 100% CPU. If I set the maximum number of threads for Goto
> BLAS to 1, hpcc works fine again.

Hi,

replying to myself here. I've tested this a bit more. It works fine
if I don't start hpcc from a Gridengine job. I don't think this is
related to openmpi's Gridengine integration, though, as the problem
persists even if I disable the Gridengine integration on the mpirun
command line. I'll keep you informed if I find a solution.
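For reference, the two workarounds I mentioned can be sketched roughly as
follows. This is only a sketch, not the exact commands I ran: the
GOTO_NUM_THREADS variable is my understanding of how Goto BLAS limits its
thread count, `pls` is my reading of Open MPI 1.2's process-launch
framework name, and the process count, hostfile, and binary path are
placeholders.

```shell
# Limit the BLAS library to one thread per MPI process.
# GOTO_NUM_THREADS is Goto BLAS's thread-count variable;
# OMP_NUM_THREADS is set as well as a common fallback.
export GOTO_NUM_THREADS=1
export OMP_NUM_THREADS=1

# Disable Open MPI 1.2's Gridengine launcher component so mpirun
# falls back to rsh/ssh startup even inside a Gridengine job
# ("pls" is the process-launch framework in the 1.2 series).
mpirun -np 4 -mca pls ^gridengine ./hpcc
```

With the thread count forced to 1, each MPI rank runs single-threaded
BLAS, which sidesteps the interaction between the threaded library and
the launch environment at the cost of per-process BLAS performance.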

Regards, Götz Waschk

-- 
AL I:40: Do what thou wilt shall be the whole of the Law.