Open MPI User's Mailing List Archives


From: Götz Waschk (goetz.waschk_at_[hidden])
Date: 2007-05-11 07:19:33


On 4/27/07, Götz Waschk <goetz.waschk_at_[hidden]> wrote:
> I'm testing my new cluster installation with the hpcc benchmark and
> openmpi 1.2.1 on RHEL5 32 bit. I have some trouble using a threaded
> BLAS implementation. I have tried ATLAS 3.7.30 compiled with pthread
> support. It crashes as reported here:
[...]
> I have a problem with Goto BLAS 1.14 too: the output of hpcc stops
> before the HPL run, and the hpcc processes then appear to do nothing
> while consuming 100% CPU. If I set the maximum number of threads for
> Goto BLAS to 1, hpcc works fine again.

Hi,

Replying to myself here. I've tested this a bit more: it works fine
if I don't start hpcc from a Gridengine job. I don't think this is
related to Open MPI's Gridengine integration, as the problem persists
if I disable the Gridengine integration on the mpirun command line.
I'll keep you informed if I find a solution.
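
For reference, this is roughly how I limit Goto BLAS to a single
thread, as mentioned in the quoted mail above. I'm assuming here that
this build honours the GOTO_NUM_THREADS environment variable
(OMP_NUM_THREADS may also work, depending on how it was built); the
rank count and binary path are just placeholders:

  # limit Goto BLAS to one thread per MPI rank and export the
  # variable to all nodes with mpirun's -x option
  export GOTO_NUM_THREADS=1
  mpirun -np 16 -x GOTO_NUM_THREADS ./hpcc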
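
And this is what I mean by disabling the Gridengine integration on
the mpirun command line. I'm assuming the 1.2-series launcher is the
"gridengine" component of the "pls" framework, so excluding it should
make mpirun fall back to the rsh/ssh launcher; again, the host file
and rank count are placeholders:

  # exclude the gridengine launch component so mpirun starts the
  # processes over rsh/ssh instead
  mpirun --mca pls ^gridengine -np 16 -hostfile ./hosts ./hpcc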

Regards, Götz Waschk

-- 
AL I:40: Do what thou wilt shall be the whole of the Law.