Pasha, thanks for your comment.
I have added my comments inline.
> Please see my comment inline.
>> More generally, in the case of a front-end node whose processors are distinctly different from
>> those of the worker nodes (same vendor, i.e. Intel), can Open MPI applications compiled on one
>> run correctly on the others?
> It is possible to come up with a set of gcc flags that generates a "generic" binary for both systems. The problem is that the generic code would not be optimal for the backend nodes, so it may affect application performance.
In fact, we also saw trouble and a loss of performance with the HPL and IMB tests.
> We had similar setup issues on our platform as well. As a workaround, we compiled two versions of the code and merged them into a single install directory.
At the moment this is the approach we are pursuing, and we are doing the same work for the other two compilers, PGI and Intel.
In the end we are leaning towards giving the front-end the same architecture as the backend nodes, even if
this can result in a waste of valuable computing resources. Otherwise we are considering having users move
their compilations, after a test phase on the front-end, onto the backend nodes through the scheduler.
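For the scheduler route, the idea would be a small batch job that performs the compilation on a backend node. This is only a sketch: it assumes a PBS/Torque-style scheduler and an illustrative source path, neither of which comes from the thread.

```shell
#!/bin/sh
#PBS -N compile-app
#PBS -l nodes=1:ppn=1
#PBS -l walltime=00:30:00

# Because this job runs on a backend node, -march=native now matches
# the processors the application will actually execute on.
cd $HOME/app-src            # hypothetical application source tree
./configure CFLAGS="-O2 -march=native"
make && make install
```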