Jeff and Samuel,
Thanks for your responses.
Jeff Squyres wrote:
> If you need per-job settings, then a wrapper is probably your best bet.
> On Sep 10, 2008, at 5:08 AM, Samuel Sarholz wrote:
>> Hi Jeff,
>> I think setting global limits will not help in this case, as limits
>> like the stack size need to be program-specific.
>> So far I am using wrappers; however, the solution is a bit nasty.
>> It would be great if there were another way.
>> However, I doubt that there is one, as the FAQ states:
>> More specifically -- it may not be sufficient to simply execute the
>> following, because the ulimit may not be in effect on all nodes where
>> Open MPI processes will be run:
>> shell$ ulimit -l unlimited
>> shell$ mpirun -np 2 my_mpi_application
>> But this is exactly the case that is needed, as any global or per-user
>> (.bashrc, .zshrc, ...) setting will only work if you run one kind of
>> job at a time.
>> And wrapping:
>> ulimit -s 300000
>> mpirun -np 2 zsh -c wrap.sh
>> works, but it is not nice.
>> best regards,
>> Jeff Squyres wrote:
>>> There are several factors that can come into play here. See this
>>> FAQ entry about registered memory limits (the same concepts apply to
>>> the other limits):
>>> On Sep 9, 2008, at 7:04 PM, Amidu Oloso wrote:
>>>> mpirun under Open MPI is not picking up the limit settings from the
>>>> user environment. Is there a way to do this, short of wrapping my
>>>> executable in a script where my limits are set and then invoking
>>>> mpirun on that script?
>>>> users mailing list