Brian Barrett wrote:
> On Jul 14, 2007, at 8:26 AM, Dirk Eddelbuettel wrote:
>> Please let us (ie Debian's openmpi maintainers) know how else we can
>> help. I am ccing the porters lists (for hppa, m68k, mips) too to
>> invite them to help. I hope that doesn't get the spam filters
>> going... I may contact the porters once we have a failure; s390 and
>> sparc activity are not as big these
> Open MPI uses some assembly for things like atomic locks, atomic
> compare and swap, memory barriers, and the like. We currently have
> support for:
> * x86 (32 bit)
> * x86_64 / amd64 (32 or 64 bit)
> * UltraSparc (v8plus and v9* targets)
> * IA64
> * PowerPC (32 or 64 bit)
> We also have code for:
> * Alpha
> * MIPS (32 bit NEW ABI & 64 bit)
> This support hasn't been well tested in a while, and it sounds like
> it doesn't work for MIPS. At one time, we supported the sparc v8
> target, but that support was dropped. The other platforms (hppa,
> mipsel (how is this different from MIPS?), s390, m68k) aren't
> supported at all by Open MPI. If you can get the real error messages,
> I can help on the MIPS issue, although it'll have to be a low priority.
As maintainer of the atomics code for two projects unrelated to OpenMPI,
I thought I'd pass on some of my insight. I'll not post any code here
to avoid any accidental license questions.
HPPA lacks an atomic compare-and-swap and is therefore probably a lost
cause. The Linux kernel uses HPPA's only atomic instruction,
load-and-clear, to implement a spinlock and a hashed table of spinlocks
to implement atomic operations. This works because the atomic_read and
atomic_set macros honor the spinlocks. This is not the case with ompi's
atomics, is it? OpenMPI appears to contain fragments of such an
array-of-spinlocks implementation for SPARCv8, but Brian's comments
suggest to me that this may no longer work.
ARM before v6 needs no memory barriers, but lacks atomic instructions
other than unconditional swap (though very few multi-processor systems
were built with earlier chips). However, a message on the libc-ports
mailing list (http://sourceware.org/ml/libc-ports/2005-10/msg00016.html)
says of the code used in glibc:
/* Atomic compare and exchange. These sequences are not actually Atomic;
there is a race if *MEM != OLDVAL and we are preempted between the two
swaps. However, they are very close to atomic, and are the best that a
pre-ARMv6 implementation can do without operating system support.
LinuxThreads has been using these sequences for many years. */
So, ompi might try getting away with the same logic if an ARM port is
high priority for somebody. Alternatively, on a new enough Linux kernel
(>= 2.6.12, IIRC) you get kernel support for CAS by calling a function
in a "high page" (like the VDSO on x86) that is implemented natively on
>= ARMv6 and traps to the kernel otherwise (the kernel disables
interrupts and then uses the not-quite-atomic sequence).
For ARMv6 you get a load-exclusive and store-exclusive pair, and you get
real memory barriers as well.
M68K has a CAS instruction and memory barriers are no-ops. This should
be an easy one to implement from the instruction set reference docs.
s390 is one I don't have any first-hand experience with, but I know from
peeking at the Linux kernel source that it has a CAS instruction and a
memory barrier (the kernel reuses the eieio macro name from early PPCs
for it). Again, this should be easy from the ISA docs.
MIPS is supposed to work with ompi on IRIX, but there is no
atomic-mips-linux.s in OpenMPI 1.2.3. I was going to try to build 1.2.3
on an O2K (IRIX64 6.5 and gcc 3.3) today, but found that configure dies with
configure: error: Could not determine global symbol label prefix
So, I'll not be pursuing that.
> We don't currently have support for a non-assembly code path. We
> originally planned on having one, but the team went away from that
> route over time and there's no way to build Open MPI without assembly
> support right now.
Paul H. Hargrove PHHargrove_at_[hidden]
Future Technologies Group
HPC Research Department Tel: +1-510-495-2352
Lawrence Berkeley National Laboratory Fax: +1-510-486-6900