Thanks a lot for the comments and clarifications. My responses are as follows:
We are not using IRIX but Linux as the operating system. The config.guess script identifies the system as mips64-unknown-gnu-linux. I guess it identifies the platform as "unknown" because it is all proprietary, home-built hardware.
Your offer to help us port the code to our platform is really generous. If my supervisor allows that, I'll create an account for you and let you know the details.
Now, about NetPIPE, you are both right and wrong. You are absolutely right that NetPIPE does not like more than 2 processes (it kills itself). Fortunately, I only have 2 boards in my test cluster, so that is not a problem. And Open MPI does spawn two copies of NetPIPE on the two boards; I have verified this by doing a "ps -ef" on both boards and seeing the process running. However, I used mpiexec instead of mpirun to create the processes. My question (something I have always wondered) is: what is the difference between mpirun and mpiexec?
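On the mpirun/mpiexec question: as far as I know, in Open MPI the two names are front-ends for the same launcher (orterun), so they should behave identically. A small sketch, guarded so it is a harmless no-op on a machine without Open MPI installed:

```shell
# Hedged sketch: in Open MPI, mpirun and mpiexec invoke the same
# underlying launcher, so these two commands should do the same thing.
# Guarded so the snippet does nothing on machines without Open MPI.
if command -v mpirun >/dev/null 2>&1; then
    mpirun  -np 2 ./NPmpi
    mpiexec -np 2 ./NPmpi
    # On many installs both names resolve to the same program:
    ls -l "$(command -v mpirun)" "$(command -v mpiexec)"
else
    echo "Open MPI not installed; skipping"
fi
```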
I will run it through the debugger tomorrow and let you know the outcome.
On 2/28/06, Brian Barrett <email@example.com> wrote:
On Feb 28, 2006, at 7:45 PM, Durga Choudhury wrote:
> When I downloaded openMPI and tried to compile it for our MIPS64
> platform, it broke at 3 places.
I'm guessing since you call it MIPS64 that you aren't running IRIX,
since most SGI users just call it MIPS ;). We don't really support
the MIPS platform at this time, due to lack of resources. None of
the institutions involved in Open MPI have MIPS-based clusters, so it
hasn't been on our to-do list. If someone were to offer a temporary
guest account for a week or two, it would help immensely. Or, even
better, I'm happy to guide someone through cleaning up the port...
> 1. The configure script in the root directory did not have a case
> for MIPS64. That is fixed in the attached configure patch file.
Thanks. For future reference, configure is generated from a bunch
of .m4 macro files, so those are what need to be patched. The one in
this case is config/ompi_config_asm.m4. I've committed a patch for
this in our SVN trunk - it should be in the nightly tarballs tonight.
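To illustrate the regenerate-from-m4 workflow described above (a sketch; "mips64.patch" is a hypothetical file name, and the snippet is a no-op outside an Open MPI source tree):

```shell
# Sketch (assumes an Open MPI source tree with the GNU autotools
# installed). configure is generated output, so patch the .m4 macro
# source and regenerate rather than editing configure directly.
if [ -x ./autogen.sh ]; then
    patch -p0 < mips64.patch   # hypothetical patch touching config/ompi_config_asm.m4
    ./autogen.sh               # regenerates configure from the .m4 macros
    ./configure && make
else
    echo "not in an Open MPI source tree; skipping"
fi
```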
> 2. The Makefile.am in the opal/asm/ directory is incorrect. It creates
> a platform-dependent file called atomic-asm.s that has #include's
> in it. According to the gcc manual, .s assembly files are NOT
> preprocessed, and hence none of the macros in the atomic-asm.s file
> were expanded.
> Note that it works fine for IA32 platforms because that version of
> the atomic-asm.s file does not have macros in it. The attached patch
> fixes this. Note that you need to rerun automake after patching
> this file.
Yes, the MIPS assembly is IRIX specific and (I believe) requires the
use of the SGI compilers. Using a capital S for the suffix isn't
really a fix, as some compilers we have to support don't like that
suffix (I can't remember offhand which, but there definitely are
some). The right solution is to remove the short-cuts we took for the MIPS
assembly and make it like all the other platforms. This is unlikely
to happen within the OMPI development team unless someone provides us
with access to machines.
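The suffix rule being discussed can be demonstrated directly (file names here are made up for the demo; guarded so it does nothing without gcc installed):

```shell
# A capital-S assembly file is run through the C preprocessor before
# being assembled; a lowercase-s file is not, so #define/#include in
# a .s file are never expanded.
cat > demo.S <<'EOF'
#define ANSWER 42
.long ANSWER
EOF
if command -v gcc >/dev/null 2>&1; then
    # Explicitly preprocess as "assembler-with-cpp", which is what gcc
    # does automatically for .S inputs; the output contains ".long 42".
    gcc -E -x assembler-with-cpp demo.S | grep -v '^#'
fi
```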
> 3. I don't remember the third place it broke right now. I can give
> out a third patch later.
> Now the question is: Is there a benchmark program I can run for the
> Open MPI suite of libraries? I tried NetPIPE from Ameslab.gov. It
> seems to run, but it terminates without producing any output,
> either to the console or to any file. I tried specifying the output
> file explicitly with the -o option, but to no avail.
NetPIPE should produce output to both standard output and a file
np.out. If it is failing to do so, you might want to check if it
started in the first place. I think some versions of NetPIPE get
unhappy unless you run with exactly two processes (mpirun -np 2
./NPmpi), but I could be mistaken there. If you aren't seeing the
output, there are some fairly serious issues with the Open MPI
build. First step would be to make sure the NetPIPE processes are
starting. Assuming they are, I would start them in a debugger,
mpirun -np 2 -d xterm -e gdb ./NPmpi
and see if they produce output that way (which would indicate that
there's a problem with our standard output forwarding). If they
produce no output, you might want to step through MPI_INIT and figure
out where they are getting hung up. If you can get some information
about where things are getting stuck, I can probably help with
narrowing down the issue.
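If xterm is awkward to use on embedded boards, a non-interactive variant of the same idea is to run each process under gdb in batch mode (a sketch; assumes gdb is on the boards and NPmpi was built with -g, and is guarded to do nothing where Open MPI or gdb is missing):

```shell
# Sketch: launch each NetPIPE process under gdb in batch mode so a
# hang in MPI_Init still yields a backtrace, without needing X/xterm.
if command -v mpirun >/dev/null 2>&1 && command -v gdb >/dev/null 2>&1; then
    mpirun -np 2 gdb -batch -ex run -ex backtrace ./NPmpi
else
    echo "mpirun or gdb not available; skipping"
fi
```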
By the way, if you are interested in continuing to work on getting
Open MPI ported to your platform, I'd recommend subscribing to the
devel mailing list - the discussions tend to get much more technical,
as we're less worried about boring a bunch of people who just want to
use Open MPI. The mailing list URL is:
I'd also recommend working from a Subversion checkout of the trunk -
it's much easier to feed patches back (and they're much more likely
to be accepted) if you are working from the same source as all the
core developers. More information is available at this URL:
Open MPI developer
Devil wanted omnipresence;
He therefore created communists.