Re: Molcas 7.8 over InfiniBand - problem with nodes

Posted by Piotr Stuglik on July 08, 2013 at 08:43:39:

In Reply to: Molcas 7.8 over InfiniBand - problem with nodes posted by Piotr Stuglik on July 07, 2013 at 20:19:12:

Hi,

I think I found what was wrong last time: I'm running Molcas 7.8 on 2 nodes right now and, so far, none of the 30 tests has failed. For some reason Open MPI does not work as I would expect (i.e. it does not manage the message passing on its own without additional help). When the 2-node tests finish I will try 3 nodes, because some people have reported that Molcas works on 1 or 2 nodes, but on 3 nodes and above not so much.

Adding the -infiniband flag to the configure options before compilation is necessary, and so is exporting $CPUS. Using -machineflag is not necessary. I was thinking that adding the line "export CPUS=$(cat $PBS_NODEFILE | wc -l)" to the Molcas driver file would be a good idea; see the sketch below.
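
For anyone else setting this up, here is a rough sketch of what I mean. The resource requests, install path and input file name are only placeholders for whatever your cluster and calculation use; the -infiniband configure flag and the CPUS export are the parts that actually mattered for me:

    # At build time, add the InfiniBand option to whatever configure
    # flags you normally use (run inside the Molcas 7.8 source tree):
    ./configure -infiniband
    make

    # In the PBS job script (example resources, adjust to your cluster):
    #PBS -l nodes=2:ppn=8
    #PBS -l walltime=24:00:00
    cd $PBS_O_WORKDIR

    # One MPI process per line of the PBS node file; this is the line
    # I would otherwise put into the Molcas driver file itself:
    export CPUS=$(cat $PBS_NODEFILE | wc -l)

    export MOLCAS=/opt/molcas78          # placeholder install location
    molcas test.input > test.output      # assumes the molcas driver is on your PATH

At least in my setup, no -machineflag or explicit machinefile was needed; Open MPI picked up the hosts from the PBS allocation on its own.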

Regards

Piotr

