Numerical artifacts when using MPI parallelisation

3 years 9 months ago #2908 by philipp
Dear NGSolve community,

Recently I have been experimenting with NGSolve and MPI.
For a start, I used a transport problem whose velocity field is given by a Stokes
equation. With shared-memory parallelisation this works just fine.
When I switch to MPI (basically the same code, but with a distributed mesh),
I observe first of all that the velocity magnitude somehow increases and that
the vector field gets distorted. Depending on the number of ranks, there also
seem to be artifacts near the boundaries where the mesh is partitioned.
For matrix inversion I tried pardiso as well as bddc + GMRes, and both produce
the same result.

I have no idea what is causing this, and any hints or suggestions
are very much appreciated.
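
For reference, the only MPI-specific change in my code is the mesh distribution,
roughly along the lines of this sketch (simplified, not my actual script, and the
exact API may differ between NGSolve versions):

from mpi4py import MPI
import netgen.meshing
from netgen.geom2d import unit_square
from ngsolve import Mesh

comm = MPI.COMM_WORLD
if comm.rank == 0:
    # the master rank generates the mesh and distributes it to the other ranks
    ngmesh = unit_square.GenerateMesh(maxh=0.1)
    ngmesh.Distribute(comm)
else:
    # the other ranks receive their part of the distributed mesh
    ngmesh = netgen.meshing.Mesh.Receive(comm)
mesh = Mesh(ngmesh)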

All the best,
Philipp
3 years 9 months ago #2910 by lkogler
Could you upload your code? I will take a look at it.

Best,
Lukas
3 years 9 months ago #2911 by philipp
Hello Lukas,

thank you very much for your interest.
I have attached an MWE (just a forced, unsteady Stokes flow) that reproduces the
issue on my system. The problem is particularly evident with three ranks.

All the best,
Philipp
3 years 9 months ago - 3 years 9 months ago #2917 by lkogler
The problem is that the NumberFESpace does not work with MPI. Fixing this should not be hard, but it requires a couple of tweaks in numerous places. Expect it in the master branch sometime this week; I will post here once it is in.

However, even once it works, be aware that the NumberFESpace is quite a bottleneck with MPI: it consists of a single DOF that is shared by all MPI ranks, so whenever a vector is cumulated (i.e. when its values are made consistent across ranks), EVERY rank (except rank 0) sends a message to, and receives a message from, EVERY OTHER rank (except rank 0).

For this reason, in practice it might be a good idea to instead add a small regularization to the pressure-pressure block for Stokes problems with pure Dirichlet conditions.
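
Something along these lines, for example (just a sketch of the idea: a standard Taylor-Hood Stokes form with a small pressure-pressure term instead of a NumberFESpace Lagrange multiplier; the mesh, orders and the value 1e-10 are only examples and may need tuning for your problem):

from ngsolve import *
from netgen.geom2d import unit_square

mesh = Mesh(unit_square.GenerateMesh(maxh=0.1))

V = VectorH1(mesh, order=2, dirichlet=".*")   # velocity, pure Dirichlet boundary
Q = H1(mesh, order=1)                         # pressure, no extra NumberFESpace
X = V * Q

(u, p), (v, q) = X.TnT()

# the -1e-10*p*q term weakly fixes the pressure constant instead of the
# zero-mean constraint enforced via a NumberFESpace
stokes = (InnerProduct(grad(u), grad(v)) + div(u)*q + div(v)*p - 1e-10*p*q) * dx
a = BilinearForm(stokes).Assemble()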

Best,
Lukas
Last edit: 3 years 9 months ago by lkogler.
3 years 9 months ago #2920 by philipp
Thank you very much for the clarification and the advice. I will go with the approach you suggested for the sake of performance.

All the best,
Philipp
3 years 9 months ago #2926 by lkogler
The NumberFESpace should work with MPI as of the coming nightly release.

Best, Lukas