using ngsolve with mpi on cluster
5 years 7 months ago #2302
by gcdiwan
using ngsolve with mpi on cluster was created by gcdiwan
Dear NGSolve Developers,
I am trying to understand how NGSolve works on an HPC cluster. I launch the mpi_poisson.py from here with:
Code:
mpiexec -n 4 python3 mpi_poisson.py
on one of the nodes. I would have expected the code to print the dofs owned by each rank at the end and to save the solution in 4 different VTK files, as dictated by the partitions. However, what I see is 4 independent copies of the code being run, so it is always rank 0 that owns all the dofs and the output is saved to a single VTK file:
Code:
Hello from rank 0 of 1
rank 0 has 1288 of 1288 dofs!
L2-error 1.3801833471401903e-07
Hello from rank 0 of 1
rank 0 has 1288 of 1288 dofs!
L2-error 1.380183347140096e-07
Hello from rank 0 of 1
rank 0 has 1288 of 1288 dofs!
L2-error 1.3801833471401694e-07
Hello from rank 0 of 1
rank 0 has 1288 of 1288 dofs!
L2-error 1.3801833471401066e-07
How do I make sure MPI is enabled when using NGSolve on the HPC cluster? I can see that on the cluster, build/CMakeCache.txt has -DUSE_MPI=OFF, and that NETGEN_CMAKE_ARGS has:
-DCMAKE_CXX_COMPILER=/opt/gridware/depots/3dc76222/el7/pkg/compilers/gcc/8.3.0/bin/g++;-DCMAKE_C_COMPILER=/opt/gridware/depots/3dc76222/el7/pkg/compilers/gcc/8.3.0/bin/gcc;
I would have expected the C++ compiler to be mpicc and not g++. Is this the problem? I am attaching the CMakeCache.txt from build and from build/netgen for your reference. Thanks a lot for your time and help!
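For what it is worth, the "rank 0 of 1" lines above mean that every process sees an MPI communicator of size 1. A quick check of the MPI launcher and the Python stack, independent of NGSolve (assuming mpi4py is available on the cluster), would be something like:
Code:
mpiexec -n 4 python3 -c "from mpi4py import MPI; c = MPI.COMM_WORLD; print('rank', c.rank, 'of', c.size)"
If this prints four distinct ranks out of 4, the launcher and the MPI bindings are working and the problem is in the NGSolve build; if it again reports size 1 everywhere, the processes are not being launched as a single parallel job.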
5 years 7 months ago #2303
by lkogler
Replied by lkogler on topic using ngsolve with mpi on cluster
You need to add -DUSE_MPI=ON to your cmake command. On a cluster, -DUSE_GUI=OFF is probably advisable.
It is not recommended to use the MPI wrapper compilers (mpicc/mpicxx); cmake should find the appropriate compile and link flags on its own.
However, if possible, make sure that the MPI library and NGSolve are compiled with the same underlying compiler.
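A minimal configure step along these lines should do it (the source and install paths below are only placeholders for your own directories):
Code:
cmake -DUSE_MPI=ON -DUSE_GUI=OFF \
      -DCMAKE_INSTALL_PREFIX=$HOME/ngsolve-install \
      $HOME/src/ngsolve
make -j4 && make install
After reconfiguring, the USE_MPI entry in build/CMakeCache.txt should read ON instead of OFF.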
Best,
Lukas
5 years 7 months ago #2326
by gcdiwan
Replied by gcdiwan on topic using ngsolve with mpi on cluster
lkogler wrote: You need to add -DUSE_MPI=ON to your cmake command. On a cluster, -DUSE_GUI=OFF is probably advisable.
It is not recommended to use the MPI wrapper compilers (mpicc/mpicxx); cmake should find the appropriate compile and link flags on its own.
However, if possible, make sure that the MPI library and NGSolve are compiled with the same underlying compiler.
Best,
Lukas
I am not sure whether we have used the same compilers for the MPI libraries and NGSolve. Whereas MPI_C_COMPILER is set to mpicc, NGSolve's cmake argument for the C compiler is gcc. Do you think this will have any effect on performance?
Thank you!
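For reference, a quick way to check which compiler the MPI wrappers on the cluster actually invoke (the exact flag depends on the MPI implementation) is:
Code:
mpicc -show       # MPICH and derivatives: print the underlying compiler command
mpicc --showme    # Open MPI equivalent
gcc --version     # compare with the compiler used for the NGSolve build
If both point to the same gcc 8.3.0, the MPI library and NGSolve share the same underlying compiler, which is what the reply above says matters.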