Guosheng Fu wrote: ../comp/libngcomp.so: undefined reference to `_ZNK6netgen8Ngx_Mesh26MultiElementTransformationILi2ELi2EDv4_dEEviiPKT1_mPS3_mS6_m'
Things like this happen when C++ libraries that were compiled with different compilers are linked together (e.g. UMFPACK compiled with gcc 6 and NGSolve with gcc 4.8). I just noticed that I am not passing CMAKE_CXX_COMPILER to the UMFPACK subproject; I will fix that.
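For reference, a consistent-toolchain configure call would look like this (a sketch only; the compiler paths are placeholders for your system):

cmake <path-to-ngsolve-src> -DCMAKE_C_COMPILER=/usr/bin/gcc-6 -DCMAKE_CXX_COMPILER=/usr/bin/g++-6 -DUSE_UMFPACK=ON

CMAKE_C_COMPILER and CMAKE_CXX_COMPILER are standard CMake variables, so they apply to the main build; until the fix above lands they may not be forwarded to the UMFPACK subproject, so making the matching compiler the default (e.g. first on PATH) is the safer route.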
Guosheng Fu wrote: OK. So I updated the library and finally have the MPI version installed.
Previously it was an MPI location issue: there are two MPI installations on the cluster, and the one located under the Python folder does not work properly. By the way, how can I specify the location of MPI in CMake? Something like -DMPI_ROOT=...?
Now I need to add a direct solver; I don't have any of UMFPACK/PARDISO/MUMPS.
1) I tried to install UMFPACK as a local installation on my laptop using
"-DCMAKE_PREFIX_PATH= ~/netgen/SuiteSparse -DUSE_UMFPACK=ON"
but got a lengthy error at the final linking stage:
../fem/libngfem.so: undefined reference to `std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::replace(unsigned long, unsigned long, char const*, unsigned long)@GLIBCXX_3.4.21'
../comp/libngcomp.so: undefined reference to `std::basic_ostream<char, std::char_traits<char> >& std::operator<< <char, std::char_traits<char>, std::allocator<char> >(std::basic_ostream<char, std::char_traits<char> >&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)@GLIBCXX_3.4.21'
../comp/libngcomp.so: undefined reference to `std::basic_ofstream<char, std::char_traits<char> >::basic_ofstream(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::_Ios_Openmode)@GLIBCXX_3.4.21'
libsolve.so: undefined reference to `VTT for std::__cxx11::basic_stringstream<char, std::char_traits<char>, std::allocator<char> >@GLIBCXX_3.4.21'
....
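Regarding the MPI-location question quoted above: one way to point CMake's FindMPI module at a specific installation is to set the compiler wrappers explicitly (a sketch; the wrapper paths are placeholders for the cluster's working MPI):

cmake <path-to-ngsolve-src> -DMPI_C_COMPILER=/opt/mpi/bin/mpicc -DMPI_CXX_COMPILER=/opt/mpi/bin/mpicxx

MPI_C_COMPILER and MPI_CXX_COMPILER are cache variables that FindMPI honors; alternatively, putting the desired mpicc/mpicxx first on PATH before configuring usually has the same effect.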
Guosheng Fu wrote: 2) A similar thing happens when I activate MUMPS with "-DUSE_MUMPS=ON".

Please give me the exact error message. Did you point to a prebuilt version of MUMPS? Otherwise it is built automatically with NGSolve (the recommended approach).
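For the automatic build, the configure call stays minimal (a sketch; -DUSE_MPI=ON is assumed here because MUMPS in NGSolve targets the MPI build):

cmake <path-to-ngsolve-src> -DUSE_MPI=ON -DUSE_MUMPS=ON

Since no MUMPS location is given, the build system compiles MUMPS itself with the same toolchain, which sidesteps the compiler-mismatch problems discussed above.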
Guosheng Fu wrote: 3) I tried to add PARDISO with MKL:
"-DUSE_MKL=ON
-DMKL_ROOT=/soft/intel/x86_64/12.1/8.273/composer_xe_2011_sp1.8.273/mkl/"
but got a compile error in NGSolve right at the beginning:
netgen/src/ngsolve/ngstd/taskmanager.cpp: In member function 'void ngstd::TaskManager::Loop(int)':
netgen/src/ngsolve/ngstd/taskmanager.cpp:345:32: error: 'mkl_set_num_threads_local' was not declared in this scope
    mkl_set_num_threads_local(1);

This function seems to be missing in your MKL version. Is there no newer version installed? (Yours is from 2011.) If not, you can simply comment out those two function calls; they only affect shared-memory parallelization and are irrelevant when running with MPI.
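If you would rather not delete the lines outright, a minimal sketch of the workaround guards the call behind MKL's version macro (assuming __INTEL_MKL__ from <mkl.h>; the ">= 11" threshold is a guess for when mkl_set_num_threads_local was introduced):

#include <mkl.h>

// Sketch only: restrict MKL to one thread inside a task.
// __INTEL_MKL__ holds the MKL major version; the 2011 release
// lacks mkl_set_num_threads_local, so the call is skipped there.
static void LimitMklThreadsLocally()
{
#if defined(__INTEL_MKL__) && __INTEL_MKL__ >= 11
    mkl_set_num_threads_local(1);
#endif
    // On old MKL this is a no-op, which is fine: the call only
    // affects shared-memory parallelization, not MPI runs.
}

Either way, the effect is the same as commenting the calls out on the old MKL.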
Guosheng Fu wrote: Finally, I have a convergence issue with the provided preconditioner. Running with
>> mpirun -np 5 ngspy mpi_poisson.py
(the bddc preconditioner)
I got the following diverging result:
assemble VOL element 6697/6697
assemble VOL element 6697/6697
create masterinverse
master: got data from 4
now build graph
n = 8507
now build matrix
have matrix, now invert
start order
order ........ 14952360 Bytes
task-based parallelization (C++11 threads) using 1 threads
factor SPD ........
0 1.00669
1 0.940628
2 0.533298
3 0.540046
4 1.33798
5 1.05662
But without MPI, the method converges in 12 iterations.
When I replace the preconditioner with type "local", there is no convergence difference between the MPI and non-MPI versions. Is this to be expected?
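For context, a serial sketch of the comparison being described, assuming the current NGSolve Python interface (mesh size and polynomial order are placeholders, not the actual settings of mpi_poisson.py):

from ngsolve import *
from netgen.geom2d import unit_square

mesh = Mesh(unit_square.GenerateMesh(maxh=0.1))
fes = H1(mesh, order=2, dirichlet=".*")
u, v = fes.TnT()

a = BilinearForm(fes)
a += grad(u) * grad(v) * dx
c = Preconditioner(a, "bddc")   # switch to "local" for the comparison
a.Assemble()

f = LinearForm(fes)
f += v * dx
f.Assemble()

gfu = GridFunction(fes)
# CG iteration with the chosen preconditioner; printrates shows the
# per-iteration errors like the ones listed above.
inv = CGSolver(a.mat, c.mat, printrates=True, maxsteps=200)
gfu.vec.data = inv * f.vec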
Thanks in advance,
Guosheng