Only "mumps" and "masterinverse". Some more direct solvers are available through the PETSc interface.Also, a side question, what options are available for the inverse flag of bddc solver that is suitable for MPI?
I didn't see the expected speed-up when using more than 24 cores (my machine has 24 cores per node). Do you know what's going on with my code?

No idea. Are you measuring simply the time the entire job takes? Bigger jobs can take longer to start up.
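One way to separate solver scaling from job startup is to time only the assemble/solve region between barriers. A minimal sketch, assuming mpi4py; the solve itself is a placeholder:

```python
# Hedged sketch: measure only the region of interest, excluding startup,
# mesh distribution and file I/O, which do not scale with core count.
from mpi4py import MPI

comm = MPI.COMM_WORLD

comm.Barrier()                 # all ranks enter the timed region together
t0 = MPI.Wtime()

# ... assemble and solve here (placeholder) ...

comm.Barrier()                 # wait until the slowest rank has finished
t1 = MPI.Wtime()

if comm.rank == 0:
    print(f"timed region on {comm.size} ranks: {t1 - t0:.3f} s")
```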
One thing I noticed is that the PETSc ksp solver returns a "DISTRIBUTED" vector, and I need to manually convert it to a "CUMULATED" vector to make the code work.
What are you doing with the vector? Parallel NGSolve objects should accept a CUMULATED or DISTRIBUTED vector as input and do the conversion automatically if needed. But if you are doing something directly to the values you need to cumulate it.
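If the result vector's entries are accessed directly, a small helper along these lines can be used. This is a sketch under the assumption that the MPI-parallel BaseVector exposes a Cumulate() method (the conversion the poster describes); the guard makes it a no-op otherwise, and ksp_result in the usage note is a placeholder name.

```python
# Hedged sketch: make sure a parallel NGSolve vector is CUMULATED before its
# local entries are read or modified directly (e.g. via vec[i] or vec.FV()).
from ngsolve.la import BaseVector

def ensure_cumulated(vec: BaseVector) -> BaseVector:
    """Convert a DISTRIBUTED parallel vector to CUMULATED in place.

    Assumption: in MPI builds the parallel BaseVector exposes Cumulate(),
    which sums the rank-local contributions on shared dofs; on a purely
    local (serial) vector there is nothing to do.
    """
    if hasattr(vec, "Cumulate"):
        vec.Cumulate()
    return vec

# usage after a PETSc KSP solve (ksp_result is a placeholder name):
#   ensure_cumulated(ksp_result)
#   print(ksp_result.FV().NumPy()[:5])   # now safe to inspect local values directly
```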