# 2.2 Programming an eigenvalue solver¶

We solve the generalized eigenvalue problem

$A u = \lambda M u,$

where $$A$$ comes from the bilinear form $$\int \nabla u \cdot \nabla v$$, and $$M$$ from $$\int u v$$, on the space $$H_0^1$$.

This tutorial shows how to implement iterative linear algebra algorithms using NGSolve's vectors and matrices.

[1]:

import netgen.gui
from netgen.geom2d import unit_square
from ngsolve import *
import math
import scipy.linalg
from numpy import random

mesh = Mesh(unit_square.GenerateMesh(maxh=0.1))


We setup a stiffness matrix $$A$$ and a mass matrix $$M$$, and declare a preconditioner for $$A$$:

[2]:

fes = H1(mesh, order=4, dirichlet=".*")
u = fes.TrialFunction()
v = fes.TestFunction()

a = BilinearForm(fes)
a += grad(u)*grad(v)*dx
pre = Preconditioner(a, "multigrid")

m = BilinearForm(fes)
m += u*v*dx

a.Assemble()
m.Assemble()

u = GridFunction(fes)


The inverse iteration is

$u_{n+1} = A^{-1} M u_n,$

where the Rayleigh quotient

$\rho_n = \frac{\left \langle A u_n, u_n\right \rangle}{\left \langle M u_n, u_n\right \rangle}$

converges to the smallest eigenvalue $$\lambda_1$$, with rate of convergence $$\lambda_1 / \lambda_2,$$ where $$\lambda_2$$ is the next smallest eigenvalue.
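The iteration and the convergence of the Rayleigh quotient can be sketched in plain numpy on a small generalized eigenvalue problem. The diagonal matrices below are made up purely for illustration (they are not the FEM matrices of this tutorial):

```python
import numpy as np
import scipy.linalg

# Toy SPD generalized eigenproblem A u = lam M u (illustrative matrices)
n = 20
A = np.diag(np.arange(1.0, n + 1))        # eigen-structure well separated
M = np.diag(np.linspace(1.0, 2.0, n))     # "mass" matrix

rng = np.random.default_rng(0)
u = rng.standard_normal(n)
for it in range(50):
    u = np.linalg.solve(A, M @ u)         # u_{n+1} = A^{-1} M u_n
    u /= np.linalg.norm(u)                # normalize to avoid overflow
    rho = (u @ A @ u) / (u @ M @ u)       # Rayleigh quotient

lam_min = scipy.linalg.eigh(A, M, eigvals_only=True)[0]
print(rho)  # converges to lam_min, here the ratio A[0,0]/M[0,0] = 1.0
```

In an actual solver one would of course not factorize $$A$$ exactly; that is precisely the point of replacing $$A^{-1}$$ by a preconditioner below.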

The preconditioned inverse iteration (PINVIT), see [Knyazev+Neymeyr], replaces $$A^{-1}$$ by an approximate inverse $$C^{-1}$$:

$\begin{split}\rho_n = \frac{\left \langle A u_n, u_n\right \rangle}{\left \langle M u_n, u_n\right \rangle} \\ w_n = C^{-1} (A u_n - \rho_n M u_n) \\ u_{n+1} = u_n + \alpha w_n\end{split}$

The optimal step-size $$\alpha$$ is found by minimizing the Rayleigh-quotient on a two-dimensional space:

$u_{n+1} = \operatorname{arg} \min_{v \in \operatorname{span} \{ u_n, w_n\}} \frac{\left \langle A v, v\right \rangle}{\left \langle M v, v\right \rangle}$

This minimization problem can be solved by a small eigenvalue problem

$a y = \lambda m y$

with matrices

$\begin{split}a = \left( \begin{array}{cc} \left \langle A u_n, u_n \right \rangle & \left \langle A u_n, w_n \right \rangle \\ \left \langle A w_n, u_n \right \rangle & \left \langle A w_n, w_n \right \rangle \end{array} \right), \quad m = \left( \begin{array}{cc} \left \langle M u_n, u_n \right \rangle & \left \langle M u_n, w_n \right \rangle \\ \left \langle M w_n, u_n \right \rangle & \left \langle M w_n, w_n \right \rangle \end{array} \right).\end{split}$

Then, the new iterate is

$u_{n+1} = y_1 u_n + y_2 w_n$

where $$y$$ is the eigenvector corresponding to the smaller eigenvalue.
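This 2×2 step can be tried directly with scipy.linalg.eigh. The entries of the small matrices below are made-up numbers for illustration, not values from an actual FEM run:

```python
import numpy as np
import scipy.linalg

# Small generalized eigenproblem a y = lam m y (illustrative SPD entries)
a_small = np.array([[4.0, 1.0],
                    [1.0, 3.0]])
m_small = np.array([[2.0, 0.5],
                    [0.5, 1.0]])

lam, y = scipy.linalg.eigh(a=a_small, b=m_small)  # eigenvalues ascending
y1, y2 = y[0, 0], y[1, 0]   # eigenvector of the smaller eigenvalue
# the new iterate would then be  u_{n+1} = y1 * u_n + y2 * w_n
print(lam)  # the smaller eigenvalue lam[0] comes first
```

scipy.linalg.eigh returns the eigenvalues in ascending order, so the first column of the eigenvector matrix is the one used for the update.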

## Implementation in NGSolve¶

First, we create some helper vectors. CreateVector creates a new vector of the same format as the existing one, i.e., the same dimension, the same real/complex type, the same entry-size, and the same MPI-parallel distribution, if any.

[3]:

r = u.vec.CreateVector()
w = u.vec.CreateVector()
Mu = u.vec.CreateVector()
Au = u.vec.CreateVector()
Mw = u.vec.CreateVector()
Aw = u.vec.CreateVector()


Next, we pick a random initial vector, which is zeroed on the Dirichlet boundary.

Below, the FV method (short for FlatVector) lets us access the abstract vector’s linear memory chunk, which in turn provides a “numpy” view of the memory. The projector clears the entries at the Dirichlet boundary:

[4]:

r.FV().NumPy()[:] = random.rand(fes.ndof)
u.vec.data = Projector(fes.FreeDofs(), True) * r


Finally, we run the PINVIT algorithm. Note that the small matrices $$a$$ and $$m$$ defined above are called asmall and msmall below. They are of type Matrix, a class provided by NGSolve for dense matrices.

[5]:

for i in range(20):
    Au.data = a.mat * u.vec
    Mu.data = m.mat * u.vec
    auu = InnerProduct(Au, u.vec)
    muu = InnerProduct(Mu, u.vec)
    # Rayleigh quotient
    lam = auu/muu
    print (lam / (math.pi**2))
    # residual
    r.data = Au - lam * Mu
    w.data = pre.mat * r.data
    w.data = 1/Norm(w) * w
    Aw.data = a.mat * w
    Mw.data = m.mat * w

    # setup and solve 2x2 small eigenvalue problem
    asmall = Matrix(2,2)
    asmall[0,0] = auu
    asmall[0,1] = asmall[1,0] = InnerProduct(Au, w)
    asmall[1,1] = InnerProduct(Aw, w)
    msmall = Matrix(2,2)
    msmall[0,0] = muu
    msmall[0,1] = msmall[1,0] = InnerProduct(Mu, w)
    msmall[1,1] = InnerProduct(Mw, w)
    # print ("asmall =", asmall, ", msmall = ", msmall)

    eval,evec = scipy.linalg.eigh(a=asmall, b=msmall)
    # print (eval, evec)
    u.vec.data = float(evec[0,0]) * u.vec + float(evec[1,0]) * w

Draw (u)

22.90966430220296
2.0703597199439043
2.001629784060489
2.000248910691088
2.0000580511268775
2.0000151392070036
2.0000040113490787
2.0000010738975402
2.0000002892729336
2.000000078307465
2.000000021290389
2.0000000058280283
2.0000000016226536
2.0000000004764638
2.000000000163472
2.0000000000778746
2.0000000000544307
2.0000000000480056
2.0000000000462412
2.000000000045757


### Simultaneous iteration for several eigenvalues¶

Here are the steps for extending the above to num vectors.

Declare a GridFunction with multiple components to store several eigenvectors:

[6]:

num = 5
u = GridFunction(fes, multidim=num)


Create a list of helper vectors, and a set of random initial vectors in u, with zero boundary conditions:

[7]:

r = u.vec.CreateVector()
Av = u.vec.CreateVector()
Mv = u.vec.CreateVector()

vecs = []
for i in range(2*num):
    vecs.append (u.vec.CreateVector())

for v in u.vecs:
    r.FV().NumPy()[:] = random.rand(fes.ndof)
    v.data = Projector(fes.FreeDofs(), True) * r


Compute num residuals, and solve a small eigenvalue problem on a 2 $$\times$$ num dimensional space:

[8]:

asmall = Matrix(2*num, 2*num)
msmall = Matrix(2*num, 2*num)
lams = num * [1]

for i in range(20):

    for j in range(num):
        vecs[j].data = u.vecs[j]
        r.data = a.mat * vecs[j] - lams[j] * m.mat * vecs[j]
        vecs[num+j].data = pre.mat * r

    for j in range(2*num):
        Av.data = a.mat * vecs[j]
        Mv.data = m.mat * vecs[j]
        for k in range(2*num):
            asmall[j,k] = InnerProduct(Av, vecs[k])
            msmall[j,k] = InnerProduct(Mv, vecs[k])

    ev,evec = scipy.linalg.eigh(a=asmall, b=msmall)
    lams[:] = ev[0:num]
    print (i, ":", [lam/math.pi**2 for lam in lams])

    for j in range(num):
        u.vecs[j][:] = 0.0
        for k in range(2*num):
            u.vecs[j].data += float(evec[k,j]) * vecs[k]

Draw (u)

0 : [11.839628012103589, 66.90704077622333, 83.23602218500713, 96.16783419040934, 113.19910433421691]
1 : [2.062106671547611, 10.252918770778471, 12.234868622400972, 19.16037938161243, 27.490259944822544]
2 : [2.001455543075888, 5.386524364181839, 5.56108827424483, 10.442539176041299, 13.067462676289631]
3 : [2.0000815269899657, 5.020909036999217, 5.050088997020041, 9.798284039621885, 10.152372454173927]
4 : [2.000014400066359, 5.001641836531211, 5.008155007978695, 9.444568060589997, 10.022530029444464]
5 : [2.000003188391392, 5.000234753763163, 5.0014261130965805, 9.005843080180533, 10.006058910619629]
6 : [2.00000073931688, 5.000051079389556, 5.000236979847725, 8.594599099526139, 10.001972290852708]
7 : [2.000000177140659, 5.00001236207398, 5.000036573981745, 8.308828901118687, 10.000723229892294]
8 : [2.0000000428250084, 5.000003047336144, 5.0000056023772, 8.14913662131946, 10.000274956900094]
9 : [2.0000000105306808, 5.000000714618564, 5.000000958266693, 8.069278102374625, 10.000106590430518]
10 : [2.0000000026260345, 5.0000001414399255, 5.00000022135555, 8.031637385151042, 10.000041623607716]
11 : [2.0000000006854304, 5.000000031401631, 5.00000005978714, 8.01433099296039, 10.000016368424955]
12 : [2.000000000204645, 5.000000010754387, 5.000000019340596, 8.006471729809725, 10.00000650426417]
13 : [2.0000000000853038, 5.000000006579186, 5.00000000899958, 8.002918618826948, 10.000002644500203]
14 : [2.000000000055517, 5.000000005630956, 5.000000006377929, 8.001315899803812, 10.000001132168126]
15 : [2.0000000000480695, 5.0000000053296665, 5.000000005784706, 8.000593282692382, 10.000000539177078]
16 : [2.0000000000462017, 5.000000005223161, 5.000000005663905, 8.000267556904207, 10.000000306528559]
17 : [2.0000000000457323, 5.000000005192912, 5.00000000563656, 8.000120698747716, 10.000000215215174]
18 : [2.000000000045614, 5.000000005184557, 5.000000005629427, 8.000054478014915, 10.000000179364676]
19 : [2.000000000045583, 5.000000005182548, 5.000000005627899, 8.000024609227019, 10.000000165284584]


The multidim-component selector in the Visualization dialog box lets you switch between the components of the multidim GridFunction.

### Implementation using MultiVector¶

The simultaneous iteration can be optimized by using MultiVectors introduced in NGSolve 6.2.2007. These are arrays of vectors of the same format. You can think of a MultiVector with m components of vector size n as an $$n \times m$$ matrix.

• a MultiVector consisting of num vectors of the same format as an existing vector vec is created via MultiVector(vec, num).

• we can iterate over the components of a MultiVector, and the bracket operator allows access to a subset of the vectors

• linear operator application is optimized for MultiVector

• vector operations are optimized and called as mv * densematrix: $$x = y * mat$$ results in x[i] = sum_j y[j] * mat[j,i] (where x and y are MultiVectors, and mat is a dense matrix)

• pair-wise inner products of two MultiVectors are available; the result is a dense matrix: InnerProduct(x,y)[i,j] = InnerProduct(x[i], y[j])

• mv.Orthogonalize() uses modified Gram-Schmidt to orthogonalize the vectors. Optionally, a matrix defining the inner product can be provided.

• with mv.Append(vec) we can add another vector to the array of vectors. A new vector is created, and the values are copied.

• mv.AppendOrthogonalize(vec) appends a new vector and orthogonalizes it against the existing vectors, which are assumed to be orthogonal.
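The orthogonalization step can be sketched in plain numpy, treating a MultiVector with m components of size n as an $$n \times m$$ matrix. This is only an illustration of modified Gram-Schmidt with an optional inner-product matrix; the NGSolve internals may differ:

```python
import numpy as np

def orthogonalize(V, M=None):
    """Modified Gram-Schmidt on the columns of V, optionally in the
    inner product <x, y>_M = x^T M y (illustrative sketch)."""
    V = V.copy()
    n, m = V.shape
    if M is None:
        M = np.eye(n)
    for j in range(m):
        for k in range(j):
            # subtract the projection onto the already-orthonormal column k
            V[:, j] -= (V[:, k] @ (M @ V[:, j])) * V[:, k]
        V[:, j] /= np.sqrt(V[:, j] @ (M @ V[:, j]))   # normalize
    return V

rng = np.random.default_rng(1)
W = rng.standard_normal((6, 3))

Q = orthogonalize(W)
print(np.allclose(Q.T @ Q, np.eye(3)))            # True

Mmat = np.diag(np.arange(1.0, 7.0))               # SPD "mass" matrix
QM = orthogonalize(W, M=Mmat)
print(np.allclose(QM.T @ Mmat @ QM, np.eye(3)))   # True
```

Orthogonalizing in the M-inner product is what makes the small projected mass matrix close to the identity, which stabilizes the small eigenvalue solve.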

[9]:

uvecs = MultiVector(u.vec, num)
vecs = MultiVector(u.vec, 2*num)

for v in vecs[0:num]:
    v.SetRandom()
uvecs[:] = pre * vecs[0:num]
lams = Vector(num * [1])

[10]:

for i in range(20):
    vecs[0:num] = a.mat * uvecs - (m.mat * uvecs).Scale (lams)
    vecs[num:2*num] = pre * vecs[0:num]
    vecs[0:num] = uvecs

    vecs.Orthogonalize()    # optionally: vecs.Orthogonalize(m.mat)

    asmall = InnerProduct (vecs, a.mat * vecs)
    msmall = InnerProduct (vecs, m.mat * vecs)

    ev,evec = scipy.linalg.eigh(a=asmall, b=msmall)
    lams = Vector(ev[0:num])
    print (i, ":", [l/math.pi**2 for l in lams])

    uvecs[:] = vecs * Matrix(evec[:,0:num])

0 : [404.68056942371976, 1060.470692600797, 1068.6095517702158, 1145.4078742910644, 1156.661004074373]
1 : [2.0134975652880094, 9.629586012146294, 11.743085663541715, 19.611719804011088, 30.836558101834626]
2 : [2.000234806467672, 5.208490497916943, 5.438018552796777, 10.164006528324329, 10.925396421087177]
3 : [2.0000260396243807, 5.015818008551176, 5.021895371867914, 8.67377744658818, 10.128432056689821]
4 : [2.0000055831322388, 5.0010524209916, 5.002575318137579, 8.265838617010955, 10.02358049660661]
5 : [2.000001354657949, 5.00013281071413, 5.000337995873408, 8.11418566635396, 10.005349940417481]
6 : [2.000000343571751, 5.00002527729114, 5.000050100052231, 8.049259771328536, 10.00151765756906]
7 : [2.000000089114571, 5.000005078122855, 5.000008948494983, 8.021247210863253, 10.000506672534723]
8 : [2.000000023433106, 5.000001045724599, 5.00000188169463, 8.00915619602276, 10.000187310003113]
9 : [2.000000006241454, 5.000000229419529, 5.000000434942534, 8.003953491699354, 10.00007208964987]
10 : [2.000000001696028, 5.000000056071374, 5.000000107886655, 8.001707726094567, 10.000028296976874]
11 : [2.000000000487065, 5.000000017218019, 5.000000030459089, 8.000739031380261, 10.0000112509173]
12 : [2.000000000164004, 5.0000000081464995, 5.000000011692094, 8.00032004994377, 10.000004552646866]
13 : [2.00000000007741, 5.000000005950215, 5.000000007096612, 8.000138821777048, 10.000001907599977]
14 : [2.0000000000541447, 5.000000005385244, 5.000000005983285, 8.000060265616737, 10.00000086002437]
15 : [2.0000000000478835, 5.000000005233976, 5.000000005715913, 8.00002620917602, 10.000000444376552]
16 : [2.000000000046196, 5.000000005195062, 5.0000000056497305, 8.000011420339161, 10.000000279210257]
17 : [2.000000000045742, 5.000000005185314, 5.000000005633003, 8.000004997540197, 10.000000213512722]
18 : [2.00000000004562, 5.000000005182845, 5.000000005628533, 8.000002204481186, 10.000000187354377]
19 : [2.000000000045586, 5.00000000518224, 5.000000005627427, 8.000000989752058, 10.000000176932463]


The operations are implemented using late evaluation: the operators return expression objects, and the actual computation happens within the assignment operator. The advantage is that no temporary vectors need to be dynamically allocated. An exception is InnerProduct, which allows an expression in the second argument (and then needs a vector allocation in every call).
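The late-evaluation idea can be sketched in plain Python. The classes MatVecExpr and Vec below are made up for illustration and are not NGSolve API; they only show how an operator can return an unevaluated expression whose work happens at assignment time, writing directly into the target storage:

```python
import numpy as np

class MatVecExpr:
    """Unevaluated product mat * vec (hypothetical sketch);
    building the expression allocates no result vector."""
    def __init__(self, mat, vec):
        self.mat, self.vec = mat, vec
    def evaluate_into(self, out):
        # compute directly into the target's memory, no temporary
        np.dot(self.mat, self.vec, out=out)

class Vec:
    """Toy vector; assign() plays the role of 'x.data = expr'."""
    def __init__(self, data):
        self.data = np.asarray(data, dtype=float)
    def assign(self, expr):
        expr.evaluate_into(self.data)   # evaluation happens here

mat = np.array([[2.0, 0.0],
                [0.0, 3.0]])
x, y = Vec([1.0, 1.0]), Vec([0.0, 0.0])
y.assign(MatVecExpr(mat, x.data))       # work is done only now
print(y.data)  # [2. 3.]
```

In NGSolve the same pattern is realized with C++ expression templates, so chained vector operations fuse into a single loop over the target vector.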
