/** \page CodeTut2 Coding Tutorial 2 - Using ghosted vectors

## Table of contents
- \ref CodeTut2Sec1
- \ref CodeTut2Sec2
- \ref CodeTut2Sec3
- \ref CodeTut2Sec4
- \ref CodeTut2Sec5
- \ref CodeTut2Sec6
\section CodeTut2Sec1 1 Ghost cells in ChiTech
For the discussions that follow, refer to the schematic in the figure below.
\image html CodingTutorials/GhostCells.png width=800px

The concept of ghost cells is quite important in ChiTech, as it is in any
scientific computing library that runs in parallel. Ghost cells arise because
a mesh often has to be partitioned among different processors in order to
effectively leverage the distributed-memory model of HPCs.

<b>Definition: In ChiTech a ghost cell is a cell that shares a face or a vertex with cells
partitioned to be local to a given processor</b>. We can observe these cells in the
schematic above. This definition requires us to store the first layer of cells beyond
the locally owned cells, as well as their accompanying vertices, so that we have the
complete geometrical information of those ghost cells. Because numerical schemes
often require us to map indices to ghost nodes/cells, this storage scheme is
very much necessary.

In this coding tutorial we will look at the basics of using the spatial
discretization methods (SD methods) to gain access to ghost data, which would
otherwise be difficult to obtain.
\section CodeTut2Sec2 2 Computing the gradient of a Finite Volume Scalar
In the Finite Volume SD scheme the gradient of a scalar quantity is often
required. We will use \ref CodeTut1 as a starting point and assume that we are
adding code to that tutorial's program.

Given that we want \f$ \boldsymbol{\nabla} \phi \f$,
how do we do this in ChiTech for the Finite Volume SD? Well, the first thing to do
is to note that, from Gauss's Divergence Theorem,
\f[
\int_V \boldsymbol{\nabla} \phi \, dV = \int_S \mathbf{n} \phi \, dA
\f]
Now, if the storage of \f$ \boldsymbol{\nabla} \phi \f$ also follows the Finite
Volume SD, then \f$ \boldsymbol{\nabla} \phi \f$ will be constant on cell \f$ P \f$,
therefore
\f[
\int_S \mathbf{n} \phi \, dA = V_P \biggl( \boldsymbol{\nabla} \phi \biggr)_P
\f]
from which we can discretize the surface integral to get
\f[
\biggl( \boldsymbol{\nabla} \phi \biggr)_P = \frac{1}{V_P}
\sum_f A_f \mathbf{n}_f \phi_f
\f]
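As a quick sanity check of this expression (a small aside, not part of the
original derivation), consider a uniform 1D cell of width \f$ h \f$ with unit
cross-section, and face values \f$ \phi_L \f$ and \f$ \phi_R \f$ on its left and
right faces. The face areas are unity, the outward normals are \f$ -\hat{x} \f$
and \f$ +\hat{x} \f$, and \f$ V_P = h \f$, so
\f[
\biggl( \boldsymbol{\nabla} \phi \biggr)_P = \frac{\phi_R - \phi_L}{h} \hat{x}
\f]
which is the familiar finite-difference-like gradient across the cell.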
Now our first problem is to find \f$ \phi_f \f$, considering additionally that
we might not have an equally spaced mesh. Consider the diagram below.

\image html CodingTutorials/SqueezedHexes.png width=150px

We can now define \f$ \phi_f \f$ as a geometrically weighted average of
\f$ \phi_N \f$ and \f$ \phi_P \f$ by using the face centroid, \f$ \mathbf{x}_f \f$.
Therefore,
\f[
\phi_f = \frac{1}{|| \mathbf{x}_N - \mathbf{x}_P ||}
\biggl(
(|| \mathbf{x}_N - \mathbf{x}_f ||) \phi_P +
(|| \mathbf{x}_f - \mathbf{x}_P ||) \phi_N
\biggr)
\f]

Here we can see that we would have a problem if \f$ \phi_N \f$ is not locally
available, i.e., it is a ghost value. Therefore we need a means to obtain this
value.
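As a quick numeric illustration of this weighting (values chosen purely for this
example), place the points on a line with \f$ \mathbf{x}_P = 0 \f$,
\f$ \mathbf{x}_f = 1 \f$ and \f$ \mathbf{x}_N = 3 \f$, and take
\f$ \phi_P = 2 \f$ and \f$ \phi_N = 5 \f$. Then
\f[
\phi_f = \frac{1}{3} \bigl( 2 \times 2 + 1 \times 5 \bigr) = 3
\f]
which lies closer to \f$ \phi_P \f$, as expected, since the face centroid is
closer to cell \f$ P \f$.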
\section CodeTut2Sec3 3 Making a ghosted vector and communicating ghosted data
Ghost value communication is handled via a `chi_math::VectorGhostCommunicator`
object. These objects require quite a bit of MPI communication during
initialization, but thereafter they can be re-used efficiently on different
vectors (as long as those vectors share the structure for which the
ghost-communicator was built).

In order to gain access to `chi_math::VectorGhostCommunicator` we first include
the header
\code
#include "math/VectorGhostCommunicator/vector_ghost_communicator.h"
\endcode
Next we add the following code:
\code
//============================================= Make ghosted vectors
std::vector<int64_t> ghost_ids = sdm.GetGhostDOFIndices(OneDofPerNode);

chi_math::VectorGhostCommunicator vgc(num_local_dofs,
                                      num_globl_dofs,
                                      ghost_ids,
                                      Chi::mpi.comm);
std::vector<double> field_wg = vgc.MakeGhostedVector(field);

vgc.CommunicateGhostEntries(field_wg);
\endcode
The first item on the agenda is deciding what ghost data to include in the local
vector. To obtain the relevant ghost DOF-ids we ask the SD to provide us with the
necessary information with a call to
`chi_math::SpatialDiscretization::GetGhostDOFIndices`.
Note this routine requires as a parameter a structure of unknowns, which is why
we supply the configuration `OneDofPerNode`.

Next we construct the `chi_math::VectorGhostCommunicator`. The constructor of
this object requires the number of local DOFs, the number of global DOFs, and
the global-ids of the ghosted data. Finally a communicator specification is
required. The construction of this object initializes data structures that
make repeated communications very efficient.

Given that we now have the ghost-communicator we can create a vector that
has enough space for the local data AND the ghost data. This is done with a call
to `chi_math::VectorGhostCommunicator::MakeGhostedVector`, which is overloaded
to accept a vector containing only local data. This allows us, in this case,
to make a ghosted vector, `field_wg`, with the same local entries as `field` but
with additional zeros representing the ghost values.
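As an optional sanity check (a minimal sketch, not part of the original
tutorial), the size of the ghosted vector should equal the local size plus the
number of ghost ids returned earlier:
\code
// Sketch of a size check: the ghosted vector stores the local entries followed
// by storage for the ghost entries.
const size_t expected_size = num_local_dofs + ghost_ids.size();
chi::log.Log() << "Ghosted vector size: " << field_wg.size()
               << " (expected " << expected_size << ")";
\endcode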
Finally we communicate the ghost entries of a given vector with a call to
`chi_math::VectorGhostCommunicator::CommunicateGhostEntries`. Next we will look
at how we are going to access the ghost values.
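As an aside, because the communicator's structure is independent of any
particular vector, the same `vgc` could in principle be re-used on another local
vector with the same DOF structure. The vector `other_field` below is
hypothetical, purely for illustration:
\code
// Hypothetical re-use of the same ghost-communicator on a second local vector
// that shares the same DOF structure as `field`.
std::vector<double> other_field(num_local_dofs, 0.0);
std::vector<double> other_field_wg = vgc.MakeGhostedVector(other_field);
vgc.CommunicateGhostEntries(other_field_wg);
\endcode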
127
128
129
130
131
132
\section CodeTut2Sec4 4 Accessing ghosted data
133
As a reminder, consider that a call to `chi_math::SpatialDiscretization::MapDOF`
134
provides the <b>global-id</b> of a DOF whereas
135
`chi_math::SpatialDiscretization::MapDOFLocal` provides the <b>local-id</b> of a DOF.
136
Ghost data is, by its nature, not normally a local quantity. Therefore, one would
137
assume that a call to `MapDOFLocal` would provide an invalid address for
138
ghost-DOFs, however, there is a catch. The id provided by `MapDOFLocal` maps to a
139
location aligned with the call to `GetGhostDOFIndices`. For example, consider there
140
are 100 local-DOFs, and 15 ghost-DOFs on a given processor. A call to
141
`GetGhostDOFIndices` would return 15 indices. Now lets say we want the local-id of
142
the 9-th ghost-DOF. In this case a call to `MapDOFLocal` will return 100+8, because
143
the 9-th ghost-DOF has an index of 8 in the list of ghost-ids.
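The short sketch below illustrates this mapping. It is hypothetical and not part
of the original tutorial; it assumes the Finite Volume SD with one DOF per cell,
so that each entry of `ghost_ids` corresponds to the global-id of a ghost cell:
\code
// Hypothetical illustration: the k-th ghost DOF should map to local index
// num_local_dofs + k, consistent with the ordering of GetGhostDOFIndices.
for (size_t k = 0; k < ghost_ids.size(); ++k)
{
  const auto& ghost_cell = grid.cells[ghost_ids[k]]; // ghost cells are stored locally
  const int64_t ghost_local_id = sdm.MapDOFLocal(ghost_cell, 0);
  // Expected: ghost_local_id == static_cast<int64_t>(num_local_dofs) + k
}
\endcode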
144
145
We will now use this concept in the computation of the gradient of \f$ \phi \f$.
146
147
First we create a vector that can store the x, y and z components of the gradient.
148
The size of this vector, as well as mappings into it, will be provided by the SD
149
and therefore we need to make a `chi_math::UnknownManager` to dictate the structure
150
of the unknowns. Therefore we do the following:
151
\code
152
chi_math::UnknownManager grad_uk_man(
153
{chi_math::Unknown(chi_math::UnknownType::VECTOR_3)});
154
\endcode
Next we get the size of the vector and instantiate it:
\code
const size_t num_grad_dofs = sdm.GetNumLocalDOFs(grad_uk_man);

std::vector<double> grad_phi(num_grad_dofs, 0.0);
\endcode
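As a quick check (a sketch, not in the original tutorial): the Finite Volume SD
has one node per cell, so with a `VECTOR_3` unknown `num_grad_dofs` should be
exactly three times the scalar `num_local_dofs`:
\code
// Sketch of a sanity check: one node per cell and three components.
chi::log.Log() << "Num grad DOFs: " << num_grad_dofs
               << " (expected " << 3 * num_local_dofs << ")";
\endcode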
Since we already have ghosted data we are now ready to compute the gradient on
a cell-by-cell basis. We do this with the following code:
\code
for (const auto& cell : grid.local_cells)
{
  const auto& cell_mapping = sdm.GetCellMapping(cell);

  const int64_t imap = sdm.MapDOFLocal(cell, 0);
  const double phi_P = field_wg[imap];

  const auto& xp = cell.centroid;

  auto grad_phi_P = chi_mesh::Vector3(0,0,0);

  size_t f=0;
  for (const auto& face : cell.faces)
  {
    const auto& xf = face.centroid;
    const auto Af = cell_mapping.FaceArea(f) * face.normal;

    double phi_N = 0.0;
    auto xn = xp + 2*(xf-xp);

    if (face.has_neighbor)
    {
      const auto& adj_cell = grid.cells[face.neighbor_id];
      const int64_t nmap = sdm.MapDOFLocal(adj_cell, 0);
      phi_N = field_wg[nmap];

      xn = adj_cell.centroid;
    }

    grad_phi_P += Af * ((xn-xf).Norm()*phi_P + (xf-xp).Norm()*phi_N)/
                  (xn-xp).Norm();
    ++f;
  }//for face
  grad_phi_P /= cell_mapping.CellVolume();

  const int64_t xmap = sdm.MapDOFLocal(cell, 0, grad_uk_man, 0, 0);
  const int64_t ymap = sdm.MapDOFLocal(cell, 0, grad_uk_man, 0, 1);
  const int64_t zmap = sdm.MapDOFLocal(cell, 0, grad_uk_man, 0, 2);

  grad_phi[xmap] = grad_phi_P.x;
  grad_phi[ymap] = grad_phi_P.y;
  grad_phi[zmap] = grad_phi_P.z;
}//for cell
\endcode
As per usual we loop over the local cells. Once we have a cell reference we
obtain a cell-mapping. We then immediately grab the present cell's \f$ \phi_P \f$
value by first mapping its address (reminder, the second parameter of `MapDOFLocal`
is the node-number, which is zero for the Finite Volume SD), then accessing the
ghosted field vector, `field_wg`.

We then initialize a three-component vector, initially zero, representing the
gradient for cell \f$ P \f$. Next we loop over the faces of the cell and obtain
the neighboring cell's \f$ \phi_N \f$ value (provided that `face.has_neighbor`
is true, otherwise defaulting to the boundary condition of zero) with the code
\code
const auto& adj_cell = grid.cells[face.neighbor_id];
const int64_t nmap = sdm.MapDOFLocal(adj_cell, 0);
phi_N = field_wg[nmap];
\endcode
Notice here the use of `MapDOFLocal`. This only works because `field_wg` has a
value at the index returned when `MapDOFLocal` is asked to map a ghosted DOF.
Finally we add each face's contribution to the gradient, using the face value
\f[
\phi_f = \frac{1}{|| \mathbf{x}_N - \mathbf{x}_P ||}
\biggl(
(|| \mathbf{x}_N - \mathbf{x}_f ||) \phi_P +
(|| \mathbf{x}_f - \mathbf{x}_P ||) \phi_N
\biggr)
\f]
with the code
\code
grad_phi_P += Af * ((xn-xf).Norm()*phi_P + (xf-xp).Norm()*phi_N)/
              (xn-xp).Norm();
\endcode
after which the sum is divided by the cell volume at the termination of the
faces-loop.

The final step for a cell is to store this gradient appropriately, therefore we
map each component of the gradient and set the `grad_phi` vector,
\code
const int64_t xmap = sdm.MapDOFLocal(cell, 0, grad_uk_man, 0, 0);
const int64_t ymap = sdm.MapDOFLocal(cell, 0, grad_uk_man, 0, 1);
const int64_t zmap = sdm.MapDOFLocal(cell, 0, grad_uk_man, 0, 2);

grad_phi[xmap] = grad_phi_P.x;
grad_phi[ymap] = grad_phi_P.y;
grad_phi[zmap] = grad_phi_P.z;
\endcode
\section CodeTut2Sec5 5 Visualizing the gradient
Visualizing the gradient is conceptually simple. We simply create a
field-function that will export all three components of the gradient together.
Then we use ParaView to make arrow glyphs for visualization.
\code
//============================================= Create Field Function
auto ff_grad = std::make_shared<chi_physics::FieldFunction>(
  "GradPhi",
  sdm_ptr,
  chi_math::Unknown(chi_math::UnknownType::VECTOR_3)
);

ff_grad->UpdateFieldVector(grad_phi);

ff_grad->ExportToVTK("CodeTut2_FV_grad");
\endcode

Within ParaView we can use the `Calculator` filter to construct vector quantities
from the three components, after which we can apply a `Glyph` filter to obtain a
visualization as shown below.

\image html CodingTutorials/Tut2Gradient.png width=700px
283
284
285
286
287
288
289
\section CodeTut2Sec6 The complete program
\code
#include "chi_runtime.h"
#include "chi_log.h"

#include "mesh/MeshHandler/chi_meshhandler.h"
#include "mesh/MeshContinuum/chi_meshcontinuum.h"

#include "math/SpatialDiscretization/FiniteVolume/fv.h"
#include "math/PETScUtils/petsc_utils.h"

#include "physics/FieldFunction/fieldfunction2.h"

#include "math/VectorGhostCommunicator/vector_ghost_communicator.h"

int main(int argc, char* argv[])
{
  chi::Initialize(argc,argv);
  chi::RunBatch(argc, argv);

  chi::log.Log() << "Coding Tutorial 2";

  //============================================= Get grid
  auto grid_ptr = chi_mesh::GetCurrentHandler().GetGrid();
  const auto& grid = *grid_ptr;

  chi::log.Log() << "Global num cells: " << grid.GetGlobalNumberOfCells();

  //============================================= Make SDM
  typedef std::shared_ptr<chi_math::SpatialDiscretization> SDMPtr;
  SDMPtr sdm_ptr = chi_math::SpatialDiscretization_FV::New(grid_ptr);
  const auto& sdm = *sdm_ptr;

  const auto& OneDofPerNode = sdm.UNITARY_UNKNOWN_MANAGER;

  const size_t num_local_dofs = sdm.GetNumLocalDOFs(OneDofPerNode);
  const size_t num_globl_dofs = sdm.GetNumGlobalDOFs(OneDofPerNode);

  chi::log.Log() << "Num local DOFs: " << num_local_dofs;
  chi::log.Log() << "Num globl DOFs: " << num_globl_dofs;

  //============================================= Initializes Mats and Vecs
  const auto n = static_cast<int64_t>(num_local_dofs);
  const auto N = static_cast<int64_t>(num_globl_dofs);
  Mat A;
  Vec x,b;

  A = chi_math::PETScUtils::CreateSquareMatrix(n,N);
  x = chi_math::PETScUtils::CreateVector(n,N);
  b = chi_math::PETScUtils::CreateVector(n,N);

  std::vector<int64_t> nodal_nnz_in_diag;
  std::vector<int64_t> nodal_nnz_off_diag;
  sdm.BuildSparsityPattern(nodal_nnz_in_diag,nodal_nnz_off_diag,OneDofPerNode);

  chi_math::PETScUtils::InitMatrixSparsity(A,
                                           nodal_nnz_in_diag,
                                           nodal_nnz_off_diag);

  //============================================= Assemble the system
  chi::log.Log() << "Assembling system: ";
  for (const auto& cell : grid.local_cells)
  {
    const auto& cell_mapping = sdm.GetCellMapping(cell);
    const int64_t imap = sdm.MapDOF(cell,0);

    const auto& xp = cell.centroid;
    const double V = cell_mapping.CellVolume();

    size_t f=0;
    for (const auto& face : cell.faces)
    {
      const auto Af = face.normal * cell_mapping.FaceArea(f);

      if (face.has_neighbor)
      {
        const auto& adj_cell = grid.cells[face.neighbor_id];
        const int64_t jnmap = sdm.MapDOF(adj_cell,0);

        const auto& xn = adj_cell.centroid;

        const auto xpn = xn - xp;

        const auto cf = Af.Dot(xpn) / xpn.NormSquare();

        MatSetValue(A, imap, imap , cf, ADD_VALUES);
        MatSetValue(A, imap, jnmap, -cf, ADD_VALUES);
      }
      else
      {
        const auto& xn = xp + 2.0*(face.centroid - xp);
        const auto xpn = xn - xp;

        const auto cf = Af.Dot(xpn) / xpn.NormSquare();

        MatSetValue(A, imap, imap , cf, ADD_VALUES);
      }
      ++f;
    }//for face

    VecSetValue(b, imap, 1.0*V, ADD_VALUES);
  }//for cell i

  chi::log.Log() << "Global assembly";

  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
  VecAssemblyBegin(b);
  VecAssemblyEnd(b);

  chi::log.Log() << "Done global assembly";

  //============================================= Create Krylov Solver
  chi::log.Log() << "Solving: ";
  auto petsc_solver =
    chi_math::PETScUtils::CreateCommonKrylovSolverSetup(
      A,               //Matrix
      "FVDiffSolver",  //Solver name
      KSPCG,           //Solver type
      PCGAMG,          //Preconditioner type
      1.0e-6,          //Relative residual tolerance
      1000);           //Max iterations

  //============================================= Solve
  KSPSolve(petsc_solver.ksp,b,x);

  chi::log.Log() << "Done solving";

  //============================================= Extract PETSc vector
  std::vector<double> field(num_local_dofs, 0.0);
  sdm.LocalizePETScVector(x,field,OneDofPerNode);

  //============================================= Clean up
  KSPDestroy(&petsc_solver.ksp);

  VecDestroy(&x);
  VecDestroy(&b);
  MatDestroy(&A);

  chi::log.Log() << "Done cleanup";

  //============================================= Create Field Function
  auto ff = std::make_shared<chi_physics::FieldFunction>(
    "Phi",
    sdm_ptr,
    chi_math::Unknown(chi_math::UnknownType::SCALAR)
  );

  ff->UpdateFieldVector(field);

  ff->ExportToVTK("CodeTut2_FV");

  //============================================= Make ghosted vectors
  std::vector<int64_t> ghost_ids = sdm.GetGhostDOFIndices(OneDofPerNode);

  chi_math::VectorGhostCommunicator vgc(num_local_dofs,
                                        num_globl_dofs,
                                        ghost_ids,
                                        Chi::mpi.comm);
  std::vector<double> field_wg = vgc.MakeGhostedVector(field);

  vgc.CommunicateGhostEntries(field_wg);

  //============================================= Setup gradient unknown structure
  chi_math::UnknownManager grad_uk_man(
    {chi_math::Unknown(chi_math::UnknownType::VECTOR_3)});

  const size_t num_grad_dofs = sdm.GetNumLocalDOFs(grad_uk_man);

  std::vector<double> grad_phi(num_grad_dofs, 0.0);

  for (const auto& cell : grid.local_cells)
  {
    const auto& cell_mapping = sdm.GetCellMapping(cell);

    const int64_t imap = sdm.MapDOFLocal(cell, 0);
    const double phi_P = field_wg[imap];

    const auto& xp = cell.centroid;

    auto grad_phi_P = chi_mesh::Vector3(0,0,0);

    size_t f=0;
    for (const auto& face : cell.faces)
    {
      const auto& xf = face.centroid;
      const auto Af = cell_mapping.FaceArea(f) * face.normal;

      double phi_N = 0.0;
      auto xn = xp + 2*(xf-xp);

      if (face.has_neighbor)
      {
        const auto& adj_cell = grid.cells[face.neighbor_id];
        const int64_t nmap = sdm.MapDOFLocal(adj_cell, 0);
        phi_N = field_wg[nmap];

        xn = adj_cell.centroid;
      }

      grad_phi_P += Af * ((xn-xf).Norm()*phi_P + (xf-xp).Norm()*phi_N)/
                    (xn-xp).Norm();
      ++f;
    }//for face
    grad_phi_P /= cell_mapping.CellVolume();

    const int64_t xmap = sdm.MapDOFLocal(cell, 0, grad_uk_man, 0, 0);
    const int64_t ymap = sdm.MapDOFLocal(cell, 0, grad_uk_man, 0, 1);
    const int64_t zmap = sdm.MapDOFLocal(cell, 0, grad_uk_man, 0, 2);

    grad_phi[xmap] = grad_phi_P.x;
    grad_phi[ymap] = grad_phi_P.y;
    grad_phi[zmap] = grad_phi_P.z;
  }//for cell

  //============================================= Create Field Function
  auto ff_grad = std::make_shared<chi_physics::FieldFunction>(
    "GradPhi",
    sdm_ptr,
    chi_math::Unknown(chi_math::UnknownType::VECTOR_3)
  );

  ff_grad->UpdateFieldVector(grad_phi);

  ff_grad->ExportToVTK("CodeTut2_FV_grad");

  chi::Finalize();
  return 0;
}
\endcode
*/