Chi-Tech
CodeTut6.h
/** \page CodeTut6 Coding Tutorial 6 - Transport using WDD

## Table of contents
- \ref CodeTut6Sec1
- \ref CodeTut6Sec2
- \ref CodeTut6Sec2_1
- \ref CodeTut6Sec2_2
- \ref CodeTut6Sec3
- \ref CodeTut6Sec4
- \ref CodeTut6Sec5
- \ref CodeTut6Sec6
- \ref CodeTut6Sec7
- \ref CodeTut6Sec8
- \ref CodeTut6Sec9
- \ref CodeTut6SecX

\section CodeTut6Sec1 1 The multigroup Discrete Ordinates Equations
In this tutorial we look at the multigroup discrete ordinates (DO) equations,
otherwise known as the \f$ S_N \f$ equations. For a complete tutorial on how
these equations are derived, please consult
<a href="https://github.com/doctor-janv/whitepapers/blob/main/ChiTech-LBS-TheoryManual/ChiTech-LBS-TheoryManual_v_1_13.pdf">this whitepaper</a>.

We start with the equations themselves
\f[
\biggr(\boldsymbol{\Omega}_n \boldsymbol{\cdot}
\boldsymbol{\nabla} +\sigma_{tg} (\mathbf{x})\biggr)
\psi_{gn} (\mathbf{x})=
\sum_{m=0}^{N_m - 1}
\frac{2\ell+1}{4\pi}Y_{\ell m^*}(\boldsymbol{\Omega}_n)
\sum_{g'=0}^{N_G-1} \biggr[
\sigma_{s\ell,g'{\to}g} (\mathbf{x})
\ \phi_{mg'}(\mathbf{x})
\biggr]
+ q_{g,n} (\mathbf{x})
\quad \quad \quad \mathbf{x}\in\mathcal{D}
\f]
\f[
\psi_{gn}(\mathbf{x}) =0 \quad \quad \quad \mathbf{x}\in\partial\mathcal{D}
\f]
where the variables have the following meaning
- \f$ \boldsymbol{\Omega}_n \f$ is the \f$ n \f$-th direction vector
- \f$ \sigma_{tg} \f$ is the total interaction cross section for group \f$ g \f$
- \f$ \psi_{gn} \f$ is the angular flux for group \f$ g \f$ and direction \f$ n \f$
- \f$ Y_{\ell m^*} \f$ is the tesseral Spherical Harmonic function where \f$ (\ell, m^*) \f$
is mapped from \f$ m \f$
- \f$ \sigma_{s\ell,g'{\to}g} \f$ is the scattering cross section for moment
\f$ \ell \f$ scattering from group \f$ g' \f$ to \f$ g \f$
- \f$ \phi_{mg} \f$ is the flux-moment for moment \f$ m \f$ group \f$ g \f$, computed
from the angular quadrature
\f{align*}{
\phi_{gm}(\mathbf{x}) &= \int_{4\pi} \psi_g(\mathbf{x},\boldsymbol{\Omega})
Y_{\ell m}(\boldsymbol{\Omega}) d\boldsymbol{\Omega}\\
&=\sum_{n=0}^{N_D-1} w_n \ \psi_{gn}(\mathbf{x}) Y_{\ell m^*}(\boldsymbol{\Omega}_n)
\f}
- \f$ q_{g,n} \f$ is the angular material source for group \f$ g \f$ direction
\f$ n \f$, normally defined as an isotropic source \f$ \frac{1}{4\pi} q_{g} \f$

The "angular quadrature" mentioned here carries considerable history. When the
DO-method was still in its infancy, most simulations were still one dimensional. In such
cases the angular flux has no azimuthal dependence and the angular integrals
reduce to Gauss-Legendre quadratures. Because the DO-method generally avoids
placing a quadrature point on the \f$ \mu=0 \f$ plane, quadratures with even
numbers of quadrature points were generally preferred.
Because of this, people in the transport community
typically referred to the DO-method as the \f$ S_N \f$ method, with \f$ S_2 \f$
being the first even quadrature used, followed by \f$ S_4,\ S_8,\ \f$ etc.

In two and three dimensions, where one generally has dependence on both the
azimuthal and polar directions (although 2D is symmetric in the polar angle), the
nomenclature could be extended with the use of the so-called triangular
quadratures. These types of quadratures place an ever-decreasing number of polar
angles starting from the equator of the unit sphere towards the poles.
Incidentally, if the number of polar angles per octant is \f$ N/2 \f$ then the
azimuthal "level" closest to the equator will start with \f$ N/2 \f$
azimuthal angles. For example, an \f$ S_8 \f$ angular quadrature in 3D has 4 polar
levels per octant, therefore the azimuthal angles would start at 4, then 3, then
2, then 1 at the polar level closest to the pole.

The use of triangular quadratures is considered quite antiquated. Some
of the reasons for this include the following:
- When more resolution in the azimuthal domain is required, a higher \f$ S_N \f$
order will unfortunately be accompanied by more polar angles.
- In 2D, the polar dependence can accurately be captured with just a few polar angles,
however, the azimuthal dependence may still be unresolved. Product quadratures
are then best suited to place more azimuthal directions in the quadrature.
- Generally, 3D simulations (e.g. reactor simulations) depend more strongly on the
azimuthal directions than on the polar directions.

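To make the triangular layout concrete, the direction count can be worked out directly. The following is a small standalone sketch (plain C++, not part of the ChiTech API; the function names are ours, for illustration only) that counts the directions of a 3D triangular \f$ S_N \f$ quadrature:

```cpp
#include <cassert>

// Number of directions in one octant of a 3D triangular S_N quadrature:
// N/2 polar levels, carrying N/2, N/2-1, ..., 1 azimuthal angles per level.
int TriangularOctantDirCount(int N)
{
  const int polar_levels = N / 2;
  int count = 0;
  for (int p = polar_levels; p >= 1; --p) // p azimuthal angles on level p
    count += p;
  return count;
}

// Total over all 8 octants; e.g. S_8 gives 10 per octant, 80 = N(N+2) total.
int TriangularTotalDirCount(int N) { return 8 * TriangularOctantDirCount(N); }
```

For \f$ S_8 \f$ this reproduces the 4+3+2+1 = 10 directions per octant described above, i.e. \f$ N(N+2) = 80 \f$ directions in total.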
In ChiTech we have a wide array of angular quadratures to choose from.

For this tutorial we will be looking at the following test problem
\image html "CodingTutorials/Tut6_problem.png" width=500px

The domain spans from -1 to +1 in both dimensions. We define a source in the
sub-domain -0.5 to +0.5 in both dimensions. The source strength is 1 in group 0
only.

\section CodeTut6Sec2 2 The Diamond Difference Spatial Discretization
The Diamond Difference (DD) spatial discretization is a precursor to the
Weighted Diamond Difference (WDD) spatial discretization. Fundamentally, it is
a Finite Volume based spatial discretization.

\subsection CodeTut6Sec2_1 2.1 The right hand side
Since we are dealing with a FV discretization, the values of cross sections,
fluxes and sources are cell-constant. Therefore, the RHS of the multigroup DO
equations becomes
\f[
q_{c,g}^{rhs} =
\sum_{m=0}^{N_m - 1}
\frac{2\ell+1}{4\pi}Y_{\ell m^*}(\boldsymbol{\Omega}_n)
\sum_{g'=0}^{N_G-1} \biggr[
\sigma_{s\ell,g'{\to}g,c}
\ \phi_{mg',c}
\biggr]
+ \frac{1}{4\pi} q_{g,c}.
\f]
This representation is, however, not very practical and instead we choose to write
it as
\f[
q_{c,g}^{rhs} = \sum_{m=0}^{N_m-1} M_{mn} q_{c,g}^{moms,m}
\f]
where \f$ M_{mn} \f$ is called the Moment-To-Discrete operator (on a nodal
level) and defined as
\f[
M_{mn} = \frac{2\ell+1}{4\pi}Y_{\ell m^*}(\boldsymbol{\Omega}_n)
\f]
noting that \f$ (\ell,m^*) \f$ is mapped from \f$ m \f$. This operator can be
precomputed once the angular quadrature is known. The other quantity introduced
here, \f$ q_{c,g}^{moms} \f$, is called the source moments for cell \f$ c \f$
group \f$ g \f$ and is defined by
\f[
q_{c,g}^{moms,m} =
\begin{cases}
\sum_{g'=0}^{N_G-1} \biggr[
\sigma_{s\ell,g'{\to}g,c}
\ \phi_{mg',c}
\biggr] + q_{c,g} \quad \quad &m=0,\\
\sum_{g'=0}^{N_G-1} \biggr[
\sigma_{s\ell,g'{\to}g,c}
\ \phi_{mg',c}
\biggr] \quad \quad &m\ne 0
\end{cases}
\f]

In the iterative schemes that will be introduced later, we compute and store a
vector of source moments for each iteration.

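The precomputation of \f$ M_{mn} \f$ is easiest to see in 1D, where the spherical harmonics reduce to Legendre polynomials \f$ P_\ell(\mu) \f$ and \f$ m \f$ maps directly to \f$ \ell \f$. Below is a minimal standalone sketch (plain C++, not the ChiTech implementation; the function names are ours) of building such a nodal Moment-To-Discrete operator:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Legendre polynomial P_l(mu) via Bonnet's recursion:
// (l+1) P_{l+1} = (2l+1) mu P_l - l P_{l-1}
double Legendre(int ell, double mu)
{
  if (ell == 0) return 1.0;
  if (ell == 1) return mu;
  double Pm2 = 1.0, Pm1 = mu, P = 0.0;
  for (int l = 2; l <= ell; ++l)
  {
    P = ((2*l - 1)*mu*Pm1 - (l - 1)*Pm2) / l;
    Pm2 = Pm1; Pm1 = P;
  }
  return P;
}

// 1D Moment-To-Discrete operator: M[m][n] = (2l+1)/(4*pi) * P_l(mu_n),
// with m -> l directly in 1D.
std::vector<std::vector<double>>
BuildM2D_1D(int num_moments, const std::vector<double>& mu_n)
{
  std::vector<std::vector<double>> M(num_moments,
                                     std::vector<double>(mu_n.size(), 0.0));
  for (int m = 0; m < num_moments; ++m)
    for (size_t n = 0; n < mu_n.size(); ++n)
      M[m][n] = (2.0*m + 1.0)/(4.0*M_PI) * Legendre(m, mu_n[n]);
  return M;
}
```

For \f$ m=0 \f$ every entry is simply \f$ 1/4\pi \f$, which is the isotropic distribution used for the fixed source.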
\subsection CodeTut6Sec2_2 2.2 The left hand side
With the simplified definition of the RHS we now need to address the
discretization of the divergence term in the DO equations.

The DD approximation uses the convention shown in the figure below
\image html "CodingTutorials/Tut6_orthoijk.png" width=700px

With this convention we've introduced nodes on the interstitial faces between
cells. Suppressing, for the moment, the direction and group indices, we can express
the divergence term as
\f[
\boldsymbol{\Omega} \boldsymbol{\cdot} \boldsymbol{\nabla} \psi =
\Omega_x \frac{\partial \psi}{\partial x} +
\Omega_y \frac{\partial \psi}{\partial y} +
\Omega_z \frac{\partial \psi}{\partial z}.
\f]
We then introduce the approximation, e.g., in the x-dimension
\f[
\Omega_x \frac{\partial \psi}{\partial x} =
\frac{\Omega_x}{\Delta x} (\psi_{i+\frac{1}{2}} - \psi_{i-\frac{1}{2}})
\f]
which, if done for all the dimensions, introduces up to 6 new variables into the
equation. Fortunately, since we can apply upwinding, we can eliminate half of
these with the closure relation
\f[
\psi_i = \frac{1}{2}(\psi_{i+\frac{1}{2}} + \psi_{i-\frac{1}{2}}),
\f]
obviously extended to all dimensions. To comprehend this upwinding, consider
the schematic below, showing the upwinding for different configurations of
\f$ \boldsymbol{\Omega}_n \f$.

\image html "CodingTutorials/Tut6_upwinding.png" width=500px

With upwinding defined we can re-express the upwinded angular flux as
\f$ \psi_{us,x} \f$ being either \f$ \psi_{i+\frac{1}{2}} \f$ or
\f$ \psi_{i-\frac{1}{2}} \f$ depending on the sign of \f$ \Omega_x \f$. The
\f$ \psi_{us,x} \f$ of the current cell would then be the downstream flux
\f$ \psi_{ds,x} \f$ of the adjacent cell at the interstitial face. With some
manipulation we arrive at the following equations to solve the current cell's
\f$ \psi \f$:

\image html "CodingTutorials/Tut6_123D.png" width=500px

as well as the following equations to obtain its downstream angular flux:
\f{align*}{
\psi_{ds,x} &= 2\psi - \psi_{us,x} \\
\psi_{ds,y} &= 2\psi - \psi_{us,y} \\
\psi_{ds,z} &= 2\psi - \psi_{us,z}
\f}

The solution procedure, per cell, is therefore to first grab \f$ \psi_{us} \f$ as
either the upstream cell's \f$ \psi_{ds} \f$ or from the upstream boundary
condition, then to solve for the cell \f$ \psi \f$, and then to compute
\f$ \psi_{ds} \f$.

The obvious difficulty now is to loop through the cells in such a way that the
upstream relationships, with respect to a given \f$ \boldsymbol{\Omega}_n \f$,
are maintained. This process is known as <b>sweeping</b>, and for orthogonal
grids it looks as shown in the figure below

\image html "CodingTutorials/Tut6_sweeping.png" width=700px

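The per-cell solve and downstream update described above can be sketched in 1D with a few lines of standalone C++ (one group, one direction; this is an illustrative sketch with our own function name, not the ChiTech API):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal 1D, one-group diamond-difference sweep for one direction mu > 0.
// Per cell:  psi = (q + 2|mu| psi_us / dx) / (sigma_t + 2|mu|/dx)
//            psi_ds = 2 psi - psi_us
std::vector<double> SweepDD1D(double mu, double sigma_t, double dx,
                              const std::vector<double>& q,
                              double psi_inflow)
{
  std::vector<double> psi(q.size(), 0.0);
  double psi_us = psi_inflow; // upstream flux from the boundary condition
  for (size_t i = 0; i < q.size(); ++i)
  {
    const double two_mu_dx = 2.0*std::fabs(mu)/dx;
    psi[i] = (q[i] + two_mu_dx*psi_us) / (sigma_t + two_mu_dx);
    psi_us = 2.0*psi[i] - psi_us; // downstream becomes next cell's upstream
  }
  return psi;
}
```

With a zero source and a unit inflow this produces the expected attenuating profile; with \f$ q = \sigma_t \psi_{in} \f$ the infinite-medium solution \f$ \psi = \psi_{in} \f$ is recovered exactly.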
\section CodeTut6Sec3 3 Getting the grid with additional properties
We get the grid in the same fashion as we did in all the previous tutorials,
however, this time we need to get some additional information since we will use it
during the sweep.

We define the following code once we have obtained a reference to the grid
\code
//============================================= Make Orthogonal mapping
const auto ijk_info = grid.GetIJKInfo();
const auto& ijk_mapping = grid.MakeIJKToGlobalIDMapping();
const auto cell_ortho_sizes = grid.MakeCellOrthoSizes();

const auto Nx = static_cast<int64_t>(ijk_info[0]);
const auto Ny = static_cast<int64_t>(ijk_info[1]);
const auto Nz = static_cast<int64_t>(ijk_info[2]);

const auto Dim1 = chi_mesh::DIMENSION_1;
const auto Dim2 = chi_mesh::DIMENSION_2;
const auto Dim3 = chi_mesh::DIMENSION_3;

int dimension = 0;
if (grid.Attributes() & Dim1) dimension = 1;
if (grid.Attributes() & Dim2) dimension = 2;
if (grid.Attributes() & Dim3) dimension = 3;
\endcode
The structure `ijk_info` holds the number of cells in x, y and z.
The structure `ijk_mapping` maps a cell's ijk index to its
linear global id.
The structure `cell_ortho_sizes` contains a `std::vector` of cell
\f$ \Delta x \f$, \f$ \Delta y \f$ and \f$ \Delta z \f$ values.
We finally also assign the dimensionality of the mesh to an integer `dimension`.

\section CodeTut6Sec4 4 Creating an angular quadrature
The base class for angular quadratures is `chi_math::AngularQuadrature`.
\code
//============================================= Make an angular quadrature
std::shared_ptr<chi_math::AngularQuadrature> quadrature;
if (dimension == 1)
  quadrature = std::make_shared<chi_math::AngularQuadratureProdGL>(8);
else if (dimension == 2)
{
  quadrature = std::make_shared<chi_math::AngularQuadratureProdGLC>(8,8);
  quadrature->OptimizeForPolarSymmetry(4.0*M_PI);
}
else if (dimension == 3)
  quadrature = std::make_shared<chi_math::AngularQuadratureProdGLC>(8,8);
else
  throw std::logic_error(fname + "Error with the dimensionality "
                                 "of the mesh.");
chi::log.Log() << "Quadrature created." << std::endl;
\endcode

Depending on the dimensionality of the grid we instantiate a different quadrature set.
For example, in one dimension we instantiate the Gauss-Legendre quadrature
encapsulated in a product quadrature, `chi_math::AngularQuadratureProdGL`.
\code
quadrature = std::make_shared<chi_math::AngularQuadratureProdGL>(8);
\endcode
where the parameter is the number of polar angles per hemisphere.

In two and three dimensions we instantiate the Gauss-Legendre-Chebyshev quadrature
encapsulated in a product quadrature, `chi_math::AngularQuadratureProdGLC`.
\code
quadrature = std::make_shared<chi_math::AngularQuadratureProdGLC>(8,8);
\endcode
A GLC quadrature is essentially an evenly spaced quadrature in the azimuthal space
and a regular Legendre spacing in the polar space. The parameters are the number
of azimuthal angles per octant and the number of polar angles per octant.

For the 2D case, which has polar symmetry, we can modify the quadrature to only
contain the directions in the upper hemisphere. We do this as
\code
quadrature = std::make_shared<chi_math::AngularQuadratureProdGLC>(8,8);
quadrature->OptimizeForPolarSymmetry(4.0*M_PI);
\endcode
The argument for `OptimizeForPolarSymmetry` is the normalization factor to use
when normalizing the quadrature.

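The evenly spaced azimuthal layout of a product quadrature can be sketched in a few lines. The following standalone C++ assumes the common midpoint placement (our own function name; the actual ChiTech angle placement may differ in detail):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Assumed layout sketch: Na azimuthal angles per octant means 4*Na angles
// over [0, 2*pi), evenly spaced, placed at half-interval offsets so that
// no direction lands exactly on a coordinate axis.
std::vector<double> GLCAzimuthalAngles(int Na_per_octant)
{
  const int total = 4*Na_per_octant;
  const double dphi = 2.0*M_PI/total;
  std::vector<double> phi(total);
  for (int k = 0; k < total; ++k)
    phi[k] = (k + 0.5)*dphi; // midpoint placement
  return phi;
}
```

For two azimuthal angles per octant this yields eight angles starting at \f$ \pi/8 \f$ with spacing \f$ \pi/4 \f$.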
Since we now have the angular quadrature we can compute the Moment-To-Discrete
and Discrete-To-Moment operators.
\code
//============================================= Set/Get params
const size_t scat_order = 1;
const size_t num_groups = 20;

quadrature->BuildMomentToDiscreteOperator(scat_order,dimension);
quadrature->BuildDiscreteToMomentOperator(scat_order,dimension);

const auto& m2d = quadrature->GetMomentToDiscreteOperator();
const auto& d2m = quadrature->GetDiscreteToMomentOperator();
const auto& m_ell_em_map = quadrature->GetMomentToHarmonicsIndexMap();

const size_t num_moments = m_ell_em_map.size();
const size_t num_dirs = quadrature->omegas.size();

chi::log.Log() << "End Set/Get params." << std::endl;
chi::log.Log() << "Num Moments: " << num_moments << std::endl;
\endcode
Notice we also obtained the mapping from \f$ m \f$ to \f$ (\ell,m^*) \f$ in the
structure `m_ell_em_map`, which contains a list of
`chi_math::AngularQuadrature::HarmonicIndices`.

\section CodeTut6Sec5 5 Auxiliary items
We define the auxiliary items as follows
\code
//============================================= Make Unknown Managers
const auto VecN = chi_math::UnknownType::VECTOR_N;
using Unknown = chi_math::Unknown;

std::vector<Unknown> phi_uks(num_moments, Unknown(VecN, num_groups));
std::vector<Unknown> psi_uks(num_dirs, Unknown(VecN, num_groups));

const chi_math::UnknownManager phi_uk_man(phi_uks);
const chi_math::UnknownManager psi_uk_man(psi_uks);

const size_t num_local_phi_dofs = sdm.GetNumLocalDOFs(phi_uk_man);
const size_t num_local_psi_dofs = sdm.GetNumLocalDOFs(psi_uk_man);

chi::log.Log() << "End ukmanagers." << std::endl;

//============================================= Make XSs
chi_physics::TransportCrossSections xs;
xs.MakeFromCHIxsFile("tests/xs_graphite_pure.cxs");

//============================================= Initializes vectors
std::vector<double> phi_old(num_local_phi_dofs,0.0);
std::vector<double> psi(num_local_psi_dofs, 0.0);
auto source_moments = phi_old;
auto phi_new = phi_old;
auto q_source = phi_old;

chi::log.Log() << "End vectors." << std::endl;

//============================================= Make material source term
for (const auto& cell : grid.local_cells)
{
  const auto& cc = cell.centroid;
  const auto& cell_mapping = sdm.GetCellMapping(cell);
  const size_t num_nodes = cell_mapping.NumNodes();

  if (cc.x < 0.5 and cc.y < 0.5 and cc.z < 0.5 and
      cc.x >-0.5 and cc.y >-0.5 and cc.z >-0.5)
  {
    for (size_t i=0; i<num_nodes; ++i)
    {
      const int64_t dof_map = sdm.MapDOFLocal(cell,i,phi_uk_man,0,0);

      q_source[dof_map] = 1.0;
    }//for node i
  }//if inside box
}//for cell
\endcode

First are the unknown managers for \f$ \phi \f$ and \f$ \psi \f$, then the transport
cross sections via the class `chi_physics::TransportCrossSections`.

We then define the unknown vectors.
\code
//============================================= Initializes vectors
std::vector<double> phi_old(num_local_phi_dofs,0.0);
std::vector<double> psi(num_local_psi_dofs, 0.0);
auto source_moments = phi_old;
auto phi_new = phi_old;
auto q_source = phi_old;

chi::log.Log() << "End vectors." << std::endl;
\endcode

Finally, we hardcode the source distribution
\code
//============================================= Make material source term
for (const auto& cell : grid.local_cells)
{
  const auto& cc = cell.centroid;
  const auto& cell_mapping = sdm.GetCellMapping(cell);
  const size_t num_nodes = cell_mapping.NumNodes();

  if (cc.x < 0.5 and cc.y < 0.5 and cc.z < 0.5 and
      cc.x >-0.5 and cc.y >-0.5 and cc.z >-0.5)
  {
    for (size_t i=0; i<num_nodes; ++i)
    {
      const int64_t dof_map = sdm.MapDOFLocal(cell,i,phi_uk_man,0,0);

      q_source[dof_map] = 1.0;
    }//for node i
  }//if inside box
}//for cell
\endcode

\section CodeTut6Sec6 6 Defining a cell-by-cell sweep chunk
\code
//============================================= Define sweep chunk
typedef chi_data_types::NDArray<double> IJKArrayDbl;
IJKArrayDbl psi_ds_x(std::array<int64_t,4>{Nx,Ny,Nz,num_groups});
IJKArrayDbl psi_ds_y(std::array<int64_t,4>{Nx,Ny,Nz,num_groups});
IJKArrayDbl psi_ds_z(std::array<int64_t,4>{Nx,Ny,Nz,num_groups});

typedef chi_mesh::Vector3 Vec3;
auto SweepChunk = [&ijk_info, &ijk_mapping, &cell_ortho_sizes, //ortho-quantities
                   &grid, &sdm,
                   &num_moments,
                   &phi_uk_man, &psi_uk_man,
                   &m2d,&d2m,
                   &phi_new, &source_moments, &psi,
                   &psi_ds_x, &psi_ds_y, &psi_ds_z]
  (const std::array<int64_t,3>& ijk,
   const Vec3& omega,
   const size_t d,
   const chi_physics::TransportCrossSections& cell_xs)
{
  const auto cell_global_id = ijk_mapping.MapNDtoLin(ijk[1],ijk[0],ijk[2]);
  const auto& cell = grid.cells[cell_global_id];
  const auto cell_local_id = cell.local_id;
  const auto& cell_mapping = sdm.GetCellMapping(cell);

  const auto& cell_ortho_size = cell_ortho_sizes[cell_local_id];
  const double dx = cell_ortho_size.x;
  const double dy = cell_ortho_size.y;
  const double dz = cell_ortho_size.z;

  const std::vector<double> zero_vector(num_groups,0.0);

  const double* psi_us_x = zero_vector.data();
  const double* psi_us_y = zero_vector.data();
  const double* psi_us_z = zero_vector.data();

  const auto i = ijk[0]; const auto Nx = ijk_info[0];
  const auto j = ijk[1]; const auto Ny = ijk_info[1];
  const auto k = ijk[2]; const auto Nz = ijk_info[2];

  if (omega.x > 0.0 and i > 0     ) psi_us_x = &psi_ds_x(i-1,j  ,k  ,0);
  if (omega.x < 0.0 and i < (Nx-1)) psi_us_x = &psi_ds_x(i+1,j  ,k  ,0);
  if (omega.y > 0.0 and j > 0     ) psi_us_y = &psi_ds_y(i  ,j-1,k  ,0);
  if (omega.y < 0.0 and j < (Ny-1)) psi_us_y = &psi_ds_y(i  ,j+1,k  ,0);
  if (omega.z > 0.0 and k > 0     ) psi_us_z = &psi_ds_z(i  ,j  ,k-1,0);
  if (omega.z < 0.0 and k < (Nz-1)) psi_us_z = &psi_ds_z(i  ,j  ,k+1,0);

  for (size_t g=0; g<num_groups; ++g)
  {
    double rhs = 0.0;
    //Source moments
    for (size_t m=0; m<num_moments; ++m)
    {
      const int64_t dof_map = sdm.MapDOFLocal(cell,0,phi_uk_man,m,g);
      rhs += source_moments[dof_map]*m2d[m][d];
    }

    if (Nx > 1) rhs += 2.0*std::fabs(omega.x)*psi_us_x[g]/dx;
    if (Ny > 1) rhs += 2.0*std::fabs(omega.y)*psi_us_y[g]/dy;
    if (Nz > 1) rhs += 2.0*std::fabs(omega.z)*psi_us_z[g]/dz;

    double lhs = cell_xs.sigma_t[g];
    if (Nx > 1) lhs += 2.0*std::fabs(omega.x)/dx;
    if (Ny > 1) lhs += 2.0*std::fabs(omega.y)/dy;
    if (Nz > 1) lhs += 2.0*std::fabs(omega.z)/dz;

    double psi_ijk = rhs/lhs;

    //Accumulate flux-moments
    for (size_t m=0; m<num_moments; ++m)
    {
      const int64_t dof_map = sdm.MapDOFLocal(cell,0,phi_uk_man,m,g);
      phi_new[dof_map] += d2m[m][d]*psi_ijk;
    }

    //Save angular fluxes
    const int64_t psi_map = sdm.MapDOFLocal(cell,0,psi_uk_man,d,g);
    psi[psi_map] = psi_ijk;

    psi_ds_x(i,j,k,g) = 2.0*psi_ijk - psi_us_x[g];
    psi_ds_y(i,j,k,g) = 2.0*psi_ijk - psi_us_y[g];
    psi_ds_z(i,j,k,g) = 2.0*psi_ijk - psi_us_z[g];
  }//for g
};
\endcode

The first portion of the chunk obtains all the relevant cell information
\code
const auto cell_global_id = ijk_mapping.MapNDtoLin(ijk[1],ijk[0],ijk[2]);
const auto& cell = grid.cells[cell_global_id];
const auto cell_local_id = cell.local_id;
const auto& cell_mapping = sdm.GetCellMapping(cell);

const auto& cell_ortho_size = cell_ortho_sizes[cell_local_id];
const double dx = cell_ortho_size.x;
const double dy = cell_ortho_size.y;
const double dz = cell_ortho_size.z;

const auto i = ijk[0]; const auto Nx = ijk_info[0];
const auto j = ijk[1]; const auto Ny = ijk_info[1];
const auto k = ijk[2]; const auto Nz = ijk_info[2];
\endcode

Thereafter we determine the upstream fluxes from the general downstream
\f$ \psi \f$ structures. These data structures are multidimensional arrays
(`chi_data_types::NDArray`) used to store each cell's downstream
components. They are indexed with ijk indices, making them easy to use in
our orthogonal mesh setting.
\code
const std::vector<double> zero_vector(num_groups,0.0);

const double* psi_us_x = zero_vector.data();
const double* psi_us_y = zero_vector.data();
const double* psi_us_z = zero_vector.data();

if (omega.x > 0.0 and i > 0     ) psi_us_x = &psi_ds_x(i-1,j  ,k  ,0);
if (omega.x < 0.0 and i < (Nx-1)) psi_us_x = &psi_ds_x(i+1,j  ,k  ,0);
if (omega.y > 0.0 and j > 0     ) psi_us_y = &psi_ds_y(i  ,j-1,k  ,0);
if (omega.y < 0.0 and j < (Ny-1)) psi_us_y = &psi_ds_y(i  ,j+1,k  ,0);
if (omega.z > 0.0 and k > 0     ) psi_us_z = &psi_ds_z(i  ,j  ,k-1,0);
if (omega.z < 0.0 and k < (Nz-1)) psi_us_z = &psi_ds_z(i  ,j  ,k+1,0);
\endcode

Finally, we loop over each group then construct and solve the relevant equations.

First we develop the angular source from the source moments
\f[
q_{c,g}^{rhs} = \sum_{m=0}^{N_m-1} M_{mn} q_{c,g}^{moms,m}
\f]
\code
double rhs = 0.0;
//Source moments
for (size_t m=0; m<num_moments; ++m)
{
  const int64_t dof_map = sdm.MapDOFLocal(cell,0,phi_uk_man,m,g);
  rhs += source_moments[dof_map]*m2d[m][d];
}
\endcode

Thereafter we construct and solve the equation for \f$ \psi \f$,
\image html "CodingTutorials/Tut6_123D.png" width=500px
for which we have the code
\code
if (Nx > 1) rhs += 2.0*std::fabs(omega.x)*psi_us_x[g]/dx;
if (Ny > 1) rhs += 2.0*std::fabs(omega.y)*psi_us_y[g]/dy;
if (Nz > 1) rhs += 2.0*std::fabs(omega.z)*psi_us_z[g]/dz;

double lhs = cell_xs.sigma_t[g];
if (Nx > 1) lhs += 2.0*std::fabs(omega.x)/dx;
if (Ny > 1) lhs += 2.0*std::fabs(omega.y)/dy;
if (Nz > 1) lhs += 2.0*std::fabs(omega.z)/dz;

double psi_ijk = rhs/lhs;
\endcode

Once we have \f$ \psi \f$ we can contribute to the flux moments
\f[
\phi_{\ell m^*} =\sum_n w_n Y_{\ell m^*}(\boldsymbol{\Omega}_n) \psi_n
\f]
which maps to
\f{align*}{
\phi_m &= \sum_n w_n Y_m (\boldsymbol{\Omega}_n) \psi_n \\
&= \sum_n D_{mn} \psi_n
\f}
where \f$ D_{mn} \f$ is the Discrete-To-Moment operator. We do the above with
the following code
\code
//Accumulate flux-moments
for (size_t m=0; m<num_moments; ++m)
{
  const int64_t dof_map = sdm.MapDOFLocal(cell,0,phi_uk_man,m,g);
  phi_new[dof_map] += d2m[m][d]*psi_ijk;
}
\endcode

Next we store all the angular fluxes
\code
//Save angular fluxes
const int64_t psi_map = sdm.MapDOFLocal(cell,0,psi_uk_man,d,g);
psi[psi_map] = psi_ijk;

psi_ds_x(i,j,k,g) = 2.0*psi_ijk - psi_us_x[g];
psi_ds_y(i,j,k,g) = 2.0*psi_ijk - psi_us_y[g];
psi_ds_z(i,j,k,g) = 2.0*psi_ijk - psi_us_z[g];
\endcode
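The Discrete-To-Moment step above reduces, for the zeroth moment, to a weighted sum over directions. A minimal standalone sketch (our own function name, not the ChiTech API) makes the mechanism explicit:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Accumulating the zeroth flux moment phi_0 = sum_n w_n psi_n.
// With the harmonics normalized so that Y_00 = 1, D_{0n} = w_n.
double AccumulatePhi0(const std::vector<double>& weights,
                      const std::vector<double>& psi)
{
  double phi0 = 0.0;
  for (size_t n = 0; n < weights.size(); ++n)
    phi0 += weights[n]*psi[n]; // D2M applied to the angular flux
  return phi0;
}
```

For an isotropic angular flux of 1 and weights summing to \f$ 4\pi \f$ this returns \f$ 4\pi \f$, the expected scalar flux.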

\section CodeTut6Sec7 7 Defining a sweep over all directions
Next we define a routine that will sweep through the cells, with the correct upwinding
structure, for all the directions in the quadrature.
\code
auto Sweep = [&num_dirs,&quadrature,Nx,Ny,Nz,&SweepChunk,&xs]()
{
  for (size_t d=0; d<num_dirs; ++d)
  {
    const auto &omega = quadrature->omegas[d];
    const auto &weight = quadrature->weights[d];

    std::vector<int64_t> iorder, jorder, korder;
    if (omega.x > 0.0) iorder = chi_math::Range<int64_t>(0, Nx);
    else               iorder = chi_math::Range<int64_t>(Nx - 1, -1, -1);
    if (omega.y > 0.0) jorder = chi_math::Range<int64_t>(0, Ny);
    else               jorder = chi_math::Range<int64_t>(Ny - 1, -1, -1);
    if (omega.z > 0.0) korder = chi_math::Range<int64_t>(0, Nz);
    else               korder = chi_math::Range<int64_t>(Nz - 1, -1, -1);

    for (auto i: iorder)
      for (auto j: jorder)
        for (auto k: korder)
          SweepChunk({i,j,k}, omega, d, xs);
  }//for d
};
\endcode
This kind of code for the sweep ordering will only work for orthogonal meshes,
which is why this tutorial is based on orthogonal meshes.

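The ordering logic can be isolated into a few standalone lines (here we build the ranges by hand instead of using `chi_math::Range`; the function name is ours): for a positive direction component we march 0 to N-1, for a negative one N-1 down to 0, so every cell is visited after its upstream neighbour.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Cell-index visiting order along one axis for a given direction component.
std::vector<int64_t> SweepOrder(double omega_component, int64_t N)
{
  std::vector<int64_t> order;
  if (omega_component > 0.0)
    for (int64_t i = 0; i < N; ++i) order.push_back(i);      // march forward
  else
    for (int64_t i = N - 1; i >= 0; --i) order.push_back(i); // march backward
  return order;
}
```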
\section CodeTut6Sec8 8 The Classic Richardson iterative scheme
Yup, as easy as this:
\code
//============================================= Classic Richardson iteration
chi::log.Log() << "Starting iterations" << std::endl;
for (size_t iter=0; iter<200; ++iter)
{
  phi_new.assign(phi_new.size(), 0.0);
  //Build rhs
  source_moments = SetSource(grid,sdm,phi_uk_man,
                             q_source,phi_old,xs,m_ell_em_map);
  Sweep();

  const double rel_change = ComputeRelativePWChange(grid,sdm,phi_uk_man,
                                                    phi_new, phi_old);

  std::stringstream outstr;
  outstr << "Iteration " << std::setw(5) << iter << " ";
  {
    char buffer[100];
    snprintf(buffer, sizeof(buffer), "%11.3e\n", rel_change);
    outstr << buffer;
  }

  chi::log.Log() << outstr.str();

  phi_old = phi_new;

  if (rel_change < 1.0e-6 and iter > 0)
    break;
}//for iteration
\endcode

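The behaviour of this scheme is easiest to see in a scalar caricature (standalone C++, our own function name): in an infinite medium the one-group balance is \f$ \sigma_t \phi = \sigma_s \phi + q \f$, and iterating \f$ \phi \leftarrow (q + \sigma_s \phi)/\sigma_t \f$ converges to \f$ q/(\sigma_t - \sigma_s) \f$ at a rate given by the scattering ratio \f$ c = \sigma_s/\sigma_t \f$, mirroring the loop above.

```cpp
#include <cassert>
#include <cmath>
#include <algorithm>

// Scalar Richardson (source) iteration for an infinite one-group medium.
double RichardsonInfMedium(double sigma_t, double sigma_s, double q,
                           double tol = 1.0e-10)
{
  double phi_old = 0.0;
  for (int iter = 0; iter < 1000; ++iter)
  {
    const double phi_new = (q + sigma_s*phi_old)/sigma_t; // sweep analogue
    const double rel_change = std::fabs(phi_new - phi_old)/
                              std::max(std::fabs(phi_new), 1.0e-300);
    phi_old = phi_new;
    if (rel_change < tol) break;                          // convergence check
  }
  return phi_old;
}
```

Note how a scattering ratio close to 1 slows convergence, which is exactly why acceleration schemes exist for highly scattering problems.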
Notice here we have defined two routines:
\code
std::vector<double> SetSource(
  const chi_mesh::MeshContinuum& grid,
  const chi_math::SpatialDiscretization& sdm,
  const chi_math::UnknownManager& phi_uk_man,
  const std::vector<double>& q_source,
  const std::vector<double>& phi_old,
  const chi_physics::TransportCrossSections& xs,
  const std::vector<YlmIndices>& m_ell_em_map)
{
  const size_t num_local_phi_dofs = sdm.GetNumLocalDOFs(phi_uk_man);
  std::vector<double> source_moments(num_local_phi_dofs, 0.0);

  const size_t num_moments = phi_uk_man.unknowns.size();
  const size_t num_groups = phi_uk_man.unknowns.front().num_components;

  for (const auto& cell : grid.local_cells)
  {
    const auto& cell_mapping = sdm.GetCellMapping(cell);
    const size_t num_nodes = cell_mapping.NumNodes();
    const auto& S = xs.transfer_matrices;

    for (size_t i=0; i<num_nodes; ++i)
    {
      for (size_t m=0; m<num_moments; ++m)
      {
        const int64_t dof_map = sdm.MapDOFLocal(cell,i,phi_uk_man,m,0);
        const auto ell = m_ell_em_map[m].ell;

        for (size_t g=0; g<num_groups; ++g)
        {
          //Fixed source
          source_moments[dof_map + g] = q_source[dof_map + g];

          //Inscattering
          if (ell < S.size())
          {
            double inscat_g = 0.0;
            for (const auto& [row_g, gprime, sigma_sm] : S[ell].Row(g))
              inscat_g += sigma_sm * phi_old[dof_map + gprime];

            source_moments[dof_map + g] += inscat_g;
          }
        }//for g
      }//for m
    }//for node i
  }//for cell

  return source_moments;
}
\endcode
and
\code
double ComputeRelativePWChange(
  const chi_mesh::MeshContinuum& grid,
  const chi_math::SpatialDiscretization& sdm,
  const chi_math::UnknownManager& phi_uk_man,
  const std::vector<double>& in_phi_new,
  const std::vector<double>& in_phi_old
  )
{
  double pw_change = 0.0;
  const size_t num_moments = phi_uk_man.unknowns.size();
  const size_t num_groups = phi_uk_man.unknowns.front().num_components;

  for (const auto& cell : grid.local_cells)
  {
    const auto& cell_mapping = sdm.GetCellMapping(cell);
    const size_t num_nodes = cell_mapping.NumNodes();

    for (size_t i=0; i<num_nodes; ++i)
    {
      //Get scalar moments
      const int64_t m0_map = sdm.MapDOFLocal(cell,i,phi_uk_man,0,0);

      const double* phi_new_m0 = &in_phi_new[m0_map];
      const double* phi_old_m0 = &in_phi_old[m0_map];
      for (size_t m=0; m<num_moments; ++m)
      {
        const int64_t m_map = sdm.MapDOFLocal(cell,i,phi_uk_man,m,0);

        const double* phi_new_m = &in_phi_new[m_map];
        const double* phi_old_m = &in_phi_old[m_map];

        for (size_t g=0; g<num_groups; ++g)
        {
          const double abs_phi_new_g_m0 = std::fabs(phi_new_m0[g]);
          const double abs_phi_old_g_m0 = std::fabs(phi_old_m0[g]);

          const double max_denominator = std::max(abs_phi_new_g_m0,
                                                  abs_phi_old_g_m0);

          const double delta_phi = std::fabs(phi_new_m[g] - phi_old_m[g]);

          if (max_denominator >= std::numeric_limits<double>::min())
            pw_change = std::max(delta_phi/max_denominator,pw_change);
          else
            pw_change = std::max(delta_phi,pw_change);
        }//for g
      }//for m
    }//for i
  }//for cell

  return pw_change;
}
\endcode

The `SetSource` routine populates the `source_moments` vector, while the
`ComputeRelativePWChange` routine computes a modified version of the
\f$ L_\infty \f$ norm by computing the maximum change relative to the scalar
flux moment on each node-group pair.

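The essence of that convergence metric is a scaled maximum-change reduction. A simplified standalone sketch (our own function name; unlike the tutorial routine, which scales by the scalar-moment entry, this version scales each entry by its own magnitude):

```cpp
#include <cassert>
#include <cmath>
#include <algorithm>
#include <vector>

// Simplified pointwise-change metric: for each entry the change |new - old|
// is divided by max(|new|, |old|) when nonzero, and the maximum over all
// entries is returned.
double RelativePWChange(const std::vector<double>& v_new,
                        const std::vector<double>& v_old)
{
  double pw_change = 0.0;
  for (size_t i = 0; i < v_new.size(); ++i)
  {
    const double denom = std::max(std::fabs(v_new[i]), std::fabs(v_old[i]));
    const double delta = std::fabs(v_new[i] - v_old[i]);
    pw_change = std::max(denom > 0.0 ? delta/denom : delta, pw_change);
  }
  return pw_change;
}
```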
\section CodeTut6Sec9 9 Exporting only the scalar flux
We finally want to export the scalar flux to VTK. We have a problem though: the
scalar flux is mixed in with the other flux moments. Therefore we need to
copy the scalar flux from the `phi_old` vector, which has the unknown structure
defined by `phi_uk_man`, to another vector `m0_phi` with a different unknown
structure. We do this with the following code
\code
//============================================= Localize zeroth moment
//This routine extracts a single moment vector
//from the vector that contains multiple moments
const chi_math::UnknownManager m0_uk_man(
  {chi_math::Unknown(chi_math::UnknownType::VECTOR_N,num_groups)});
const size_t num_m0_dofs = sdm.GetNumLocalDOFs(m0_uk_man);

std::vector<double> m0_phi(num_m0_dofs, 0.0);

sdm.CopyVectorWithUnknownScope(phi_old,    //from vector
                               m0_phi,     //to vector
                               phi_uk_man, //from dof-structure
                               0,          //from unknown-id
                               m0_uk_man,  //to dof-structure
                               0);         //to unknown-id
\endcode
This code should be self-explanatory.

Finally we create, update and export the field function as we did in the
other tutorials.
\code
//============================================= Create Field Function
auto phi_ff = std::make_shared<chi_physics::FieldFunction>(
  "Phi",                                                        //Text name
  sdm_ptr,                                                      //Spatial Discr.
  chi_math::Unknown(chi_math::UnknownType::VECTOR_N,num_groups) //Unknown
);

phi_ff->UpdateFieldVector(m0_phi);
phi_ff->ExportToVTK("SimTest_06_WDD");
\endcode

The solution is shown below:
\image html "CodingTutorials/Tut6_solutiong0.png" width=700px
Notice the blocky appearance, a consequence of the finite volume discretization.

\section CodeTut6SecX The complete program
\code
#include "chi_runtime.h"
#include "chi_log.h"

#include "mesh/MeshHandler/chi_meshhandler.h"
#include "mesh/MeshContinuum/chi_meshcontinuum.h"

#include "math/SpatialDiscretization/FiniteVolume/fv.h"
#include "math/Quadratures/angular_quadrature_base.h"
#include "math/Quadratures/angular_product_quadrature.h"
#include "math/chi_math_range.h"

#include "physics/FieldFunction/fieldfunction2.h"
#include "physics/PhysicsMaterial/transportxsections/material_property_transportxsections.h"

#include "data_types/ndarray.h"

int main(int argc, char* argv[])
{
  chi::Initialize(argc,argv);
  chi::RunBatch(argc, argv);

  const std::string fname = "Tutorial_06";

  if (chi::mpi.process_count != 1)
    throw std::logic_error(fname + ": Is serial only.");

  //============================================= Get grid
  auto grid_ptr = chi_mesh::GetCurrentHandler().GetGrid();
  const auto& grid = *grid_ptr;

  chi::log.Log() << "Global num cells: " << grid.GetGlobalNumberOfCells();

  //============================================= Make Orthogonal mapping
  const auto  ijk_info         = grid.GetIJKInfo();
  const auto& ijk_mapping      = grid.MakeIJKToGlobalIDMapping();
  const auto  cell_ortho_sizes = grid.MakeCellOrthoSizes();

  const auto Nx = static_cast<int64_t>(ijk_info[0]);
  const auto Ny = static_cast<int64_t>(ijk_info[1]);
  const auto Nz = static_cast<int64_t>(ijk_info[2]);

  const auto Dim1 = chi_mesh::DIMENSION_1;
  const auto Dim2 = chi_mesh::DIMENSION_2;
  const auto Dim3 = chi_mesh::DIMENSION_3;

  int dimension = 0;
  if (grid.Attributes() & Dim1) dimension = 1;
  if (grid.Attributes() & Dim2) dimension = 2;
  if (grid.Attributes() & Dim3) dimension = 3;

  //============================================= Make SDM
  typedef std::shared_ptr<chi_math::SpatialDiscretization> SDMPtr;
  SDMPtr sdm_ptr = chi_math::SpatialDiscretization_FV::New(grid_ptr);
  const auto& sdm = *sdm_ptr;

  const auto& OneDofPerNode = sdm.UNITARY_UNKNOWN_MANAGER;

  const size_t num_local_nodes = sdm.GetNumLocalDOFs(OneDofPerNode);
  const size_t num_globl_nodes = sdm.GetNumGlobalDOFs(OneDofPerNode);

  chi::log.Log() << "Num local nodes: " << num_local_nodes;
  chi::log.Log() << "Num globl nodes: " << num_globl_nodes;

  //============================================= Make an angular quadrature
  std::shared_ptr<chi_math::AngularQuadrature> quadrature;
  if (dimension == 1)
    quadrature = std::make_shared<chi_math::AngularQuadratureProdGL>(8);
  else if (dimension == 2)
  {
    quadrature = std::make_shared<chi_math::AngularQuadratureProdGLC>(8,8);
    quadrature->OptimizeForPolarSymmetry(4.0*M_PI);
  }
  else if (dimension == 3)
    quadrature = std::make_shared<chi_math::AngularQuadratureProdGLC>(8,8);
  else
    throw std::logic_error(fname + ": Error with the dimensionality "
                                   "of the mesh.");
  chi::log.Log() << "Quadrature created." << std::endl;

  //============================================= Set/Get params
  const size_t scat_order = 1;
  const size_t num_groups = 20;

  quadrature->BuildMomentToDiscreteOperator(scat_order,dimension);
  quadrature->BuildDiscreteToMomentOperator(scat_order,dimension);

  const auto& m2d = quadrature->GetMomentToDiscreteOperator();
  const auto& d2m = quadrature->GetDiscreteToMomentOperator();
  const auto& m_ell_em_map = quadrature->GetMomentToHarmonicsIndexMap();

  const size_t num_moments = m_ell_em_map.size();
  const size_t num_dirs = quadrature->omegas.size();

  chi::log.Log() << "End Set/Get params." << std::endl;
  chi::log.Log() << "Num Moments: " << num_moments << std::endl;

  //============================================= Make Unknown Managers
  const auto VecN = chi_math::UnknownType::VECTOR_N;
  using Unknown = chi_math::Unknown;

  std::vector<Unknown> phi_uks(num_moments, Unknown(VecN, num_groups));
  std::vector<Unknown> psi_uks(num_dirs, Unknown(VecN, num_groups));

  const chi_math::UnknownManager phi_uk_man(phi_uks);
  const chi_math::UnknownManager psi_uk_man(psi_uks);

  const size_t num_local_phi_dofs = sdm.GetNumLocalDOFs(phi_uk_man);
  const size_t num_local_psi_dofs = sdm.GetNumLocalDOFs(psi_uk_man);

  chi::log.Log() << "End ukmanagers." << std::endl;

  //============================================= Make XSs
  chi_physics::TransportCrossSections xs;
  xs.MakeFromCHIxsFile("tests/xs_graphite_pure.cxs");

  //============================================= Initialize vectors
  std::vector<double> phi_old(num_local_phi_dofs,0.0);
  std::vector<double> psi(num_local_psi_dofs, 0.0);
  auto source_moments = phi_old;
  auto phi_new        = phi_old;
  auto q_source       = phi_old;

  chi::log.Log() << "End vectors." << std::endl;

  //============================================= Make material source term
  for (const auto& cell : grid.local_cells)
  {
    const auto& cc = cell.centroid;
    const auto& cell_mapping = sdm.GetCellMapping(cell);
    const size_t num_nodes = cell_mapping.NumNodes();

    if (cc.x < 0.5 and cc.y < 0.5 and cc.z < 0.5 and
        cc.x >-0.5 and cc.y >-0.5 and cc.z >-0.5)
    {
      for (size_t i=0; i<num_nodes; ++i)
      {
        const int64_t dof_map = sdm.MapDOFLocal(cell,i,phi_uk_man,0,0);

        q_source[dof_map] = 1.0;
      }//for node i
    }//if inside box
  }//for cell

  //============================================= Define sweep chunk
  typedef chi_data_types::NDArray<double> IJKArrayDbl;
  IJKArrayDbl psi_ds_x(std::array<int64_t,4>{Nx,Ny,Nz,num_groups});
  IJKArrayDbl psi_ds_y(std::array<int64_t,4>{Nx,Ny,Nz,num_groups});
  IJKArrayDbl psi_ds_z(std::array<int64_t,4>{Nx,Ny,Nz,num_groups});

  typedef chi_mesh::Vector3 Vec3;
  auto SweepChunk = [&ijk_info, &ijk_mapping, &cell_ortho_sizes, //ortho-quantities
                     &grid, &sdm,
                     &num_moments,
                     &phi_uk_man, &psi_uk_man,
                     &m2d,&d2m,
                     &phi_new, &source_moments, &psi,
                     &psi_ds_x, &psi_ds_y, &psi_ds_z]
    (const std::array<int64_t,3>& ijk,
     const Vec3& omega,
     const size_t d,
     const chi_physics::TransportCrossSections& cell_xs)
  {
    const auto cell_global_id = ijk_mapping.MapNDtoLin(ijk[1],ijk[0],ijk[2]);
    const auto& cell = grid.cells[cell_global_id];
    const auto cell_local_id = cell.local_id;
    const auto& cell_mapping = sdm.GetCellMapping(cell);

    const auto& cell_ortho_size = cell_ortho_sizes[cell_local_id];
    const double dx = cell_ortho_size.x;
    const double dy = cell_ortho_size.y;
    const double dz = cell_ortho_size.z;

    const auto i = ijk[0]; const auto Nx = ijk_info[0];
    const auto j = ijk[1]; const auto Ny = ijk_info[1];
    const auto k = ijk[2]; const auto Nz = ijk_info[2];

    const std::vector<double> zero_vector(num_groups,0.0);

    const double* psi_us_x = zero_vector.data();
    const double* psi_us_y = zero_vector.data();
    const double* psi_us_z = zero_vector.data();

    if (omega.x > 0.0 and i > 0     ) psi_us_x = &psi_ds_x(i-1,j  ,k  ,0);
    if (omega.x < 0.0 and i < (Nx-1)) psi_us_x = &psi_ds_x(i+1,j  ,k  ,0);
    if (omega.y > 0.0 and j > 0     ) psi_us_y = &psi_ds_y(i  ,j-1,k  ,0);
    if (omega.y < 0.0 and j < (Ny-1)) psi_us_y = &psi_ds_y(i  ,j+1,k  ,0);
    if (omega.z > 0.0 and k > 0     ) psi_us_z = &psi_ds_z(i  ,j  ,k-1,0);
    if (omega.z < 0.0 and k < (Nz-1)) psi_us_z = &psi_ds_z(i  ,j  ,k+1,0);

    for (size_t g=0; g<num_groups; ++g)
    {
      double rhs = 0.0;
      //Source moments
      for (size_t m=0; m<num_moments; ++m)
      {
        const int64_t dof_map = sdm.MapDOFLocal(cell,0,phi_uk_man,m,g);
        rhs += source_moments[dof_map]*m2d[m][d];
      }

      if (Nx > 1) rhs += 2.0*std::fabs(omega.x)*psi_us_x[g]/dx;
      if (Ny > 1) rhs += 2.0*std::fabs(omega.y)*psi_us_y[g]/dy;
      if (Nz > 1) rhs += 2.0*std::fabs(omega.z)*psi_us_z[g]/dz;

      double lhs = cell_xs.sigma_t[g];
      if (Nx > 1) lhs += 2.0*std::fabs(omega.x)/dx;
      if (Ny > 1) lhs += 2.0*std::fabs(omega.y)/dy;
      if (Nz > 1) lhs += 2.0*std::fabs(omega.z)/dz;

      double psi_ijk = rhs/lhs;

      //Accumulate flux-moments
      for (size_t m=0; m<num_moments; ++m)
      {
        const int64_t dof_map = sdm.MapDOFLocal(cell,0,phi_uk_man,m,g);
        phi_new[dof_map] += d2m[m][d]*psi_ijk;
      }

      //Save angular fluxes
      const int64_t psi_map = sdm.MapDOFLocal(cell,0,psi_uk_man,d,g);
      psi[psi_map] = psi_ijk;

      psi_ds_x(i,j,k,g) = 2.0*psi_ijk - psi_us_x[g];
      psi_ds_y(i,j,k,g) = 2.0*psi_ijk - psi_us_y[g];
      psi_ds_z(i,j,k,g) = 2.0*psi_ijk - psi_us_z[g];
    }//for g
  };

  //============================================= Define sweep for all dirs
  auto Sweep = [&num_dirs,&quadrature,Nx,Ny,Nz,&SweepChunk,&xs]()
  {
    for (size_t d=0; d<num_dirs; ++d)
    {
      const auto& omega  = quadrature->omegas[d];
      const auto& weight = quadrature->weights[d];

      std::vector<int64_t> iorder, jorder, korder;
      if (omega.x > 0.0) iorder = chi_math::Range<int64_t>(0, Nx);
      else               iorder = chi_math::Range<int64_t>(Nx - 1, -1, -1);
      if (omega.y > 0.0) jorder = chi_math::Range<int64_t>(0, Ny);
      else               jorder = chi_math::Range<int64_t>(Ny - 1, -1, -1);
      if (omega.z > 0.0) korder = chi_math::Range<int64_t>(0, Nz);
      else               korder = chi_math::Range<int64_t>(Nz - 1, -1, -1);

      for (auto i : iorder)
        for (auto j : jorder)
          for (auto k : korder)
            SweepChunk({i,j,k}, omega, d, xs);
    }//for d
  };

  //============================================= Classic Richardson iteration
  chi::log.Log() << "Starting iterations" << std::endl;
  for (size_t iter=0; iter<200; ++iter)
  {
    phi_new.assign(phi_new.size(), 0.0);
    //Build rhs
    source_moments = SetSource(grid, sdm, phi_uk_man,
                               q_source, phi_old, xs, m_ell_em_map);
    Sweep();

    const double rel_change = ComputeRelativePWChange(grid, sdm, phi_uk_man,
                                                      phi_new, phi_old);

    std::stringstream outstr;
    outstr << "Iteration " << std::setw(5) << iter << " ";
    {
      char buffer[100];
      snprintf(buffer, sizeof(buffer), "%11.3e\n", rel_change);
      outstr << buffer;
    }

    chi::log.Log() << outstr.str();

    phi_old = phi_new;

    if (rel_change < 1.0e-6 and iter > 0)
      break;
  }//for iteration

  //============================================= Localize zeroth moment
  //This routine extracts a single moment vector
  //from the vector that contains multiple moments
  const chi_math::UnknownManager m0_uk_man(
    {chi_math::Unknown(chi_math::UnknownType::VECTOR_N,num_groups)});
  const size_t num_m0_dofs = sdm.GetNumLocalDOFs(m0_uk_man);

  std::vector<double> m0_phi(num_m0_dofs, 0.0);

  sdm.CopyVectorWithUnknownScope(phi_old,    //from vector
                                 m0_phi,     //to vector
                                 phi_uk_man, //from dof-structure
                                 0,          //from unknown-id
                                 m0_uk_man,  //to dof-structure
                                 0);         //to unknown-id

  //============================================= Create Field Function
  auto phi_ff = std::make_shared<chi_physics::FieldFunction>(
    "Phi",   //Text name
    sdm_ptr, //Spatial Discr.
    chi_math::Unknown(chi_math::UnknownType::VECTOR_N,num_groups) //Unknown
  );

  phi_ff->UpdateFieldVector(m0_phi);
  phi_ff->ExportToVTK("SimTest_06_WDD");

  chi::Finalize();
  return 0;
}
\endcode
*/