Chi-Tech
Coding Tutorial 1 - Poisson's problem with Finite Volume

1 Poisson's equation

Poisson's equation states the following, for $ \phi, q \in \mathcal{R} $,

\begin{eqnarray*} -\boldsymbol{\nabla} \cdot \bigr(\boldsymbol{\nabla} \phi(\mathbf{x})\bigr) = q(\mathbf{x}), \quad \quad \quad \mathbf{x}\in\mathcal{D} \\ \phi(\mathbf{x}) = 0, \quad \quad \quad \mathbf{x}\in \partial \mathcal{D} \end{eqnarray*}

where $ \boldsymbol{\nabla} \cdot \bigr( \bigr) $ denotes the divergence-operator and $ \boldsymbol{\nabla} $ denotes the gradient-operator, i.e. $ \boldsymbol{\nabla} \phi $ denotes the gradient of $ \phi $. The boundary conditions here state that $ \phi=0 $ on the boundary.

1.1 Our specific problem

For our specific problem we will choose $ q(\mathbf{x})=1 $ and $ \mathcal{D} $ a Cartesian domain, either 1D, 2D or 3D, spanning $ [-1,+1] $ in each dimension. We can generate the mesh for this problem using an input file

--############################################### Setup mesh
chiMeshHandlerCreate()

mesh={}
N=20
L=2
xmin = -L/2
dx = L/N
for i=1,(N+1) do
    k=i-1
    mesh[i] = xmin + k*dx
end

--chiMeshCreateUnpartitioned3DOrthoMesh(mesh,mesh,mesh)
chiMeshCreateUnpartitioned2DOrthoMesh(mesh,mesh)
--chiMeshCreateUnpartitioned1DOrthoMesh(mesh)
chiVolumeMesherExecute();

--############################################### Set Material IDs
chiVolumeMesherSetMatIDToAll(0)

This code can be used to generate any of the following meshes,

[From left to right] 1D, 2D, 3D orthogonal mesh

1.2 The Finite Volume spatial discretization

To apply the Finite Volume spatial discretization we integrate Poisson's equation over the volume of the cell

\begin{eqnarray*} -\int_V \boldsymbol{\nabla} \cdot \bigr(\boldsymbol{\nabla} \phi(\mathbf{x})\bigr) dV = \int_V q(\mathbf{x}) dV \end{eqnarray*}

Next we apply the approximation that both $ \phi $ and $ q $ are constant over a cell. Therefore, for cell $ P $, we have the source term

\begin{eqnarray*} \int_V q(\mathbf{x}) dV = q_P V_P. \end{eqnarray*}

For the left-hand side we first apply Gauss Divergence Theorem,

\begin{eqnarray*} \int_V \boldsymbol{\nabla} \cdot \bigr(\boldsymbol{\nabla} \phi(\mathbf{x})\bigr) dV = \int_S \mathbf{n} \cdot \bigr(\boldsymbol{\nabla} \phi(\mathbf{x})\bigr) dA \end{eqnarray*}

and for a discrete cell this surface integral can be discretized as the sum over all the faces

\begin{eqnarray*} \int_S \mathbf{n} \cdot \bigr(\boldsymbol{\nabla} \phi(\mathbf{x})\bigr) dA = \sum_f A_f \mathbf{n}_f \cdot \bigr(\boldsymbol{\nabla} \phi(\mathbf{x})\bigr)_f. \end{eqnarray*}

We now still have to resolve $ \bigr(\boldsymbol{\nabla} \phi(\mathbf{x})\bigr)_f $. To comprehend the approximation we are about to make, consider the figure below. We observe a cell $ P $, the present cell, and its neighbor, cell $ N $, at face $ f $. The cell centroids are $ \mathbf{x}_P $ and $ \mathbf{x}_N $ respectively.

We next form $ \mathbf{x}_{PN} $, as in the diagram, after which we approximate the gradient of $ \phi $ at face $ f $ as

\begin{eqnarray*} \bigr(\boldsymbol{\nabla} \phi(\mathbf{x})\bigr)_f = \biggr( \frac{\phi_N - \phi_P}{||\mathbf{x}_{PN}||} \biggr) \ \frac{\mathbf{x}_{PN}}{||\mathbf{x}_{PN}||}. \end{eqnarray*}

Note that this is essentially a scalar directional derivative $ \frac{d\phi}{ds} = \frac{\phi_N - \phi_P}{||\mathbf{x}_{PN}||} $, to which we give a direction by normalizing $ \mathbf{x}_{PN} $ as $ \frac{\mathbf{x}_{PN}}{||\mathbf{x}_{PN}||} $.

Our discretized equations now become

\begin{align*} \sum_f c_f \phi_P - \sum_f c_f \phi_N &= q_P V_P \\ c_f &= \biggr[ \mathbf{A}_f \boldsymbol{\cdot} \frac{\mathbf{x}_{PN}}{||\mathbf{x}_{PN}||^2} \biggr]_f \end{align*}

where $ \mathbf{A}_f $ is the area-vector of the face, i.e. $ A_f \mathbf{n}_f $, and $ \phi_N=0 $ when face $ f $ is on a boundary (enforcing the Dirichlet boundary condition).
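As a quick sanity check (not part of the original derivation), consider a uniform orthogonal mesh with cell width $ dx $ and unit cross-sectional area, so that $ \mathbf{x}_{PN} $ is aligned with $ \mathbf{n}_f $ and $ ||\mathbf{x}_{PN}|| = dx $. Then

\begin{align*} c_f = \mathbf{A}_f \boldsymbol{\cdot} \frac{\mathbf{x}_{PN}}{||\mathbf{x}_{PN}||^2} = \frac{A_f}{dx}, \end{align*}

and in 1D (where $ A_f = 1 $ and $ V_P = dx $) the equation for cell $ P $ becomes $ \frac{2}{dx}\phi_P - \frac{1}{dx}\phi_{N_1} - \frac{1}{dx}\phi_{N_2} = q_P \, dx $, i.e. the familiar three-point finite-difference stencil after dividing by $ dx $.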

2 Setting up the problem

In order to implement the code, we must first follow the steps in Using Chi-Tech as a library. Before proceeding, make sure you are fully acquainted with this step.

For this tutorial we will deviate from the library tutorial by first changing the cmake target and project name. Therefore, we pick a project folder and create the CMakeLists.txt file as before, but with the following lines changed.

set(TARGET code_tut1)
project(CodeTut1 CXX)

So now the entire CMakeLists.txt should look like this

cmake_minimum_required(VERSION 3.12)

set(TARGET code_tut1)
project(CodeTut1 CXX)

#------------------------------------------------ DEPENDENCIES
if (NOT DEFINED CHI_TECH_DIR)
    if (NOT (DEFINED ENV{CHI_TECH_DIR}))
        message(FATAL_ERROR "***** CHI_TECH_DIR is not set *****")
    else()
        set(CHI_TECH_DIR "$ENV{CHI_TECH_DIR}")
    endif()
endif()
message(STATUS "CHI_TECH_DIR set to ${CHI_TECH_DIR}")

include("${CHI_TECH_DIR}/resources/Macros/Downstream.cmake")

set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY "${PROJECT_SOURCE_DIR}/lib")
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY "${PROJECT_SOURCE_DIR}/lib")
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY "${PROJECT_SOURCE_DIR}/bin")

file (GLOB_RECURSE SOURCES "*.cc")
add_executable(${TARGET} "${SOURCES}")
target_link_libraries(${TARGET} ${CHI_LIBS})

file(WRITE ${PROJECT_SOURCE_DIR}/Makefile "subsystem:\n" "\t$(MAKE) -C chi_build \n\n"
     "clean:\n\t$(MAKE) -C chi_build clean\n")

Next we create the source file which we will call code_tut1.cc, and for now we will keep its contents the same as the library tutorial. Therefore code_tut1.cc should look like this

#include "chi_runtime.h"
#include "chi_log.h"
int main(int argc, char* argv[])
{
chi::Initialize(argc,argv);
chi::RunBatch(argc, argv);
chi::log.Log() << "Hello World!";
//We will add code here
chi::Finalize();
return 0;
}

To compile this program we first have to set the CHI_TECH_DIR environment variable. This might be different for everyone but should generally look like this

export CHI_TECH_DIR=/Users/drjanisawesome/codes/ChiTech/chi-tech/

Note: the directory we want to specify here must contain bin/ (in other words, it shouldn't be bin/ itself).

Next we create a directory for running cmake. Inside your project folder, create a folder called chi_build and cd into it.

mkdir chi_build
cd chi_build

Then run cmake pointing to the folder that contains the CMakeLists.txt file, which in this case is the parent folder of the one we are in.

cmake ../

After this you should be able to make the project.

make

after which you should see the following

[ 50%] Building CXX object CMakeFiles/code_tut1.dir/code_tut1.cc.o
[100%] Linking CXX executable ../bin/code_tut1
[100%] Built target code_tut1

As a test you can try to execute the project with no arguments which should look like this

../bin/code_tut1
[0]  2022-11-14 13:02:43 Running ChiTech in batch-mode with 1 processes.
[0]  ChiTech version 1.0.2
[0]  ChiTech number of arguments supplied: 0
[0]
[0]  Usage: exe inputfile [options values]
[0]
[0]       -v                         Level of verbosity. Default 0. Can be either 0, 1 or 2.
[0]       a=b                        Executes argument as a lua string. i.e. x=2 or y=[["string"]]
[0]       -allow_petsc_error_handler Allow petsc error handler.
[0]
[0]
[0]
[0]  Final program time 00:00:00
[0]  2022-11-14 13:02:43 ChiTech finished execution of
[0]  Hello World!

3 Getting the grid

First we remove the lines

chi::log.Log() << "Hello World!";
//We will add code here

Next we include the headers for access to the chi_mesh::MeshHandler and the grid, aka chi_mesh::MeshContinuum
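The exact include paths depend on your ChiTech version's directory layout; a sketch of what these includes might look like (paths assumed, following the lowercase layout used by the other includes in this tutorial) is

#include "mesh/MeshHandler/chi_meshhandler.h"     //chi_mesh::GetCurrentHandler (path assumed)
#include "mesh/MeshContinuum/chi_meshcontinuum.h" //chi_mesh::MeshContinuum (path assumed)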

Next we add the following lines of code:

chi::log.Log() << "Coding Tutorial 1";
//============================================= Get grid
const auto& grid = *grid_ptr;
chi::log.Log() << "Global num cells: " << grid.GetGlobalNumberOfCells();
chi_mesh::MeshContinuumPtr & GetGrid() const
MeshHandler & GetCurrentHandler()

This is the default way of obtaining a grid loaded into the ChiTech state at any moment, normally via an input file.

When you compile and run this code it ought to fail by throwing an exception that looks like this

terminate called after throwing an instance of 'std::logic_error'
  what():  chi_mesh::GetCurrentHandler: No handlers on stack
Abort trap: 6

This means the code was not loaded with a mesh-handler and therefore no grid is defined. To fix this we first create a mesh.lua input file within the chi_build directory.

>mesh.lua

Use an editor to add the following lines to mesh.lua

--############################################### Setup mesh
chiMeshHandlerCreate()

mesh={}
N=20
L=2
xmin = -L/2
dx = L/N
for i=1,(N+1) do
    k=i-1
    mesh[i] = xmin + k*dx
end

--chiMeshCreateUnpartitioned3DOrthoMesh(mesh,mesh,mesh)
chiMeshCreateUnpartitioned2DOrthoMesh(mesh,mesh)
--chiMeshCreateUnpartitioned1DOrthoMesh(mesh)
chiVolumeMesherExecute();

--############################################### Set Material IDs
chiVolumeMesherSetMatIDToAll(0)

This is known as a ChiTech lua-console input file. It contains Lua code.

To run the program with this input file we simply add it to the command line

../bin/code_tut1 mesh.lua

If everything went well then you should see the following

[0]  Parsing argument 1 mesh.lua
[0]  2022-11-14 13:14:52 Running ChiTech in batch-mode with 1 processes.
[0]  ChiTech version 1.0.2
[0]  ChiTech number of arguments supplied: 1
[0]  Computing cell-centroids.
[0]  Done computing cell-centroids.
[0]  Checking cell-center-to-face orientations
[0]  Done checking cell-center-to-face orientations
[0]  00:00:00 Establishing cell connectivity.
[0]  00:00:00 Vertex cell subscriptions complete.
[0]  00:00:00 Surpassing cell 40 of 400 (10%)
[0]  00:00:00 Surpassing cell 80 of 400 (20%)
\\ STUFF
\\ STUFF
\\ STUFF
[0]
[0]  Final program time 00:00:00
[0]  2022-11-14 13:14:52 ChiTech finished execution of mesh.lua
[0]  Coding Tutorial 1
[0]  Global num cells: 400

4 Initializing the Finite Volume Spatial Discretization

All spatial discretizations in ChiTech are based on a grid. Right now we want chi_math::SpatialDiscretization_FV which is the Finite Volume Spatial discretization. It will place only a single node at the centroid of each cell.

To access this discretization we must first include the header for chi_math::SpatialDiscretization_FV,

#include "math/SpatialDiscretization/FiniteVolume/fv.h"

Next we add the following lines of code

//============================================= Make SDM
typedef std::shared_ptr<chi_math::SpatialDiscretization> SDMPtr;
SDMPtr sdm_ptr = chi_math::SpatialDiscretization_FV::New(grid_ptr);
const auto& sdm = *sdm_ptr;
const auto& OneDofPerNode = sdm.UNITARY_UNKNOWN_MANAGER;
const size_t num_local_dofs = sdm.GetNumLocalDOFs(OneDofPerNode);
const size_t num_globl_dofs = sdm.GetNumGlobalDOFs(OneDofPerNode);
chi::log.Log() << "Num local DOFs: " << num_local_dofs;
chi::log.Log() << "Num globl DOFs: " << num_globl_dofs;

What we did here is to first define the shared-pointer of type chi_math::SpatialDiscretization to be just SDMPtr, which greatly reduces the effort on the next line, where we actually instantiate the spatial discretization. Notice the use of chi_math::SpatialDiscretization_FV::New. All of the ChiTech spatial discretizations can only be created as shared pointers; the need for this will only become obvious when you write your own solvers.

The creation of a spatial discretization involves a lot of steps. The first thing the spatial discretization (SD) will do is to create cell-mappings. These contain all the necessary information for each cell to help build codes, e.g., the number of nodes on a cell, its volume, face areas, etc. The second thing the SD does is to order the nodes in parallel in such a way that we can easily map the nodes either globally or locally.
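As a small illustration of what a cell-mapping provides, the sketch below queries a cell's volume and first-face area using the same calls that appear later in this tutorial; the NumNodes() call is an assumption and may be named differently in your version.

for (const auto& cell : grid.local_cells)
{
  const auto& cell_mapping = sdm.GetCellMapping(cell);

  const double volume    = cell_mapping.CellVolume(); //cell volume
  const double area_f0   = cell_mapping.FaceArea(0);  //area of the first face
  const size_t num_nodes = cell_mapping.NumNodes();   //1 for Finite Volume (name assumed)

  chi::log.Log() << "V=" << volume
                 << " A_f0=" << area_f0
                 << " nodes=" << num_nodes;
}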

The basic SD operates on the notion of nodes. A node is a place where a specific component of an unknown is located. Since it knows the underlying node structure it can provide index mappings for different unknown-structures. For example, if we stack velocity-x, velocity-y and velocity-z onto a node the SD only needs to know the structure of the unknowns, traditionally called Degrees-of-Freedom or DOFs, in order to provide index-mapping. To keep things simple, all SDs have a unitary chi_math::UnknownManager as a member called UNITARY_UNKNOWN_MANAGER. This unknown manager allows us to map indices assuming only one DOF per node. We obtained a reference to this chi_math::UnknownManager object with the line

const auto& OneDofPerNode = sdm.UNITARY_UNKNOWN_MANAGER;
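To illustrate the unknown-structure idea, the hypothetical sketch below stacks a 3-component velocity onto each node and maps one of its components. It is not used in this tutorial, and the chi_math::UnknownManager/MapDOF names and signatures shown here are assumptions that may differ between ChiTech versions.

//Hypothetical sketch -- names/signatures assumed, not part of this tutorial
chi_math::UnknownManager velocity_uk_man;
velocity_uk_man.AddUnknown(chi_math::UnknownType::VECTOR_3); //vx, vy, vz on each node

//Global index of component 2 (vz) of unknown 0 on node 0 of a given cell
const int64_t vz_map = sdm.MapDOF(cell, 0, velocity_uk_man, 0, 2);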

With OneDofPerNode in hand we can now query the SD to provide us with the number of local and global DOFs. The local DOFs are those stored on the local processor, whereas the global DOFs encompass all the DOFs across all processors. So far we have only run the code in serial; therefore, when we run it again with this new code added, we will see the following new output:

[0]  Num local DOFs: 400
[0]  Num globl DOFs: 400

Since the code runs in serial mode, the number of local and global DOFs is reported to be the same.

To run this code in parallel we change our input, for execution with 3 processors, as

mpiexec -np 3 ../bin/code_tut1 mesh.lua

which now produces a different output:

[0]  Num local DOFs: 131
[0]  Num globl DOFs: 400

Notice the global number of DOFs remains the same. Only the number of local nodes has changed.

5 Creating and Initializing PETSc matrices and vectors

ChiTech has several macro-type functions for handling PETSc objects. All of these are accessed via the chi_math::PETScUtils namespace for which we need to include the header
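The header path below is an assumption based on the lowercase layout used by the other includes in this tutorial; adjust it to your ChiTech version.

#include "math/PETScUtils/petsc_utils.h" //chi_math::PETScUtils (path assumed)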

Next we add the following code:

//============================================= Initializes Mats and Vecs
const auto n = static_cast<int64_t>(num_local_dofs);
const auto N = static_cast<int64_t>(num_globl_dofs);

Mat A;
Vec x,b;

A = chi_math::PETScUtils::CreateSquareMatrix(n,N);
x = chi_math::PETScUtils::CreateVector(n,N);
b = chi_math::PETScUtils::CreateVector(n,N);

std::vector<int64_t> nodal_nnz_in_diag;
std::vector<int64_t> nodal_nnz_off_diag;
sdm.BuildSparsityPattern(nodal_nnz_in_diag,nodal_nnz_off_diag,OneDofPerNode);

chi_math::PETScUtils::InitMatrixSparsity(A,
                                         nodal_nnz_in_diag,
                                         nodal_nnz_off_diag);

Notice that we made int64_t equivalents of num_local_dofs and its global counterpart. This is because PETSc operates on int64_t. The actual creation of A, x, and b is self explanatory noting that each needs a local-size as well as a global size.

The next two pieces of code after the creation of the PETSc entities allow us to accurately and efficiently set the matrix sparsity pattern, in-turn allowing PETSc to assemble the sparse-matrix efficiently. The sparsity pattern is defined per row and is split into entries locally stored ("in" the diagonal block) and entries stored elsewhere ("off" the diagonal block). We can get this information from our ChiTech SDs by using the BuildSparsityPattern routine. This routine populates nodal_nnz_in_diag and nodal_nnz_off_diag, according to an unknown-structure (which in our case is OneDofPerNode).
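As a concrete example (for the serial 20x20 2D mesh used above), each row corresponds to one cell and the Finite Volume stencil couples a cell only to itself and its face neighbors, so

\begin{align*} \text{interior cell:} \quad 1 + 4 &= 5 \ \text{nonzeros}, \\ \text{edge cell:} \quad 1 + 3 &= 4 \ \text{nonzeros}, \\ \text{corner cell:} \quad 1 + 2 &= 3 \ \text{nonzeros}, \end{align*}

all counted in nodal_nnz_in_diag (in a serial run nodal_nnz_off_diag is zero). In a parallel run, neighbors owned by other processors move from the "in" count to the "off" count.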

Finally we use our sparsity information to set the PETSc matrix's sparsity pattern using chi_math::PETScUtils::InitMatrixSparsity.

6 Assembling the matrix and right-hand-side

The code to assemble the matrix is as follows.

//============================================= Assemble the system
chi::log.Log() << "Assembling system: ";
for (const auto& cell : grid.local_cells)
{
  const auto& cell_mapping = sdm.GetCellMapping(cell);
  const int64_t imap = sdm.MapDOF(cell,0);

  const auto& xp = cell.centroid;
  const double V = cell_mapping.CellVolume();

  size_t f=0;
  for (const auto& face : cell.faces)
  {
    const auto Af = face.normal * cell_mapping.FaceArea(f);

    if (face.has_neighbor)
    {
      const auto& adj_cell = grid.cells[face.neighbor_id];
      const int64_t jnmap = sdm.MapDOF(adj_cell,0);

      const auto& xn = adj_cell.centroid;
      const auto xpn = xn - xp;

      const auto cf = Af.Dot(xpn) / xpn.NormSquare();

      MatSetValue(A, imap, imap , cf, ADD_VALUES);
      MatSetValue(A, imap, jnmap, -cf, ADD_VALUES);
    }
    else
    {
      const auto& xn = xp + 2.0*(face.centroid - xp);
      const auto xpn = xn - xp;

      const auto cf = Af.Dot(xpn) / xpn.NormSquare();

      MatSetValue(A, imap, imap , cf, ADD_VALUES);
    }
    ++f;
  }//for face

  VecSetValue(b, imap, 1.0*V, ADD_VALUES);
}//for cell i

chi::log.Log() << "Global assembly";
MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
VecAssemblyBegin(b);
VecAssemblyEnd(b);
chi::log.Log() << "Done global assembly";

The first item to note is the loop over local cells. ChiTech can only loop over local cells, for more information on this see More detail on Mesh data structures.

Once we have a reference to a local-cell we can now obtain the SD's mapping of that cell using

const auto& cell_mapping = sdm.GetCellMapping(cell);

We can also immediately map the cell's $ \phi_P $ global location in the parallel matrix/vector by making a call to chi_math::SpatialDiscretization::MapDOF.

const int64_t imap = sdm.MapDOF(cell,0);

The arguments of this function are: a cell reference, and the node number, which, for Finite Volume, will always be zero.

Finally, before looping over the faces, we grab the cell centroid and volume.

The loop over faces is split into two cases, one for internal faces and one for boundary faces. Both cases need the face area-vector, therefore we construct it first

for (const auto& face : cell.faces)
{
  const auto Af = face.normal * cell_mapping.FaceArea(f);
  //etc.

Whether a face is internal, i.e., has a neighboring cell, is indicated by the member has_neighbor. Additionally, the neighbor's global-id is stored in the member neighbor_id. If the face does not have a neighbor, then neighbor_id doubles as the boundary-id if boundary-ids are used. All cells that share at least a vertex with local cells are classified as ghost-cells in ChiTech. Since a neighbor can be either a local- or a ghost-cell, we opt here to get a reference to the neighbor cell using its global-id, for which we use grid.cells, instead of grid.local_cells which only takes local-ids.

const auto& adj_cell = grid.cells[face.neighbor_id];

Once we have a reference to the adjacent cell we can now map the address of its unknown $ \phi_N $ as

const int64_t jnmap = sdm.MapDOF(adj_cell,0);

We then compute

\begin{align*} c_f = \biggr[ \mathbf{A}_f \boldsymbol{\cdot} \frac{\mathbf{x}_{PN}}{||\mathbf{x}_{PN}||^2} \biggr]_f \end{align*}

with the code

const auto& xn = adj_cell.centroid;
const auto xpn = xn - xp;
const auto cf = Af.Dot(xpn) / xpn.NormSquare();

after that we set the matrix entries associated with $ \phi_P $ and $ \phi_N $ using the PETSc commands

MatSetValue(A, imap, imap , cf, ADD_VALUES);
MatSetValue(A, imap, jnmap, -cf, ADD_VALUES);

For boundary faces, we essentially do the same, however, we do not have an adjacent cell or $ \mathbf{x}_N $. Therefore we extrapolate a virtual boundary cell-centroid using the face-centroid, $ \mathbf{x}_f $, as

\[ \mathbf{x}_N = \mathbf{x}_P + 2 ( \mathbf{x}_f - \mathbf{x}_P ) \]

and then

const auto& xn = xp + 2.0*(face.centroid - xp);
const auto xpn = xn - xp;
const auto cf = Af.Dot(xpn) / xpn.NormSquare();

Finally, since there is no neighbor cell entry, we only have a diagonal entry

MatSetValue(A, imap, imap , cf, ADD_VALUES);
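For the uniform orthogonal mesh considered earlier, this boundary treatment yields the same coefficient as an interior face: the virtual centroid lies a full cell-width beyond $ \mathbf{x}_P $, so $ ||\mathbf{x}_{PN}|| = dx $ and

\begin{align*} c_f = \mathbf{A}_f \boldsymbol{\cdot} \frac{\mathbf{x}_{PN}}{||\mathbf{x}_{PN}||^2} = \frac{A_f}{dx}, \end{align*}

while the omitted $ -c_f \phi_N $ contribution implicitly enforces $ \phi_N = 0 $ at the mirrored location.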

After the face loops we then set the right-hand side using

VecSetValue(b, imap, 1.0*V, ADD_VALUES);

where the 1.0 represents $ q(\mathbf{x})=1 $.

We finish parallel assembly with the following calls

MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
VecAssemblyBegin(b);
VecAssemblyEnd(b);

7 Solving the system with a Krylov Subspace solver

Most of the following code is self explanatory.

//============================================= Create Krylov Solver
chi::log.Log() << "Solving: ";
auto petsc_solver =
  chi_math::PETScUtils::CreateCommonKrylovSolverSetup(
    A,               //Matrix
    "FVDiffSolver",  //Solver name
    KSPCG,           //Solver type
    PCGAMG,          //Preconditioner type
    1.0e-6,          //Relative residual tolerance
    1000);           //Max iterations

//============================================= Solve
KSPSolve(petsc_solver.ksp,b,x);

chi::log.Log() << "Done solving";

The solver type here was specified as KSPCG, which represents the conjugate-gradient method. The preconditioner PCGAMG represents geometric and/or algebraic multigrid. The combination of this solver and preconditioner is known to be one of the most efficient ways to solve these types of systems.
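If you want to confirm that the solver actually converged, you can optionally query PETSc directly after the solve. The snippet below is not part of the original tutorial; it only uses standard PETSc calls.

//Optional convergence diagnostics (standard PETSc API)
KSPConvergedReason reason;
KSPGetConvergedReason(petsc_solver.ksp, &reason);

PetscInt its = 0;
KSPGetIterationNumber(petsc_solver.ksp, &its);

chi::log.Log() << "KSP converged reason: " << reason
               << ", iterations: " << its;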

After this solve-step we want to get an STL representation of the PETSc vector so that we can think about visualizing the solution. We can get a local copy of the PETSc vector using the code

//============================================= Extract PETSc vector
std::vector<double> field(num_local_dofs, 0.0);
sdm.LocalizePETScVector(x,field,OneDofPerNode);
//============================================= Clean up
KSPDestroy(&petsc_solver.ksp);
VecDestroy(&x);
VecDestroy(&b);
MatDestroy(&A);
chi::log.Log() << "Done cleanup";

Here the SD took care of appropriately copying from the PETSc vector.

After this is completed we have no need for any of the PETSc entities and therefore we destroy them.

8 Visualizing the solution

ChiTech uses the notion of FieldFunctions and the Visualization Toolkit, VTK, to visualize solutions. In order to gain access to chi_physics::FieldFunction we need to include the header

#include "physics/FieldFunction/fieldfunction2.h"

Next we create the field function using the code

//============================================= Create Field Function
auto ff = std::make_shared<chi_physics::FieldFunction>(
  "Phi",                                            //Text name
  sdm_ptr,                                          //Spatial Discretization
  chi_math::Unknown(chi_math::UnknownType::SCALAR)  //Unknown structure
);

Note here that a field function simply needs a name, a SD, and an unknown structure.

After this is created we can update the field function with a field vector

ff->UpdateFieldVector(field);

And finally we can export the solution to VTK with

ff->ExportToVTK("CodeTut1_FV");

which takes, as an argument, the base-name of the VTK output files. VTK will utilize the parallel nature of the simulation to generate a set of files

CodeTut1_FV.pvtu
CodeTut1_FV_0.vtu
CodeTut1_FV_1.vtu
CodeTut1_FV_2.vtu

The structure here is that each processor writes a .vtu file. The processor-id is appended to the base-name before the .vtu extension is added. These files contain the actual data. An additional proxy-file, used to link all the data together, is written as the base-name with .pvtu appended to it. This .pvtu file can be opened in popular visualization tools such as VisIt and ParaView to visualize the solution.

For this tutorial we used ParaView.

[From left to right] 1D, 2D, 3D solutions with a coarse mesh

And on a finer mesh, with the 3D case being 1 million cells:

[From left to right] 1D, 2D, 3D solutions with a fine mesh

The complete program

#include "chi_runtime.h"
#include "chi_log.h"
#include "math/SpatialDiscretization/FiniteVolume/fv.h"
#include "physics/FieldFunction/fieldfunction2.h"
int main(int argc, char* argv[])
{
chi::Initialize(argc,argv);
chi::RunBatch(argc, argv);
chi::log.Log() << "Coding Tutorial 1";
//============================================= Get grid
auto grid_ptr = chi_mesh::GetCurrentHandler().GetGrid();
const auto& grid = *grid_ptr;
chi::log.Log() << "Global num cells: " << grid.GetGlobalNumberOfCells();
//============================================= Make SDM
typedef std::shared_ptr<chi_math::SpatialDiscretization> SDMPtr;
SDMPtr sdm_ptr = chi_math::SpatialDiscretization_FV::New(grid_ptr);
const auto& sdm = *sdm_ptr;
const auto& OneDofPerNode = sdm.UNITARY_UNKNOWN_MANAGER;
const size_t num_local_dofs = sdm.GetNumLocalDOFs(OneDofPerNode);
const size_t num_globl_dofs = sdm.GetNumGlobalDOFs(OneDofPerNode);
chi::log.Log() << "Num local DOFs: " << num_local_dofs;
chi::log.Log() << "Num globl DOFs: " << num_globl_dofs;
//============================================= Initializes Mats and Vecs
const auto n = static_cast<int64_t>(num_local_dofs);
const auto N = static_cast<int64_t>(num_globl_dofs);
Mat A;
Vec x,b;
std::vector<int64_t> nodal_nnz_in_diag;
std::vector<int64_t> nodal_nnz_off_diag;
sdm.BuildSparsityPattern(nodal_nnz_in_diag,nodal_nnz_off_diag,OneDofPerNode);
nodal_nnz_in_diag,
nodal_nnz_off_diag);
//============================================= Assemble the system
chi::log.Log() << "Assembling system: ";
for (const auto& cell : grid.local_cells)
{
const auto& cell_mapping = sdm.GetCellMapping(cell);
const int64_t imap = sdm.MapDOF(cell,0);
const auto& xp = cell.centroid;
const double V = cell_mapping.CellVolume();
size_t f=0;
for (const auto& face : cell.faces)
{
const auto Af = face.normal * cell_mapping.FaceArea(f);
if (face.has_neighbor)
{
const auto& adj_cell = grid.cells[face.neighbor_id];
const int64_t jnmap = sdm.MapDOF(adj_cell,0);
const auto& xn = adj_cell.centroid;
const auto xpn = xn - xp;
const auto cf = Af.Dot(xpn) / xpn.NormSquare();
MatSetValue(A, imap, imap , cf, ADD_VALUES);
MatSetValue(A, imap, jnmap, -cf, ADD_VALUES);
}
else
{
const auto& xn = xp + 2.0*(face.centroid - xp);
const auto xpn = xn - xp;
const auto cf = Af.Dot(xpn) / xpn.NormSquare();
MatSetValue(A, imap, imap , cf, ADD_VALUES);
}
++f;
}//for face
VecSetValue(b, imap, 1.0*V, ADD_VALUES);
}//for cell i
chi::log.Log() << "Global assembly";
MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
VecAssemblyBegin(b);
VecAssemblyEnd(b);
chi::log.Log() << "Done global assembly";
//============================================= Create Krylov Solver
chi::log.Log() << "Solving: ";
auto petsc_solver =
A, //Matrix
"FVDiffSolver", //Solver name
KSPCG, //Solver type
PCGAMG, //Preconditioner type
1.0e-6, //Relative residual tolerance
1000); //Max iterations
//============================================= Solve
KSPSolve(petsc_solver.ksp,b,x);
chi::log.Log() << "Done solving";
//============================================= Extract PETSc vector
std::vector<double> field(num_local_dofs, 0.0);
sdm.LocalizePETScVector(x,field,OneDofPerNode);
//============================================= Clean up
KSPDestroy(&petsc_solver.ksp);
VecDestroy(&x);
VecDestroy(&b);
MatDestroy(&A);
chi::log.Log() << "Done cleanup";
//============================================= Create Field Function
auto ff = std::make_shared<chi_physics::FieldFunction>(
"Phi",
sdm_ptr,
);
ff->UpdateFieldVector(field);
ff->ExportToVTK("CodeTut1_FV");
chi::Finalize();
return 0;
}