1. Installation
1.1. Conan Configuration
We use the C++ package manager conan to install the dependencies of this library. conan is a python3 package and can be installed via the python package manager pip3. To install conan, open a terminal and use the command
> pip3 install --user conan
to make a user-wide installation. In case of a user-wide installation you might need to add ${HOME}/.local/bin to your PATH environment variable.
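For bash-like shells this can be achieved, for example, by adding the following line to your ~/.bashrc (a minimal sketch; adjust it to your shell):

> export PATH="${HOME}/.local/bin:${PATH}"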
If you have root access on your machine you can also consider doing a system-wide installation via the command
> sudo pip3 install conan
To test whether conan is installed, type conan in your command line window.
Expected Output:
> conan
Consumer commands
  install    Installs the requirements specified in a recipe (conanfile.py or conanfile.txt).
  config     Manages Conan configuration.
  get        Gets a file or list a directory of a given reference or package.
  info       Gets information about the dependency graph of a recipe.
  search     Searches package recipes and binaries in the local cache or in a remote.
Creator commands
  new        Creates a new package recipe template with a 'conanfile.py' and optionally, 'test_package' testing files.
  create     Builds a binary package for a recipe (conanfile.py).
  upload     Uploads a recipe and binary packages to a remote.
  export     Copies the recipe (conanfile.py & associated files) to your local cache.
  export-pkg Exports a recipe, then creates a package from local source and build folders.
  test       Tests a package consuming it from a conanfile.py with a test() method.
Package development commands
  source     Calls your local conanfile.py 'source()' method.
  build      Calls your local conanfile.py 'build()' method.
  package    Calls your local conanfile.py 'package()' method.
  editable   Manages editable packages (package that resides in the user workspace, but are consumed as if they were in the cache).
  workspace  Manages a workspace (a set of packages consumed from the user workspace that belongs to the same project).
Misc commands
  profile    Lists profiles in the '.conan/profiles' folder, or shows profile details.
  remote     Manages the remote list and the package recipes associated to a remote.
  user       Authenticates against a remote with user/pass, caching the auth token.
  imports    Calls your local conanfile.py or conanfile.txt 'imports' method.
  copy       Copies conan recipes and packages to another user/channel.
  remove     Removes packages or binaries matching pattern from local cache or remote.
  alias      Creates and exports an 'alias package recipe'.
  download   Downloads recipe and binaries to the local cache, without using settings.
  inspect    Displays conanfile attributes, like name, version and options. Works locally, in local cache and remote.
  help       Shows help for a specific command.
  graph      Generates and manipulates lock files.

Conan commands. Type "conan <command> -h" for help
As a first step, you have to create a conan profile, which describes which toolchain you want to use. To create an auto-detected toolchain use the command
> conan profile new default --detect
In the case of GCC make sure to use the C++11 ABI. Unfortunately, conan does not choose this by default and we have to adjust the configuration:

> conan profile update settings.compiler.libcxx=libstdc++11 default
> conan profile show default
Configuration for profile default:
[settings]
os=Linux
os_build=Linux
arch=x86_64
arch_build=x86_64
compiler=gcc
compiler.version=9
compiler.libcxx=libstdc++11
build_type=Release
[options]
[build_requires]
[env]
Note: The library needs a minimum GCC version of 7 or a minimum LLVM clang version of 5. We regularly test GCC version 9 and clang version 8 in our CI at GitLab.
We added some custom installation recipes which conan can use to install dependencies like [AMReX] or [HDF5]. These are stored in a conan repository and we need to point conan to this repository. This is done via the command lines

> conan remote add gitlab https://git.imp.fu-berlin.de/api/v4/packages/conan
> conan user <username> -r gitlab -p <api-key>

Hereby, the access token can be created via the user symbol on the top right of the GitLab interface, Settings->Access Tokens (check api).
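You can verify that the remote has been added by listing all configured remotes:

> conan remote list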
1.2. MPI Installation
Make sure to have some MPI implementation on your system such that your current conan profile can link against it. This can be achieved by adapting the compiler option of your profile.
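Assuming your MPI installation provides the usual wrapper commands, you can check which compiler and MPI version would be picked up via

> mpicc --version
> mpiexec --version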
1.3. CMake Installation
CMake is a build system generator and produces build configurations such as Makefiles, for example. Before we start installing all dependencies with conan we need to install a rather new version of CMake: AMReX requires a minimal CMake version of 3.14. Fortunately, CMake is quite simple to download and binary packages can be found here.
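After installation you can verify the version, which should report at least 3.14:

> cmake --version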
1.4. Building the Library
First use git and clone the library to a local directory
> git clone git@git.imp.fu-berlin.de:ag-klein/FiniteVolumeSolver.git
If you want to build the unit tests you need to pull Catch2 as a git submodule. Enter the source directory and call
> cd FiniteVolumeSolver/
./FiniteVolumeSolver> git submodule update --init
The git clone command checks out the develop branch by default and creates a folder with the relative path ./FiniteVolumeSolver. Next, we create an out-of-source build directory where we build the library archives and example binaries.
./FiniteVolumeSolver> mkdir build
./FiniteVolumeSolver> cd build
Inside the build directory we use conan to install the dependencies with the options which we want to use. AMReX, for example, has the following configurable build options:

AMReX:eb  = True|False [True]  # enable embedded boundaries / cut-cells
AMReX:omp = True|False [True]  # enable OpenMP parallelization
AMReX:dim = 1|2|3      [3]     # spatial dimension used by AMReX
To install [AMReX] with embedded boundaries and without OpenMP support (on Apple, for example, there is no OpenMP support), run the following within the build directory
./FiniteVolumeSolver/build> conan install <Path-to-FiniteVolumeSolver-Source-Dir> -o AMReX:dim=2 -o AMReX:omp=False
In our case
./FiniteVolumeSolver/build> conan install ../ -o AMReX:dim=2 -o AMReX:omp=False
This will look into the file FiniteVolumeSolver/conanfile.txt and try to install all dependencies which are listed there locally. After installing these, conan creates a conanfile.cmake in the build directory which will be read by our CMakeLists.txt file. This, in turn, injects all necessary include and library paths which we need to build our application. Now we use cmake to configure our specific build, i.e.
./FiniteVolumeSolver/build> cmake ../
to configure a Debug build, or
./FiniteVolumeSolver/build> cmake ../ -DCMAKE_BUILD_TYPE=Release
for a Release build. On Linux and macOS this will create Makefiles by default which you can invoke by typing
./FiniteVolumeSolver/build> make
which should build all targets. For other systems, a more general call would be
./FiniteVolumeSolver/build> cmake --build .
1.4.1. Useful build flags
If you want to link against another python version than the default one you can pass

-DPYBIND11_PYTHON_VERSION=3.8

or directly specify the python executable via

-DPYTHON_EXECUTABLE=/path/to/python
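For example, a Release configuration with an explicitly chosen python executable could look like this (a sketch; adjust the path to your system):

./FiniteVolumeSolver/build> cmake ../ -DCMAKE_BUILD_TYPE=Release -DPYTHON_EXECUTABLE=/path/to/python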
If some targets fail to build feel free to raise an issue in GitLab.
2. Code Overview
We use a layered library design to achieve separation of concerns.
One of the main goals is to have reusable implementations for algorithms which are independent of the concrete storage backend.
We use namespaces to indicate the dependency on various structured grid libraries such as AMReX or SAMRAI.
All components inside of the namespace fub are independent of the concrete AMR backend. Classes and functions of the namespaces fub::amrex or fub::samrai depend on their respective libraries.
This library provides tools to write driver files which form complete applications. Each driver makes its own choices on how to handle program options, input files or its output.
You can find many example applications in the examples/ subfolder within the Git project.
Every example composes these layers one by one until a solver is built and run.
An application needs to build up its layers in the following order:
PatchHierarchy (data storage)
GriddingAlgorithm (algorithmic choice)
IntegratorContext (data storage / algorithmic choice)
one or more composed LevelIntegrators (algorithmic choice)
SubcycleSolver (algorithmic choice)
Note: Each layer in this list is implemented in terms of its predecessor, as illustrated in the sketch below.
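The following minimal sketch illustrates this layering order with the AMReX backend. It reuses names from the examples later in this document (equation, geometry, hier_opts, CircleData, gradient, method); the exact constructor signatures may differ in detail.

// (1) data storage
fub::amrex::PatchHierarchy hierarchy(equation, geometry, hier_opts);
// (2) algorithmic choice
auto gridding = std::make_shared<fub::amrex::GriddingAlgorithm>(
    hierarchy, CircleData{}, gradient);
// (3) data storage / algorithmic choice
fub::amrex::IntegratorContext context(gridding, method);
// (4) a composed level integrator
fub::DimensionalSplitLevelIntegrator level_integrator(
    std::move(context), fub::GodunovSplitting{});
// (5) subcycle solver
fub::SubcycleFineFirstSolver solver(std::move(level_integrator));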
2.1. Policy Pattern
To provide customization points for the many algorithmic choices in an AMR-enabled simulation we make heavy use of the so-called policy pattern.
Wikipedia: In computer programming, the strategy pattern (also known as the policy pattern) is a behavioral software design pattern that enables selecting an algorithm at runtime. Instead of implementing a single algorithm directly, code receives run-time instructions as to which in a family of algorithms to use.
For each algorithmic customization point, we define a concept, which is simply a set of syntactic requirements. Any type which satisfies a particular policy concept can be used as a drop-in replacement for existing algorithms to adjust the method for your special needs. The customization points are chosen to be orthogonal, which enables developers to concentrate freely on a single aspect of the method.
Example: Any type that provides a member function UpdateConservatively with a suitable function signature satisfies the policy concept TimeIntegrator<IntegratorContext>.
This leads to a parametrization of the algorithm in terms of objects.
One could, in the case of a TimeIntegrator for example, have multiple implementations for different parallelization strategies, as in the following code snippet:

fub::amrex::HyperbolicMethod method{};
if (is_gpu_enabled) {
  method.time_integrator = fub::amrex::EulerForwardTimeIntegrator(fub::execution::gpu);
} else {
  method.time_integrator = fub::amrex::EulerForwardTimeIntegrator(fub::execution::openmp);
}
The notion of concepts will be part of the C++ language as of the C++20 standard. With compilers which already support C++20 concepts, this type requirement can be expressed in code as

template <typename TI, typename IntegratorContext>
concept TimeIntegrator =
    std::copy_assignable<TI> &&
    requires(TI& time_integrator, IntegratorContext& context, int level,
             Duration dt, Direction dir) {
      { time_integrator.UpdateConservatively(context, level, dt, dir) };
    };
2.2. Data Storage: PatchHierarchy
A PatchHierarchy allocates simulation data for each refinement level on a PatchLevel and can either be generated by a GriddingAlgorithm or be initialized from a checkpoint to restart a simulation. Data on a patch hierarchy represents a valid spatial state at some time point and does not contain a ghost cell layer.
The allocated data can be associated with cell-, node- or face-centered locations. The required data allocation is described by an object of type DataDescription, which is defined as

struct DataDescription {
  int n_cell_components{0};
  int n_node_components{0};
  int n_face_components{0};
  int dimension{AMREX_SPACEDIM};
};
Note: A patch hierarchy does not know what the components represent. It does not have an equation object as context and does not need it. Its only responsibility is to manage a certain amount of data over multiple levels and distributed over MPI ranks.
Note: [AMReX] very often represents hierarchies as pairs of (Vector<MultiFab*>, Vector<Geometry>), where the vectors range over the number of refinement levels. One can interpret a PatchHierarchy as an equivalent representation.
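For illustration, a hypothetical DataDescription for a two-dimensional setup with four cell-centered components and no node- or face-centered data might look like this (a sketch only; the component counts depend on the equation at hand):

DataDescription desc{};
desc.n_cell_components = 4; // e.g. density, two momentum components, energy
desc.n_node_components = 0;
desc.n_face_components = 0;
desc.dimension = 2;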
2.3. Algorithmic Choice: GriddingAlgorithm
A gridding algorithm adds some algorithmic choices to the patch hierarchy which are needed for the box generation algorithms in the context of adaptive mesh refinement.
The present gridding algorithms use the AmrCore
class from the [AMReX] library to generate a distributed hierarchy of index boxes.
It owns a patch hierarchy and initializes or modifies it.
It uses a TaggingMethod policy object to mask cells that need additional refinement. In addition to the TaggingMethod policy, a GriddingAlgorithm also needs boundary and initial conditions, given by BoundaryCondition and InitialData policy objects.
The boundary conditions are used in communication routines to fill the ghost cell layer touching the computational domain, whereas the initial conditions are only used once at the beginning of a simulation.
#include "fub/AMReX.hpp"#include "fub/Solver.hpp"struct CircleData { void InitializeData ( fub :: amrex :: PatchLevel & patch_level , const fub :: amrex :: GriddingAlgorithm & grid , int level , fub :: Duration /*time*/ ) const { const amrex :: Geometry & geom = grid . GetPatchHierarchy (). GetGeometry ( level ); amrex :: MultiFab & data = patch_level . data ; fub :: amrex :: ForEachFab ( data , [ & ]( const amrex :: MFIter & mfi ) { const :: amrex :: Box & box = mfi . tilebox (); amrex :: FArrayBox & fab = data [ mfi ]; fub :: amrex :: ForEachIndex ( box , [ & ]( int i , int j ) { const double x = geom . CellCenter ( i , 0 ); const double y = geom . CellCenter ( j , 1 ); const double norm2 = x * x + y * y ; constexpr double r2 = 0.25 * 0.25 ; amrex :: IntVect iv ( i , j ); if ( norm2 < r2 ) { fab ( iv , 0 ) = 3.0 ; } else { fab ( iv , 0 ) = 1.0 ; } }); }); } }; int main () { fub :: amrex :: ScopeGuard guard {}; constexpr int Dim = AMREX_SPACEDIM ; static_assert ( AMREX_SPACEDIM == 2 ); const std :: array < int , Dim > n_cells { 128 , 128 }; const std :: array < double , Dim > xlower { - 1.0 , - 1.0 }; const std :: array < double , Dim > xupper { + 1.0 , + 1.0 }; fub :: Advection2d equation {{ 1.0 , 1.0 }}; fub :: amrex :: CartesianGridGeometry geometry ; geometry . cell_dimensions = n_cells ; geometry . coordinates = amrex :: RealBox ( xlower , xupper ); geometry . periodicity = std :: array < int , Dim > { 1 , 1 }; fub :: amrex :: PatchHierarchyOptions hier_opts {}; hier_opts . max_number_of_levels = 4 ; fub :: amrex :: PatchHierarchy hierarchy ( equation , geometry , hier_opts ); using State = fub :: Advection2d :: Complete ; fub :: amrex :: GradientDetector gradient { equation , std :: pair { & State :: mass , 1e-2 }}; auto gridding = std :: make_shared < fub :: amrex :: GriddingAlgorithm > ( hierarchy , CircleData {}, gradient ); gridding -> InitializeHierarchy ( 0.0 ); fub :: amrex :: WritePlotFile ( "InitialHierarchy/plt00000" , gridding -> GetPatchHierarchy (), equation ); }
This example produces a plotfile of the initial hierarchy, which can be visualized either in VisIt, Paraview or python-yt.

[Figure: initial hierarchy produced by the CircleData example]
Copying a GriddingAlgorithm object will deeply copy its data and its policy objects on each MPI rank. This is useful if one wants to make a backup of the current grid state.
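For example, a backup-and-restore could look like this minimal sketch, relying only on the deep-copy semantics stated above (gridding is the shared pointer from the example above):

// Deep-copy the current grid state on each MPI rank.
fub::amrex::GriddingAlgorithm backup = *gridding;
// ... attempt a step which might be rejected ...
*gridding = backup; // roll back to the saved state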
The GriddingAlgorithm provides three customization points. Their concepts InitialData<GriddingAlgorithm>, TaggingMethod<GriddingAlgorithm> and BoundaryCondition<GriddingAlgorithm> are defined as
template <typename I, typename GriddingAlgorithm>
concept InitialData =
    std::copy_assignable<I> &&
    requires(const I& initial_data, const GriddingAlgorithm& grid, int level) {
      { initial_data.InitializeData(grid, level) };
    };

template <typename T, typename GriddingAlgorithm>
concept TaggingMethod =
    std::copy_assignable<T> &&
    requires(const T& tagging_method, ::amrex::TagBoxArray& tags,
             const GriddingAlgorithm& grid, int level) {
      { tagging_method.TagCellsForRefinement(tags, grid, level) };
    };

template <typename BC, typename GriddingAlgorithm>
concept BoundaryCondition =
    std::copy_assignable<BC> &&
    requires(const BC& boundary_condition, ::amrex::MultiFab& data,
             const GriddingAlgorithm& grid, int level) {
      { boundary_condition.FillBoundary(data, grid, level) };
    };
You find the polymorphic value types AnyInitialData<GriddingAlgorithm>, AnyTaggingMethod<GriddingAlgorithm> and AnyBoundaryCondition<GriddingAlgorithm> in the Finite Volume Solver library to match each one of the concepts. These polymorphic types are used to store any object of a class that satisfies the respective concept.
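For example, a conforming class such as CircleData from the snippet above can be stored in the matching polymorphic type (a sketch; it only assumes that the Any... types are constructible from conforming objects, as stated above):

// Stores any object whose class satisfies InitialData<GriddingAlgorithm>.
AnyInitialData<fub::amrex::GriddingAlgorithm> initial_data = CircleData{};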
fub::amrex::GriddingAlgorithm is internally defined as something like

class GriddingAlgorithm : private ::amrex::AmrCore {
private:
  PatchHierarchy hierarchy_;
  AnyInitialData initial_data_;
  AnyTaggingMethod tagging_method_;
  AnyBoundaryCondition boundary_condition_;

public:
  /* etc... */
};
2.4. Data Storage: IntegratorContext
An IntegratorContext allocates, manages and provides access to additional data that is needed for every conservative AMR scheme, such as cell-centered and face-centered data arrays with ghost layers. It also manages face-centered data on the coarse-fine interface between two refinement levels.
Example: The following class sketches an implementation which satisfies the TimeIntegrator<fub::amrex::IntegratorContext> concept.

class GpuTimeIntegrator {
public:
  /// \brief This function computes a conservative time update for cells in
  /// direction dir
  ///
  /// This specific implementation uses the AMReX tools to do the job
  void UpdateConservatively(IntegratorContext& context, int level, Duration dt,
                            Direction dir) {
    using namespace amrex;
    MultiFab& cells = context.GetScratch(level);
    const MultiFab& fluxes = context.GetFluxes(level, dir);
    const int n_conservative_variables = context.GetNConservativeVariables();
    const double dx = context.GetCellSize(level, dir);
    const double lambda = dt.count() / dx;
#ifdef _OPENMP
#pragma omp parallel if (Gpu::notInLaunchRegion())
#endif
    for (MFIter mfi(cells, TilingIfNotGPU()); mfi.isValid(); ++mfi) {
      FArrayBox& qs = cells[mfi];
      const FArrayBox& fs = fluxes[mfi];
      const Box cell_box = mfi.growntilebox();
      Array4<double> q = qs.array();
      Array4<const double> f = fs.array();
      ParallelFor(cell_box, n_conservative_variables,
                  [=] AMREX_GPU_DEVICE(int i, int j, int k, int var) {
                    std::array<int, 3> iv{i, j, k};
                    std::array<int, 3> iL = iv;
                    std::array<int, 3> iR = Shift(iv, dir, 1);
                    q(iv, var) = q(iv, var) + lambda * (f(iL, var) - f(iR, var));
                  });
    }
  }
};
In addition to managing the additional data, every integrator context defines a comprehensive list of functions to deal with the various data locations. The IntegratorContext concept is defined by

template <typename I>
concept IntegratorContext =
    std::copy_assignable<I> &&
    requires(I& integrator_context, int level, Duration dt, Direction dir,
             double scale, BoundaryCondition& bc) {
      // Access PatchLevel data
      { integrator_context.GetGriddingAlgorithm() };
      { integrator_context.GetData(level) };
      { integrator_context.GetScratch(level) };
      { integrator_context.GetFluxes(level, dir) };
      // Coarse-fine interface interaction
      { integrator_context.ApplyFluxCorrection(level, level - 1) };
      { integrator_context.ResetCoarseFineFluxes(level, level - 1) };
      { integrator_context.AccumulateCoarseFineFluxes(level, scale, dir) };
      // Action on scratch
      { integrator_context.CoarsenConservatively(level, level - 1) };
      { integrator_context.CompleteFromCons(level) };
      // Copy data
      { integrator_context.CopyDataToScratch(level) };
      { integrator_context.CopyScratchToData(level) };
      // Fill ghost layers within scratch
      { integrator_context.FillGhostLayerTwoLevels(level, bc, level - 1, bc) };
      { integrator_context.FillGhostLayerTwoLevels(level, level - 1) };
      { integrator_context.FillGhostLayerSingleLevel(level, bc) };
      { integrator_context.FillGhostLayerSingleLevel(level) };
      // Reallocate internal data from level
      { integrator_context.ResetHierarchyConfiguration(level) };
    };
This is the minimal set of functions needed to build common conservative AMR schemes on top of an AMR library like [AMReX] or [SAMRAI]. The implementation of an IntegratorContext will, in general, be very library-specific. The layers after the IntegratorContext describe parts of the AMR time integration which are implemented in terms of these concepts only. Nevertheless, we refine this concept further to better fit the dimensionally split time integration.
template <typename I>
concept DimensionalSplitIntegratorContext =
    IntegratorContext<I> &&
    requires(I& integrator_context, int level, Duration dt, Direction dir,
             double scale) {
      { integrator_context.ComputeNumericFluxes(level, dt, dir) };
      { integrator_context.ComputeStableDt(level, dir) };
      { integrator_context.UpdateConservatively(level, dir) };
    };
Each integrator context which is currently implemented satisfies the DimensionalSplitIntegratorContext concept. The functions ComputeStableDt, ComputeNumericFluxes, UpdateConservatively and CompleteFromCons are customization points for those implementations, i.e. one needs to provide objects for the policy concepts FluxMethod<IntegratorContext>, TimeIntegrator<IntegratorContext> and CompleteFromConsCalculation<IntegratorContext> to define the behaviour of those functions.
The list of the currently implemented integrator contexts is
fub::amrex::IntegratorContext
fub::amrex::CompressibleAdvectionIntegratorContext
fub::amrex::cutcell::IntegratorContext
fub::amrex::MultiBlockIntegratorContext
fub::samrai::IntegratorContext
The MultiBlockIntegratorContext class is special because it is composed of two vectors, std::vector<fub::amrex::IntegratorContext> and std::vector<fub::amrex::cutcell::IntegratorContext>, where each integrator context describes a computational domain. Instead of defining policy objects for the composed context, one provides them for each member context.
2.5. Algorithmic Choice: IntegratorContext
We now describe the customization points for an integrator context.
Its concepts FluxMethod<IntegratorContext>, TimeIntegrator<IntegratorContext> and CompleteFromConsCalculation<IntegratorContext> are defined as
template <typename FM, typename IntegratorContext>
concept FluxMethod =
    std::copy_assignable<FM> &&
    requires(FM& flux_method, IntegratorContext& context, int level,
             Duration dt, Direction dir) {
      { flux_method.ComputeNumericFlux(context, level, dt, dir) };
      { flux_method.ComputeStableDt(context, level, dir) } -> Duration;
      { FM::GetStencilSize() } -> std::integral_constant<int, StencilSize>;
    };

template <typename TI, typename IntegratorContext>
concept TimeIntegrator =
    std::copy_assignable<TI> &&
    requires(TI& time_integrator, IntegratorContext& context, int level,
             Duration dt, Direction dir) {
      { time_integrator.UpdateConservatively(context, level, dt, dir) };
    };

template <typename R, typename IntegratorContext>
concept CompleteFromConsCalculation =
    std::copy_assignable<R> &&
    requires(R& reconstruct, IntegratorContext& context, int level) {
      { reconstruct.CompleteFromCons(context, level) };
    };
The member function ComputeStableDt shall return a time step size \( \Delta t \) such that the CFL condition for the time integration is satisfied. ComputeNumericFluxes shall fill the face-centered flux arrays in the specified direction, which will be used by the TimeIntegrator to update the conservative state variables. The TimeIntegrator provides the usual conservative scheme for cell-based time updates
\[ u^{n+1}_i = u^n_i + \frac{\Delta t}{\Delta x} \left( F^n_{i - \frac 12} - F^n_{i + \frac 12} \right). \]
The numerical fluxes \( F^n_{i \pm \frac 12} \) in this update denote the fluxes on the faces in the specified direction and are computed by the FluxMethod which is configured for this integrator context. An AMR integration scheme may use the CompleteFromConsCalculation policy object whenever it changes the conservative variables, to establish a valid state across all fields in the patch hierarchy.
We collect the three customization points for the IntegratorContext in a class which we call HyperbolicMethod<IntegratorContext>. It is defined as
template <typename IntegratorContext>
struct HyperbolicMethod {
  AnyFluxMethod<IntegratorContext> flux_method;
  AnyTimeIntegrator<IntegratorContext> time_integrator;
  AnyCompleteFromConsCalculation<IntegratorContext> reconstruct;
};
To construct an IntegratorContext object one needs to provide a GriddingAlgorithm and a HyperbolicMethod.
The following code snippet shows how a concrete integrator context might be constructed in a driver file.

void MyMain(const fub::ProgramOptions& opts) {
  std::chrono::steady_clock::time_point wall_time_reference =
      std::chrono::steady_clock::now();
  fub::amrex::ScopeGuard guard{};

  fub::Advection2d equation{{1.0, 1.0}};

  fub::SeverityLogger log = fub::GetInfoLogger();
  fub::amrex::CartesianGridGeometry geometry =
      fub::GetOptions(opts, "GridGeometry");
  BOOST_LOG(log) << "GridGeometry:";
  geometry.Print(log); // method name was garbled in the original text; Print is assumed

  fub::amrex::PatchHierarchyOptions hier_opts =
      fub::GetOptions(opts, "PatchHierarchy");
  BOOST_LOG(log) << "PatchHierarchy:";
  hier_opts.Print(log);

  using State = fub::Advection2d::Complete;
  fub::amrex::GradientDetector gradient{equation,
                                        std::pair{&State::mass, 1e-3}};

  std::shared_ptr grid = std::make_shared<fub::amrex::GriddingAlgorithm>(
      fub::amrex::PatchHierarchy(equation, geometry, hier_opts), CircleData{},
      fub::amrex::TagAllOf(gradient, fub::amrex::TagBuffer(2)));
  grid->InitializeHierarchy(0.0);

  fub::amrex::HyperbolicMethod method{
      fub::amrex::FluxMethodAdapter(fub::GodunovMethod{equation}),
      fub::amrex::EulerForwardTimeIntegrator(),
      fub::amrex::NoReconstruction{}};

  fub::amrex::IntegratorContext context(grid, method);
  // Now further use the context to do some things...
}
In addition to computing the mathematically correct formula, an object that satisfies FluxMethod<IntegratorContext> needs to repeat the logic of how to parallelize and how to iterate over the multi-dimensional index space. We recognize that these are two orthogonal concerns which are both hard to get right.
To ease the burden for authors of a FluxMethod we introduced some helper classes which separate these concerns. They will be discussed in more detail in Chapter 3.
2.6. Algorithmic Choice: LevelIntegrator
Level integrators advance the solution in time on a single refinement level only.
They will be used by larger subcycle schemes which may, in general, do smaller time steps on finer refinement levels.
We define the concept LevelIntegrator such that it provides the two functions AdvanceLevelNonRecursively and ComputeStableDt.
template <typename L>
concept LevelIntegrator =
    std::copy_assignable<L> &&
    requires(const L& level_integrator, int level, Duration dt,
             std::pair<int, int> subcycle) {
      { level_integrator.AdvanceLevelNonRecursively(level, dt, subcycle) }
          -> Result<void, TimeStepTooLarge>;
      { level_integrator.ComputeStableDt(level) } -> Duration;
    };
The name AdvanceLevelNonRecursively emphasizes that this function will be called within a recursive subcycle algorithm.
We call level integrators which provide an integrator context HyperbolicSystemLevelIntegrator. An integrator context extends level integrators with functions such as ApplyFluxCorrection, CoarsenConservatively or CompleteFromCons. These additional functions are used in a subcycle algorithm to maintain the conservation of the solution across refinement levels.
template <typename HS>
concept HyperbolicSystemLevelIntegrator =
    LevelIntegrator<HS> &&
    requires(const HS& level_integrator) {
      // Access underlying data storage implementation
      { level_integrator.GetIntegratorContext() } -> IntegratorContext;
    };
This library currently includes the following level integrators:
DimensionalSplitLevelIntegrator (satisfies HyperbolicSystemLevelIntegrator)
SystemSourceLevelIntegrator (satisfies HyperbolicSystemLevelIntegrator)
BK19LevelIntegrator (satisfies HyperbolicSystemLevelIntegrator)
KineticSourceTerm (satisfies LevelIntegrator)
AxialSourceTerm (satisfies LevelIntegrator)
IgniteDetonation (satisfies LevelIntegrator)
SystemSourceLevelIntegrator and BK19LevelIntegrator are so-called composed level integrators. This means, in order to build one of those level integrators, you need to provide at least one HyperbolicSystemLevelIntegrator. The only HyperbolicSystemLevelIntegrator currently defined in the library is the DimensionalSplitLevelIntegrator, which applies dimensional operator splitting on top of a DimensionalSplitIntegratorContext. A DimensionalSplitLevelIntegrator is constructible by providing a DimensionalSplitIntegratorContext and a SplittingMethod, such as GodunovSplitting or StrangSplitting, as in the sketch below.
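A minimal construction sketch, reusing the integrator context from the driver example above; the exact constructor signature may differ in detail:

fub::DimensionalSplitLevelIntegrator level_integrator(
    std::move(context), fub::StrangSplitting{});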
A single dimensionally split time step includes the following steps
// Apply boundary condition for the physical boundary only.
Context::ApplyBoundaryCondition(this_level, dir);

// Check the stable time step size and if the CFL condition is violated then
// restart the coarse time step
const Duration local_dt = Context::ComputeStableDt(this_level, dir);
MPI_Comm comm = GetMpiCommunicator();
const Duration level_dt = MinAll(comm, local_dt);
if (level_dt < split_dt) {
  return TimeStepTooLarge{level_dt, this_level};
}

// Compute fluxes in the specified direction
Context::ComputeNumericFluxes(this_level, split_dt, dir);

// Use the updated fluxes to update cons variables at the "SCRATCH" context.
Context::UpdateConservatively(this_level, split_dt, dir);

// A time update happened on conservative variables.
// We have to re-establish a valid complete state.
Context::CompleteFromCons(this_level, split_dt);

// Accumulate the fluxes on the coarse fine interface
const double scale = split_dt.count();
Context::AccumulateCoarseFineFluxes(this_level, scale, dir);
2.7. Algorithmic Choice: SubcycleSolver
Finally, subcycle solvers define in which order level integrators are applied on the refinement levels of a hierarchy. They define the functions ComputeStableDt and AdvanceLevel.
template <typename S>
concept SubcycleSolver =
    std::copy_assignable<S> &&
    requires(const S& solver, int level, Duration dt) {
      { solver.ComputeStableDt() } -> Duration;
      { solver.AdvanceLevel(level, dt) } -> Result<void, TimeStepTooLarge>;
    };
The AdvanceLevel function advances all levels finer than or equal to the specified level. Most implementations define this function recursively.
template <typename LevelIntegrator>
Result<void, TimeStepTooLarge>
SubcycleFineFirstSolver<LevelIntegrator>::AdvanceLevel(
    int this_level, Duration dt, std::pair<int, int> subcycle) {
  // PreAdvanceLevel might regrid all finer levels.
  // The context must make sure that scratch data is allocated
  PreAdvanceLevel(this_level, dt, subcycle);

  auto scale_dt_on_error = [this](Result<void, TimeStepTooLarge> result) {
    TimeStepTooLarge error = result.error();
    int ratio = GetTotalRefineRatio(error.level);
    error.level = 0;
    error.dt *= ratio;
    return error;
  };

  // If a finer level exists in the hierarchy, we subcycle that finer level
  // multiple times and use the fine fluxes on coarse-fine interfaces
  const int next_level = this_level + 1;
  if (LevelExists(next_level)) {
    ResetCoarseFineFluxes(next_level, this_level);
    const int refine_ratio = GetRatioToCoarserLevel(next_level).max();
    for (int r = 0; r < refine_ratio; ++r) {
      auto result =
          AdvanceLevel(next_level, dt / refine_ratio, {r, refine_ratio});
      if (!result) {
        return scale_dt_on_error(result);
      }
    }
  }

  Result<void, TimeStepTooLarge> result =
      AdvanceLevelNonRecursively(this_level, dt, subcycle);
  if (!result) {
    return scale_dt_on_error(result);
  }
  if (LevelExists(next_level)) {
    ApplyFluxCorrection(next_level, this_level, dt);
  }

  // Coarsen inner regions from next finer level to this level.
  if (LevelExists(next_level)) {
    CoarsenConservatively(next_level, this_level);

    // The conservative update and the coarsening happened on conservative
    // variables. We have to reconstruct the missing variables in the complete
    // state.
    CompleteFromCons(this_level, dt);
  }

  CopyScratchToData(this_level);

  // Apply any further context related work after advancing this level.
  // This function can also indicate if some error occured.
  // For example the context could detect unphysical states and return a
  // TooLargeTimeStep error condition.
  result = PostAdvanceLevel(this_level, dt, subcycle);
  if (!result) {
    return scale_dt_on_error(result);
  }
  return result;
}
The present subcycle solvers depend on a level integrator only. Future solvers may need additional options to describe varying subcycle ratios between pairs of refinement levels.
3. Framework for defining FluxMethods
The FluxMethod<IntegratorContext> policy is very generic and allows all kinds of possible logic in user-defined flux methods. We provide adapter classes to separate the technical aspects of computing those numerical fluxes from the mathematical ones. To achieve this separation we begin by introducing the notion of an equation. An equation gives context to the components of a multi-dimensional array and provides the mathematical dependencies needed. Using those tools we allow a formulation \( F(q(x, t), dx, dt) \) of flux methods which depends on a stencil of equation states only.
3.1. Equation: Conservative and Complete States
The Finite Volume Solver defines so-called equation classes, for example Advection2d or PerfectGas<Rank>. Each equation defines two kinds of states: Conservative and Complete states. The conservative states contain variables which see a hyperbolic time update by a time integrator. The complete state variables are a superset of the conservative ones and may define additional auxiliary member variables. This is reasonable for variables which are needed very often but have a high computation cost. The complete state is the place to cache those expensive computations.
For example, the equation template <int Rank> class PerfectGas defines the conservative variables

template <int Rank>
struct PerfectGasConservative {
  double density;
  Array<double, Rank> momentum;
  double energy;
};
and the complete state variables are
template <int Rank>
struct PerfectGasComplete : PerfectGasConservative<Rank> {
  double pressure;
  double speed_of_sound;
};
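To illustrate why caching pays off: pressure and speed of sound follow from the conservative variables via the standard perfect-gas relations. The following sketch uses these textbook formulas and is not necessarily the library's exact code:

#include <cmath> // for std::sqrt

// Standard perfect-gas relations; gamma is the heat capacity ratio.
template <int Rank>
double Pressure(const PerfectGasConservative<Rank>& q, double gamma) {
  double kinetic_energy = 0.0;
  for (int d = 0; d < Rank; ++d) {
    kinetic_energy += q.momentum[d] * q.momentum[d];
  }
  kinetic_energy *= 0.5 / q.density;
  return (gamma - 1.0) * (q.energy - kinetic_energy);
}

template <int Rank>
double SpeedOfSound(const PerfectGasConservative<Rank>& q, double gamma) {
  return std::sqrt(gamma * Pressure(q, gamma) / q.density);
}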
Given a MultiFab object we can iterate over all local FArrayBox objects using the MFIter iterator, as in this simplified example:
template <typename Function>
void DoSomethingForEachFArrayBox(amrex::MultiFab& multifab) {
  for (amrex::MFIter mfi(multifab); mfi.isValid(); ++mfi) {
    amrex::FArrayBox& fab = multifab[mfi];
    // do something...
  }
}
amrex::FArrayBox is a container type in [AMReX] that stores a multi-dimensional data array for a single patch. By using an equation object we can view this amrex::FArrayBox as a set of multi-dimensional arrays, one for each variable.
Example: Assume we are given the data of a patch as an FArrayBox and we want to access the variables on this patch. In order to do so we need an equation object which interprets the raw data.

struct MyClass {
  using Complete = ::fub::Complete<PerfectGas<3>>;
  using Conservative = ::fub::Conservative<PerfectGas<3>>;

  void foo(::amrex::FArrayBox& raw_data) {
    // Transform the raw data object to a View object.
    auto mutable_states = MakeView<Complete>(raw_data, equation);
    auto const_states = MakeView<const Complete>(raw_data, equation);
    auto conservatives = MakeView<Conservative>(raw_data, equation);

    // Iterate through all global grid indices
    ForEachIndex(Box<0>(const_states), [=](int i, int j, int k) {
      // Access the variables
      mutable_states.density(i, j, k) = 1.0;
      conservatives.momentum(i, j, k, 0) = 1.0 + const_states.density(i, j, k);
      conservatives.momentum(i, j, k, 1) = 0.0;
      conservatives.momentum(i, j, k, 2) = 0.0;

      // The objects view the same memory.
      assert(const_states.momentum(i, j, k, 0) ==
             conservatives.momentum(i, j, k, 0));
      assert(const_states.momentum(i, j, k, 1) ==
             conservatives.momentum(i, j, k, 1));
      assert(const_states.momentum(i, j, k, 2) ==
             conservatives.momentum(i, j, k, 2));
    });
  }

  PerfectGas<3> equation;
};
Once we have a View object that gives variable-wise access to multi-dimensional data, we can load and store state objects, such as Complete<Equation> and Conservative<Equation>, from and to memory locations indicated by global space indices (i, j, k).
Example: We can lift a function which is defined on a single state, such as Complete<Equation>, to a function which is defined on a multi-dimensional array View<Complete<Equation>> by applying its logic for each index.

struct MyClass {
  using Complete = ::fub::Complete<PerfectGas<3>>;
  using Conservative = ::fub::Conservative<PerfectGas<3>>;

  void Foo(Complete& state) {
    state.density = 1.0;
    state.momentum[0] = 1.0 + state.pressure;
    state.momentum[1] = 0.0;
    state.momentum[2] = 0.0;
  }

  void LiftedFoo(const View<Complete>& view) {
    ForEachIndex(Box<0>(view), [=](int i, int j, int k) {
      Load(state, view, {i, j, k});
      Foo(state);
      Store(view, state, {i, j, k});
    });
  }

  PerfectGas<3> equation;
  Complete state{equation};
};
The member function LiftedFoo in this example can have multiple equivalent implementations that differ in their efficiency. Its real implementation is often considered an implementation detail and might change.
3.2. Implement a new FluxMethod, the simple case
A class FM satisfies the concept FluxMethod<Equation, StencilSize> if the following constraints are satisfied.
template <typename FM, typename Equation, int StencilSize>
concept FluxMethod =
    requires(FM& flux_method, Conservative<Equation>& numeric_flux,
             span<const Complete<Equation>, StencilSize> stencil, double dx,
             Duration dt, Direction dir) {
      { flux_method.ComputeNumericFlux(numeric_flux, stencil, dx, dt, dir) };
      { flux_method.ComputeStableDt(stencil, dx, dir) } -> Duration;
      { FM::GetStencilSize() } -> std::integral_constant<int, StencilSize>;
    };
Example: The class MusclHancockMethod satisfies this concept.

struct MusclHancockMethod {
  using Conservative = ::Conservative<PerfectGas<2>>;
  using Complete = ::Complete<PerfectGas<2>>;

  void ComputeNumericFlux(Conservative& cons, span<const Complete, 4> stencil,
                          double dx, Duration dt, Direction dir);

  Duration ComputeStableDt(span<const Complete, 4> stencil, double dx,
                           Direction dir);

  // The stencil size 4 matches the 4-point stencil in the signatures above,
  // so this class satisfies FluxMethod<PerfectGas<2>, 4>.
  static constexpr std::integral_constant<int, 4> GetStencilSize() noexcept {
    return {};
  }
};
The implemented MUSCL-type flux methods for an equation Equation satisfy FluxMethod<Equation, 4> and are implemented in two steps. First, we reconstruct a state to the left and to the right of the face in question. This reduces the stencil array from Complete states[4] to Complete reconstructed_states[2]. Secondly, we call a (lower-order) base method BaseFM which satisfies FluxMethod<Equation, 2>, as sketched below.
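The following structural sketch (not the library's exact code) illustrates this two-step composition; Reconstruct is a hypothetical helper performing the MUSCL-Hancock reconstruction on the 4-point stencil:

template <typename BaseFM> // BaseFM satisfies FluxMethod<PerfectGas<2>, 2>
struct MusclSketch {
  using Conservative = ::Conservative<PerfectGas<2>>;
  using Complete = ::Complete<PerfectGas<2>>;

  BaseFM base_method;

  void ComputeNumericFlux(Conservative& flux, span<const Complete, 4> stencil,
                          double dx, Duration dt, Direction dir) {
    // Step 1: reduce the 4-point stencil to reconstructed left/right states.
    std::array<Complete, 2> reconstructed =
        Reconstruct(stencil, dx, dt, dir); // hypothetical helper
    // Step 2: evaluate the lower-order base method on the 2-point stencil.
    base_method.ComputeNumericFlux(
        flux, span<const Complete, 2>(reconstructed.data(), 2), dx, dt, dir);
  }

  static constexpr std::integral_constant<int, 4> GetStencilSize() noexcept {
    return {};
  }
};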
4. Using Docker to share and persist a simulation
4.1. How to run an existing simulation
We describe in the following how to use docker to run the Divider2D application.
Install docker via https://www.docker.com/
Open a terminal, or PowerShell on Windows
Download the docker image that contains the Divider2D example via
$ docker pull maikelnadolski/divider2d
Run the fish command as the amrex user and mount your $HOME directory on the host to /home/amrex/volume within the container
$ docker run -it -v $HOME:/home/amrex/volume --user amrex -w /home/amrex maikelnadolski/divider2d fish
Welcome to fish, the friendly interactive shell
Type help for instructions on how to use fish
⋊> ~ ls -l                                                           07:09:35
total 9132
-rwxr-xr-x  1 amrex  amrex  9120504 Jan  8 21:22 AMReX.EB.Divider2D*
-rw-r--r--  1 amrex  amrex     3408 Jan  8 21:29 Divider2D.py
-rw-r--r--  1 amrex  amrex     2969 Jan  8 21:51 VisualizeHdf5.py
-rw-r--r--  1 amrex  amrex     1433 Jan  8 21:33 VisualizePlotfiles.py
drwxr-xr-x  1 amrex  amrex     4096 Jan  8 20:18 amrex/
drwxr-xr-x 64 maikel maikel   45056 Jan 11 06:26 volume/
-rw-r--r--  1 amrex  amrex    27081 Jan  8 18:33 wall_1.txt
-rw-r--r--  1 amrex  amrex    54000 Jan  8 18:33 wall_2.txt
-rw-r--r--  1 amrex  amrex    81078 Jan  8 18:33 wall_3.txt
⋊> ~
On Windows, you need to provide the host path directly, as in the following command
$ docker run -it -v C:/Users/Studi:/home/amrex/volume --user amrex -w /home/amrex maikelnadolski/divider2d fish
If you run into problems regarding the shared memory capacity of your docker container you can try the option --shm-size=512m to increase it. For example:
$ docker run -it -v $HOME:/home/amrex/volume --shm-size=512m --user amrex -w /home/amrex maikelnadolski/divider2d fish
Test the simulation by running the example with the provided input file. The following command will execute the application (AMReX.EB.Divider2D) with the given default input file (Divider2D.py). The default output will be located within your running docker container. Its output should look something like the following snippet:
⋊> ~ OMP_NUM_THREADS=1 mpiexec AMReX.EB.Divider2D --config Divider2D.py   07:09:39
MPI initialized with 4 MPI processes
MPI initialized with thread support level 0
OMP initialized with 1 OMP threads
AMReX (20.07-dirty) initialized
2021-Jan-11 07:11:41.057392 -- [info] Equation:
2021-Jan-11 07:11:41.057507 -- [info]   - gamma = 1.4
2021-Jan-11 07:11:41.058302 -- [info]   - Rspec = 287.058
2021-Jan-11 07:11:41.058344 -- [info] GridGeometry:
2021-Jan-11 07:11:41.058368 -- [info]   - cell_dimensions = {800, 344}
2021-Jan-11 07:11:41.058395 -- [info]   - coordinates:lower = {0, -0.045}
2021-Jan-11 07:11:41.058420 -- [info]   - coordinates:upper = {0.23, 0.055}
2021-Jan-11 07:11:41.058443 -- [info]   - periodicity = {0, 0}
2021-Jan-11 07:11:41.058475 -- [info] PatchHierarchy:
2021-Jan-11 07:11:41.058498 -- [info]   - max_number_of_levels = 1
2021-Jan-11 07:11:41.058521 -- [info]   - refine_ratio = {2, 2}
2021-Jan-11 07:11:41.058544 -- [info]   - blocking_factor = {8, 8}
2021-Jan-11 07:11:41.058566 -- [info]   - max_grid_size = {800, 344}
2021-Jan-11 07:11:41.058588 -- [info]   - n_err_buf = {0, 0}
2021-Jan-11 07:11:41.058610 -- [info]   - grid_efficiency = 0.7
2021-Jan-11 07:11:41.058633 -- [info]   - n_proper = 1
2021-Jan-11 07:11:41.058653 -- [info]   - ngrow_eb_level_set = 9
2021-Jan-11 07:11:41.058674 -- [info]   - cutcell_load_balance_weight = 6
2021-Jan-11 07:11:41.058704 -- [info]   - remove_covered_grids = false
2021-Jan-11 07:11:41.058726 -- [info] Read Wall Data...
2021-Jan-11 07:11:41.065565 -- [info] Compute EB level set data...
2021-Jan-11 07:11:50.140497 -- [info] Post-Shock-State:
2021-Jan-11 07:11:50.140497 -- [info]   density: 1.22 [kg/m^3]
2021-Jan-11 07:11:50.140497 -- [info]   velocity: 0 0 [m/s]
2021-Jan-11 07:11:50.140497 -- [info]   pressure: 101325 [Pa]
2021-Jan-11 07:11:50.140804 -- [info] Calculated Pre-Shock-State:
2021-Jan-11 07:11:50.140804 -- [info]   density: 1.42628 [kg/m^3]
2021-Jan-11 07:11:50.140804 -- [info]   velocity: 54.2485 0 [m/s]
2021-Jan-11 07:11:50.140804 -- [info]   pressure: 126150 [Pa]
2021-Jan-11 07:11:50.172436 -- [info] Reconstruction: Characteristics
2021-Jan-11 07:11:50.172533 -- [info] scratch_gcw = 2
2021-Jan-11 07:11:50.172568 -- [info] flux_gcw = 0
2021-Jan-11 07:11:50.184254 -- [Plotfile] [info] [T = 0s] Write Plotfile output to 'Divider_DE5_800/Plotfiles/plt000000000'.
2021-Jan-11 07:11:50.247063 -- [HDF5] [info] [T = 0s] Write Hdf5 output to 'Divider_c24_800.h5'.
Table of Counters
-----------------------------------------------------------------------------------------------------------------------
                                           Counter Name  #Calls  Min [ms]  Max [ms]  Avg [ms]  StdDev [ms]  Percent [%]
        cutcell::GriddingAlgorithm::InitializeHierarchy       1        31       587       329          211          3.5
    cutcell::GriddingAlgorithm::MakeNewLevelFromScratch       1        31        33        32            1         0.34
cutcell::IntegratorContext::ResetHierarchyConfiguration       1         9        12        11            1         0.12
             cutcell::GriddingAlgorithm::LoadBalance          1         4         5         5            0        0.056
          cutcell::IntegratorContext::CopyDataToScratch       1         1         4         3            1        0.033
      cutcell::GriddingAlgorithm::PostProcessBaseGrids        1         0         0         0            0        2e-06
=======================================================================================================================
2021-Jan-11 07:11:50.375671 -- [info] RunOptions:
2021-Jan-11 07:11:50.375965 -- [info]   - final_time = 0.001 [s]
2021-Jan-11 07:11:50.376228 -- [info]   - max_cycles = -1 [-]
2021-Jan-11 07:11:50.376509 -- [info]   - smallest_time_step_size = 1e-12 [s]
2021-Jan-11 07:11:50.376932 -- [info]   - cfl = 0.8 [-]
2021-Jan-11 07:11:50.377185 -- [info]   - do_backup = 0 [-]
...
Attention: whenever you close the docker container all changes within the container will be lost. If you want to persist some changes, for example an input file, you need to commit the changes from a second terminal window, as in
◰² base ~ docker container list
CONTAINER ID   IMAGE                      COMMAND   CREATED              STATUS              PORTS   NAMES
514cbfaf19cb   maikelnadolski/divider2d   "fish"    2 minutes ago        Up 2 minutes                sleepy_chaplygin
◰² base ~ docker stop 514cbfaf19cb
514cbfaf19cb
◰² base ~ docker container list
CONTAINER ID   IMAGE                      COMMAND   CREATED              STATUS              PORTS   NAMES
53c5439c4f36   maikelnadolski/divider2d   "fish"    About a minute ago   Up About a minute           upbeat_varahamihira
◰² base ~ docker commit 53c5439c4f36 mo/divider2d
sha256:1015bdde15f48ab76b00380a91095fc74cff76465da21480689756d440d4e20c
◰² base ~ docker image ls
REPOSITORY                 TAG      IMAGE ID       CREATED         SIZE
mo/divider2d               latest   1015bdde15f4   9 seconds ago   1.24GB
maikelnadolski/divider2d   latest   e2e6be51eb4f   2 weeks ago     1.24GB
Visualize the HDF5 output by executing VisualizeHdf5.py, and the AMReX Plotfiles by executing VisualizePlotfiles.py:
⋊> ~ python3 VisualizeHdf5.py   07:14:09
Save figure to /home/amrex/Figures/Hdf5/Figure0000.png.
Save figure to /home/amrex/Figures/Hdf5/Figure0001.png.
Save figure to /home/amrex/Figures/Hdf5/Figure0002.png.
Save figure to /home/amrex/Figures/Hdf5/Figure0003.png.
...
⋊> ~ python3 VisualizePlotfiles.py   07:14:18
/usr/lib/python3.9/_collections_abc.py:684: MatplotlibDeprecationWarning: The global colormaps dictionary is no longer considered public API.
  self[key]
yt : [INFO ] 2021-01-11 07:14:29,846 Parameters: current_time = 0.0
yt : [INFO ] 2021-01-11 07:14:29,846 Parameters: domain_dimensions = [800 344 1]
yt : [INFO ] 2021-01-11 07:14:29,846 Parameters: domain_left_edge = [ 0.    -0.045  0.   ]
yt : [INFO ] 2021-01-11 07:14:29,847 Parameters: domain_right_edge = [0.23  0.055 1.   ]
yt : [INFO ] 2021-01-11 07:14:29,994 xlim = 0.000000 0.230000
yt : [INFO ] 2021-01-11 07:14:29,994 ylim = -0.045000 0.055000
yt : [INFO ] 2021-01-11 07:14:29,995 xlim = 0.000000 0.230000
yt : [INFO ] 2021-01-11 07:14:29,995 ylim = -0.045000 0.055000
yt : [INFO ] 2021-01-11 07:14:29,995 Making a fixed resolution buffer of (('boxlib', 'Density')) 800 by 800
yt : [INFO ] 2021-01-11 07:14:30,083 Making a fixed resolution buffer of (('boxlib', 'Pressure')) 800 by 800
yt : [INFO ] 2021-01-11 07:14:30,102 Making a fixed resolution buffer of (('boxlib', 'vfrac')) 800 by 800
yt : [INFO ] 2021-01-11 07:14:30,676 Making a fixed resolution buffer of (('boxlib', 'Density')) 800 by 344
yt : [INFO ] 2021-01-11 07:14:30,691 Making a fixed resolution buffer of (('boxlib', 'Pressure')) 800 by 344
yt : [INFO ] 2021-01-11 07:14:30,706 Making a fixed resolution buffer of (('boxlib', 'vfrac')) 800 by 344
yt : [INFO ] 2021-01-11 07:14:30,720 Making a fixed resolution buffer of (Pressure) 800 by 344
yt : [INFO ] 2021-01-11 07:14:30,736 Making a fixed resolution buffer of (Density) 800 by 344
yt : [INFO ] 2021-01-11 07:14:30,752 Making a fixed resolution buffer of (vfrac) 800 by 344
/home/amrex/Divider_DE5_800//Plotfiles/plt000000000: 0.0, [ 0.     0.23  -0.045  0.055]
Save figure to /home/amrex/Figures/Plotfiles/Figure0000.png.
yt : [INFO ] 2021-01-11 07:14:31,640 Parameters: current_time = 0.000100000001
yt : [INFO ] 2021-01-11 07:14:31,640 Parameters: domain_dimensions = [800 344 1]
yt : [INFO ] 2021-01-11 07:14:31,640 Parameters: domain_left_edge = [ 0.    -0.045  0.   ]
yt : [INFO ] 2021-01-11 07:14:31,641 Parameters: domain_right_edge = [0.23  0.055 1.   ]
...
/home/amrex/Divider_DE5_800//Plotfiles/plt000000182: 0.000100000001, [ 0.     0.23  -0.045  0.055]
Save figure to /home/amrex/Figures/Plotfiles/Figure0001.png.
...
Copy the figures to a location on your host computer. You need a directory that is writable for others. On Unix-like machines you can do the following on your host side:

$ mkdir $HOME/docker-output
$ chmod a+w $HOME/docker-output
Then copy from within your docker container
⋊> ~ cp -r Figures/ volume/docker-output
Adjust the provided input file Divider2D.py to your needs:

⋊> ~ cp Divider2D.py volume/docker-output
# Edit the config file
⋊> ~ OMP_NUM_THREADS=1 mpiexec AMReX.EB.Divider2D --config volume/docker-output/Divider2D.py
For reference, the provided config file is
import math

############################################################################
# These variables are only auxiliary and are not directly used by the app #

def nCellsY(nCellsX, ratio, blocking_factor=8):
    base = int(nCellsX * ratio)
    ncells = base - base % blocking_factor
    return ncells

xlower = 0.0
xupper = 0.23
xlen = xupper - xlower
ylower = -0.05 + 0.005
yupper = +0.05 + 0.005
ylen = yupper - ylower

blocking_factor = 8
nx = 800
ny = nCellsY(nx, ylen / xlen, blocking_factor=blocking_factor)

n_level = 1
n_error_buf = 0 if n_level == 1 else 1

# The output files are written in this directory from the perspective of this
# docker container
base_path = '/home/amrex'

############################################################################
# These variables and dictionaries will be read by the application #

# Equation = {
#   'gamma': 1.4,
#   'Rspec': 287
# }

RunOptions = {
    'cfl': 0.8,
    'final_time': 1e-3,
    'do_backup': False,
    'max_cycles': -1,  # -1 means infinite and 0 means only initial condition
}

# This option defines the Mach number of the shock
# Default is 1.1
Mach_number = 1.1

reconstruction = "Characteristics"

GridGeometry = {
    # This option tells the solver how many cells to use
    'cell_dimensions': [nx, ny, 1],
    # This option defines the coordinates of the computational domain
    'coordinates': {
        'lower': [xlower, ylower, +0.00],
        'upper': [xupper, yupper, +0.10],
    },
    # We have no periodicity in Divider
    'periodicity': [0, 0, 0]
}

# defines the number of ghost cells
scratch_gcw = 2 if n_level == 1 else 4

# defines the number of ghost faces
flux_gcw = scratch_gcw - 2

PatchHierarchy = {
    # Defines the number of refinement levels
    # 1 means no adaptive refinement
    'max_number_of_levels': n_level,
    # Set this to 1 if max_number_of_levels > 1
    'n_error_buf': [n_error_buf, n_error_buf, n_error_buf],
    # Defines the maximal size of a single patch
    # patches can be smaller though
    'max_grid_size': [nx, ny, 1],
    # Do not change this, it defines how many ghost cells are needed for the
    # cut-cell geometry
    'ngrow_eb_level_set': max(scratch_gcw + 1, 9),
    # If a patch is completely covered by cut-cells remove it from the grid
    'remove_covered_grids': False,
    # Every patch size is a multiple of blocking_factor (in each direction)
    # i.e. blocking_factor = 8 means patches have the size of 8 or 16 or 24 or ...
    'blocking_factor': [blocking_factor, blocking_factor, blocking_factor],
    # How many cells are needed inbetween AMR levels
    'n_proper': 1,
}

# Adjust those paths to read the files that describe the wall boundaries for the Divider.
# The first and the last point within each wall file need to be equal
wall_filenames = ['{}/wall_1.txt'.format(base_path),
                  '{}/wall_2.txt'.format(base_path),
                  '{}/wall_3.txt'.format(base_path)]

# Defines the output files and intervals
Output = {
    'outputs': [
        # Write AMReX-output directories. This output preserves the AMR hierarchy
        # It is readable from Paraview, VisIt and Python
        {'type': 'Plotfiles', 'directory': 'Divider_DE5_{}/Plotfiles'.format(nx),
         'intervals': [1e-4]},
        # Write simple HDF5 files
        # It is faster and writes a single grid without patches
        {'type': 'HDF5', 'path': 'Divider_c24_{}.h5'.format(nx),
         'intervals': [1e-5]},
        # Print out timer statistics
        {'type': 'CounterOutput', 'intervals': [1e-4]},
    ]
}
The usual workflow is to
Copy the provided input file to your HOST system and edit it
Run the application from within the docker container using the input file on your HOST machine instead of the provided one.
⋊> ~ OMP_NUM_THREADS=1 mpiexec AMReX.EB.Divider2D --config volume/docker-output/Divider2D.py
Reiterate and change more settings
Copy publication relevant input files and auxiliary files into the container. Then commit the container to a named image in order to persist it.
If you wish to write the output files directly into the output folder on your host, replace the Output section of Divider2D.py as in the following snippet (saved, for example, as Divider2D_output_on_windows.py). The HDF5 output will then be written directly into the "docker_output" folder on Windows.

# Defines the output files and intervals
Output = {
    'outputs': [
        # Write AMReX-output directories. This output preserves the AMR hierarchy
        # It is readable from Paraview, VisIt and Python
        # {'type': 'Plotfiles', 'directory': 'Divider_DE5_{}/Plotfiles'.format(nx), 'intervals': [1e-4]},
        # Write simple HDF5 files
        # It is faster and writes a single grid without patches
        {'type': 'HDF5', 'path': 'volume/docker_output/Divider_c24_{}.h5'.format(nx),
         'intervals': [1e-5]},
        # Print out timer statistics
        {'type': 'CounterOutput', 'intervals': [1e-4]},
    ]
}

And then run

OMP_NUM_THREADS=1 mpiexec AMReX.EB.Divider2D --config volume/docker_output/Divider2D_output_on_windows.py