Factory module¶
The maia.factory module can be used to generate parallel CGNS trees. These can
be obtained from another kind of tree, or generated from user parameters.
Generation¶
Generation functions create a distributed tree from some parameters.
- generate_dist_points(n_vtx, zone_type, comm, origin=array([0., 0., 0.]), max_coords=array([1., 1., 1.]))¶
Generate a distributed mesh including only cartesian points.
Returns a distributed CGNSTree containing a single CGNSBase_t and Zone_t. The kind of the zone is controlled by the zone_type parameter:
- "Structured" (or "S") produces a structured zone,
- "Unstructured" (or "U") produces an unstructured zone.
In all cases, the created zone contains only the cartesian grid coordinates; no connectivities are created. The physical dimension of the output is set equal to the length of the origin parameter.
- Parameters
n_vtx (int or array of int) – Number of vertices in each direction. A scalar is automatically extended to a uniform array.
zone_type (str) – Requested kind of point cloud
comm (MPIComm) – MPI communicator
origin (array, optional) – Coordinates of the origin of the generated mesh. Defaults to zero vector.
max_coords (array, optional) – Coordinates of the higher point of the generated mesh. Defaults to ones vector.
- Returns
CGNSTree – distributed cgns tree
Example
from mpi4py import MPI
import maia
import maia.pytree as PT

dist_tree = maia.factory.generate_dist_points(10, 'Unstructured', MPI.COMM_WORLD)

zone = PT.get_node_from_label(dist_tree, 'Zone_t')
assert PT.Zone.n_vtx(zone) == 10**3
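Since the physical dimension follows the length of origin, a 2d point cloud can be requested with two-component arrays. A minimal sketch, assuming n_vtx, origin and max_coords may all be given two components:
import numpy as np
from mpi4py import MPI
import maia
import maia.pytree as PT

# Two-component origin/max_coords should produce a mesh of physical dimension 2
dist_tree = maia.factory.generate_dist_points([10, 20], 'Unstructured', MPI.COMM_WORLD,
                                              origin=np.zeros(2), max_coords=np.ones(2))
zone = PT.get_node_from_label(dist_tree, 'Zone_t')
assert PT.Zone.n_vtx(zone) == 10 * 20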
- generate_dist_block(n_vtx, cgns_elmt_name, comm, origin=array([0., 0., 0.]), edge_length=1.0)¶
Generate a distributed mesh with a cartesian topology.
Returns a distributed CGNSTree containing a single CGNSBase_t and Zone_t. The kind and cell dimension of the zone are controlled by the cgns_elmt_name parameter:
- "Structured" (or "S") produces a structured zone,
- "Poly" produces an unstructured 3d zone with a NGon+PE connectivity,
- "NFACE_n" produces an unstructured 3d zone with a NFace+NGon connectivity,
- "NGON_n" produces an unstructured 2d zone with faces described by a NGon node (not yet implemented),
- Other names must be in ["TRI_3", "QUAD_4", "TETRA_4", "PYRA_5", "PENTA_6", "HEXA_8"] and produce an unstructured 2d or 3d zone with the corresponding standard elements.
In all cases, the created zone contains the cartesian grid coordinates and the relevant number of boundary conditions.
When creating 2 dimensional zones, the physical dimension is set equal to the length of the origin parameter.
- Parameters
n_vtx (int or array of int) – Number of vertices in each direction. A scalar is automatically extended to a uniform array.
cgns_elmt_name (str) – Requested kind of elements
comm (MPIComm) – MPI communicator
origin (array, optional) – Coordinates of the origin of the generated mesh. Defaults to zero vector.
edge_length (float, optional) – Edge size of the generated mesh. Defaults to 1.
- Returns
CGNSTree – distributed cgns tree
Example
from mpi4py import MPI
import maia
import maia.pytree as PT

dist_tree = maia.factory.generate_dist_block([10,20,10], 'Structured', MPI.COMM_WORLD)
zone = PT.get_node_from_label(dist_tree, 'Zone_t')
assert PT.Zone.Type(zone) == "Structured"

dist_tree = maia.factory.generate_dist_block(10, 'Poly', MPI.COMM_WORLD)
zone = PT.get_node_from_label(dist_tree, 'Zone_t')
assert PT.Element.CGNSName(PT.get_child_from_label(zone, 'Elements_t')) == 'NGON_n'

dist_tree = maia.factory.generate_dist_block(10, 'TETRA_4', MPI.COMM_WORLD)
zone = PT.get_node_from_label(dist_tree, 'Zone_t')
assert PT.Element.CGNSName(PT.get_child_from_label(zone, 'Elements_t')) == 'TETRA_4'
- generate_dist_sphere(m, cgns_elmt_name, comm, origin=array([0., 0., 0.]), radius=1.0)¶
Generate a distributed mesh with a spherical topology.
Returns a distributed CGNSTree containing a single CGNSBase_t and Zone_t. The kind and cell dimension of the zone are controlled by the cgns_elmt_name parameter:
- "NFACE_n" produces an unstructured 3d zone with a NFace+NGon connectivity,
- "NGON_n" produces an unstructured 2d zone with a NGon+Bar connectivity,
- Other names must be in ["TRI_3", "TETRA_4"] and produce an unstructured 2d or 3d zone with the corresponding standard elements.
In all cases, the created zone contains the grid coordinates and the relevant number of boundary conditions.
Spherical meshes are class I geodesic polyhedra (icosahedral). The number of vertices on the external surface is equal to \(10m^2+2\).
- Parameters
m (int) – Strictly positive integer controlling the number of vertices (see above)
cgns_elmt_name (str) – Requested kind of elements
comm (MPIComm) – MPI communicator
origin (array, optional) – Coordinates of the origin of the generated mesh. Defaults to zero vector.
radius (float, optional) – Radius of the generated sphere. Defaults to 1.
- Returns
CGNSTree – distributed cgns tree
Example
from mpi4py import MPI
import maia
import maia.pytree as PT

dist_tree = maia.factory.generate_dist_sphere(10, 'TRI_3', MPI.COMM_WORLD)
assert PT.Element.CGNSName(PT.get_node_from_label(dist_tree, 'Elements_t')) == 'TRI_3'
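Because a 'TRI_3' sphere is a surface-only (2d) zone, every vertex lies on the external surface, so the \(10m^2+2\) count above can be checked directly. A minimal sketch (m = 6 is an arbitrary choice):
from mpi4py import MPI
import maia
import maia.pytree as PT

m = 6  # refinement parameter: the surface carries 10*m**2 + 2 vertices
dist_tree = maia.factory.generate_dist_sphere(m, 'TRI_3', MPI.COMM_WORLD)
zone = PT.get_node_from_label(dist_tree, 'Zone_t')
# All vertices of a surface mesh are on the sphere, so the count is exact
assert PT.Zone.n_vtx(zone) == 10 * m**2 + 2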
Partitioning¶
- partition_dist_tree(dist_tree, comm, **kwargs)¶
Perform the partitioning operation: create a partitioned tree from the input distributed tree.
The input tree can be structured or unstructured, but hybrid meshes are not yet supported.
Important
Geometric information (such as boundary conditions, zone subregions, etc.) is reported on the partitioned tree; however, data fields (BCDataSet, FlowSolution, etc.) are not transferred automatically. See the maia.transfer module.
See the reference documentation for the description of the keyword arguments.
- Parameters
dist_tree (CGNSTree) – Distributed tree
comm (MPIComm) – MPI communicator
**kwargs – Partitioning options
- Returns
CGNSTree – partitioned cgns tree
Example
from mpi4py import MPI
import maia

comm = MPI.COMM_WORLD
i_rank, n_rank = comm.Get_rank(), comm.Get_size()
dist_tree = maia.factory.generate_dist_block(10, 'Poly', comm)

# Basic use
part_tree = maia.factory.partition_dist_tree(dist_tree, comm)

# Crazy partitioning where each proc gets as many partitions as its rank
n_part_tot = n_rank * (n_rank + 1) // 2
part_tree = maia.factory.partition_dist_tree(dist_tree, comm,
    zone_to_parts={'Base/zone' : [1./n_part_tot for i in range(i_rank+1)]})
assert len(maia.pytree.get_all_Zone_t(part_tree)) == i_rank+1
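Building on the note above, a minimal sketch of transferring data fields after partitioning; the dist_tree_to_part_tree_all helper is assumed from the maia.transfer module (see its reference documentation):
from mpi4py import MPI
import maia

comm = MPI.COMM_WORLD
dist_tree = maia.factory.generate_dist_block(10, 'Poly', comm)
part_tree = maia.factory.partition_dist_tree(dist_tree, comm)

# Fields are not moved by partition_dist_tree; an explicit transfer call is
# needed (dist_tree_to_part_tree_all is assumed from the maia.transfer docs)
maia.transfer.dist_tree_to_part_tree_all(dist_tree, part_tree, comm)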
Partitioning options¶
Partitioning can be customized with the following keyword arguments:
- graph_part_tool¶
Method used to split unstructured blocks. Irrelevant for structured blocks.
- Admissible values
parmetis, ptscotch: graph partitioning methods; hilbert: geometric method (only for NGon connectivities); gnum: cells are dispatched according to their absolute numbering.
- Default value
parmetis if installed; else ptscotch if installed; hilbert otherwise.
- zone_to_parts¶
Control the number, size and repartition of partitions. See Repartition.
- Default value
Computed such that partitioning is balanced using
maia.factory.partitioning.compute_balanced_weights().
- part_interface_loc¶
GridLocation for the created partition interfaces. Pre-existing interfaces keep their original location.
- Admissible values
FaceCenter, Vertex
- Default value
FaceCenter for unstructured zones with NGon connectivities; Vertex otherwise.
- reordering¶
Dictionary of reordering options, which are used to renumber the entities in each partitioned zone. See the corresponding documentation.
- preserve_orientation¶
If True, the created interface faces are not reversed and keep their original orientation. Consequently, NGonElements can have a zero left parent and a non-zero right parent. Only relevant for U/NGon partitions.
- Default value
False
- dump_pdm_output¶
If True, dump the raw arrays created by ParaDiGM in a CGNS node at (partitioned) zone level. For debugging only.
- Default value
False
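As an illustration of the options above, a minimal sketch combining several of them on a generated NGon block (the chosen values are arbitrary admissible ones):
from mpi4py import MPI
import maia

comm = MPI.COMM_WORLD
dist_tree = maia.factory.generate_dist_block(10, 'Poly', comm)

# 'hilbert' is admissible here because the generated zone has a NGon connectivity
part_tree = maia.factory.partition_dist_tree(dist_tree, comm,
                                             graph_part_tool='hilbert',
                                             part_interface_loc='FaceCenter',
                                             preserve_orientation=True)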
Repartition¶
The number, size, and repartition (over the processes) of the created partitions are
controlled through the zone_to_parts keyword argument: each process must provide a
dictionary associating the path of every distributed zone to a list of floats.
For a given distributed zone, the current process will receive as many partitions
as the length of the list (a missing path or an empty list means no partitions); the size of these partitions
corresponds to the values in the list (expressed as fractions of the input zone).
For a given distributed zone, the sum of all the fractions across all the processes must
be 1.
This dictionary can be created by hand (as sketched below); for convenience, Maia also provides three functions in the
maia.factory.partitioning module to create it.
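A minimal hand-written sketch for two processes splitting the single zone 'Base/zone' of a generated block (the script assumes exactly 2 MPI processes so that the fractions sum to 1):
from mpi4py import MPI
import maia

comm = MPI.COMM_WORLD  # assumed to hold exactly 2 processes
dist_tree = maia.factory.generate_dist_block(10, 'Poly', comm)

# Rank 0 requests two quarter-size partitions, rank 1 a single half-size one:
# across the ranks the fractions sum to 0.25 + 0.25 + 0.5 = 1
if comm.Get_rank() == 0:
  zone_to_parts = {'Base/zone': [0.25, 0.25]}
else:
  zone_to_parts = {'Base/zone': [0.5]}

part_tree = maia.factory.partition_dist_tree(dist_tree, comm, zone_to_parts=zone_to_parts)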
- compute_regular_weights(tree, comm, n_part=1)¶
Compute a basic zone_to_parts repartition.
Each process requests n_part partitions on each original zone (n_part can differ between processes). The weights of all the parts produced from a given zone are homogeneous and equal to the number of cells in the zone divided by the total number of partitions requested for this zone.
- Parameters
tree (CGNSTree) – (Minimal) distributed tree: only zone names and sizes are needed
comm (MPI.Comm) – MPI Communicator
n_part (int, optional) – Number of partitions to produce on each zone by this process. Defaults to 1.
- Returns
dict – zone_to_parts dictionary expected by partition_dist_tree()
Example
from mpi4py import MPI
import maia
from maia.utils.test_utils import mesh_dir
from maia.factory import partitioning as mpart

dist_tree = maia.io.file_to_dist_tree(mesh_dir/'S_twoblocks.yaml', MPI.COMM_WORLD)

zone_to_parts = mpart.compute_regular_weights(dist_tree, MPI.COMM_WORLD)

if MPI.COMM_WORLD.Get_size() == 2:
  assert zone_to_parts == {'Base/Large': [0.5], 'Base/Small': [0.5]}
- compute_balanced_weights(tree, comm, only_uniform=False)¶
Compute a well balanced zone_to_parts repartition.
Each process requests zero or more partitions, with heterogeneous weights, on each original zone, such that:
the computational load is well balanced, i.e. the total number of cells per process is nearly equal,
the number of splits within a given zone is minimized,
produced partitions are not too small.
Note
Heterogeneous weights are not managed by ptscotch. Use parmetis as graph_part_tool for partitioning if the repartition was computed with this function, or set the optional argument only_uniform to True.
- Parameters
tree (CGNSTree) – (Minimal) distributed tree: only zone names and sizes are needed
comm (MPI.Comm) – MPI Communicator
only_uniform (bool, optional) – If true, an alternative balancing method is used in order to request homogeneous weights, but the load balance is less even. Defaults to False.
- Returns
dict – zone_to_parts dictionary expected by partition_dist_tree()
Example
from mpi4py import MPI
import maia
from maia.utils.test_utils import mesh_dir
from maia.factory import partitioning as mpart

comm = MPI.COMM_WORLD
dist_tree = maia.io.file_to_dist_tree(mesh_dir/'S_twoblocks.yaml', comm)

zone_to_parts = mpart.compute_balanced_weights(dist_tree, comm)

if comm.Get_size() == 2 and comm.Get_rank() == 0:
  assert zone_to_parts == {'Base/Large': [0.375], 'Base/Small': [1.0]}
if comm.Get_size() == 2 and comm.Get_rank() == 1:
  assert zone_to_parts == {'Base/Large': [0.625]}
- compute_nosplit_weights(tree, comm)¶
Compute a zone_to_parts repartition without splitting the blocks.
The initial blocks are simply distributed over the available processes, minimizing the total number of cells assigned to a process. This can lead to poor load balancing and possibly to processes having no partitions at all.
- Parameters
tree (CGNSTree) – (Minimal) distributed tree: only zone names and sizes are needed
comm (MPI.Comm) – MPI Communicator
- Returns
dict – zone_to_parts dictionary expected by partition_dist_tree()
Example
from mpi4py import MPI
import maia
from maia.utils.test_utils import mesh_dir
from maia.factory import partitioning as mpart

dist_tree = maia.io.file_to_dist_tree(mesh_dir/'S_twoblocks.yaml', MPI.COMM_WORLD)

zone_to_parts = mpart.compute_nosplit_weights(dist_tree, MPI.COMM_WORLD)

if MPI.COMM_WORLD.Get_size() == 2:
  assert len(zone_to_parts) == 1
Reordering options¶
For unstructured zones, the reordering options are transmitted to ParaDiGM in order to control the renumbering of mesh entities in the partitions.
| Kwarg | Admissible values | Effect | Default |
|---|---|---|---|
| cell_renum_method | “NONE”, “RANDOM”, “HILBERT”, “CUTHILL”, “CACHEBLOCKING”, “CACHEBLOCKING2”, “HPC” | Renumbering method for the cells | NONE |
| face_renum_method | “NONE”, “RANDOM”, “LEXICOGRAPHIC” | Renumbering method for the faces | NONE |
| vtx_renum_method | “NONE”, “SORT_INT_EXT” | Renumbering method for the vertices | NONE |
| n_cell_per_cache | Integer >= 0 | Specific to cacheblocking | 0 |
| n_face_per_pack | Integer >= 0 | Specific to cacheblocking | 0 |
| graph_part_tool | “parmetis”, “ptscotch”, “hyperplane” | Graph partitioning library to use for renumbering | Same as partitioning tool |
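As an illustration, a minimal sketch passing a reordering dictionary to partition_dist_tree; the chosen methods are arbitrary admissible values from the table, and unspecified keys are assumed to keep their defaults:
from mpi4py import MPI
import maia

comm = MPI.COMM_WORLD
dist_tree = maia.factory.generate_dist_block(10, 'Poly', comm)

# Renumber cells along a Hilbert curve and sort vertices interior/exterior
reordering = {'cell_renum_method': 'HILBERT',
              'vtx_renum_method' : 'SORT_INT_EXT'}
part_tree = maia.factory.partition_dist_tree(dist_tree, comm, reordering=reordering)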
Recovering from partitions¶
- recover_dist_tree(part_tree, comm)¶
Regenerate a distributed tree from a partitioned tree.
The partitioned tree should have been created using Maia, or must at least contain GlobalNumbering nodes as defined by Maia (see Partitioned trees).
The following nodes are managed: GridCoordinates, Elements, ZoneBC, ZoneGridConnectivity, FlowSolution, DiscreteData and ZoneSubRegion.
- Parameters
part_tree (CGNSTree) – Partitioned CGNS Tree
comm (MPIComm) – MPI communicator
- Returns
CGNSTree – distributed cgns tree
Example
from mpi4py import MPI
import maia

comm = MPI.COMM_WORLD
dist_tree_bck = maia.factory.generate_dist_block(5, 'TETRA_4', comm)
part_tree = maia.factory.partition_dist_tree(dist_tree_bck, comm)

dist_tree = maia.factory.recover_dist_tree(part_tree, comm)
assert maia.pytree.is_same_tree(dist_tree, dist_tree_bck)
Managing full trees¶
Almost no function in Maia supports full (i.e. not distributed) CGNS trees, but such trees can be useful for compatibility with sequential libraries.
- full_to_dist_tree(tree, comm, owner=None)¶
Generate a distributed tree from a standard (full) CGNS Tree.
The input tree can be defined on a single process (using owner = rank_id), or a copy can be known by all the processes (using owner = None). In both cases, the output distributed tree will be equilibrated over all the processes.
- Parameters
tree (CGNSTree) – Full (not distributed) tree.
comm (MPIComm) – MPI communicator
owner (int, optional) – MPI rank holding the input tree. Defaults to None.
- Returns
CGNSTree – distributed cgns tree
Example
from mpi4py import MPI
import maia
from maia.utils.test_utils import mesh_dir

comm = MPI.COMM_WORLD
if comm.Get_rank() == 0:
  tree = maia.io.read_tree(mesh_dir/'S_twoblocks.yaml', comm)
else:
  tree = None

dist_tree = maia.factory.full_to_dist_tree(tree, comm, owner=0)
- dist_to_full_tree(dist_tree, comm, target=0)¶
Generate a standard (full) CGNS Tree from a distributed tree.
The output tree can be used with sequential tools, but is no longer compatible with Maia's parallel algorithms.
- Parameters
dist_tree (CGNSTree) – Distributed CGNS tree
comm (MPIComm) – MPI communicator
target (int, optional) – MPI rank holding the output tree. Defaults to 0.
- Returns
CGNSTree – Full (not distributed) tree or None
Example
from mpi4py import MPI
import maia

comm = MPI.COMM_WORLD
dist_tree = maia.factory.generate_dist_sphere(10, 'TRI_3', comm)

full_tree = maia.factory.dist_to_full_tree(dist_tree, comm, target=0)

if comm.Get_rank() == 0:
  assert maia.pytree.get_node_from_name(full_tree, ":CGNS#Distribution") is None
else:
  assert full_tree is None