Factory module

The maia.factory module generates parallel CGNS trees. These trees can be obtained from another kind of tree, or generated from user parameters.

Generation

Generation functions create a distributed tree from some parameters.

generate_dist_points(n_vtx, zone_type, comm, origin=(0, 0, 0), max_coords=(1, 1, 1))

Generate a distributed mesh including only cartesian points.

Returns a distributed CGNSTree containing a single CGNSBase_t and Zone_t. The kind of zone is controlled by the zone_type parameter:

  • "Structured" (or "S") produces a structured zone

  • "Unstructured" (or "U") produces an unstructured zone

In all cases, the created zone contains only the cartesian grid coordinates; no connectivities are created. The physical dimension of the output is set equal to the length of the origin parameter.

Parameters
  • n_vtx (int or array of int) – Number of vertices in each direction. A scalar is automatically broadcast to a uniform array.

  • zone_type (str) – requested kind of point cloud

  • comm (MPIComm) – MPI communicator

  • origin (array, optional) – Coordinates of the origin of the generated mesh. Defaults to zero vector.

  • max_coords (array, optional) – Coordinates of the opposite (maximal) corner of the generated mesh. Defaults to ones vector.

Returns

CGNSTree – distributed cgns tree

Example

from mpi4py import MPI
import maia
import maia.pytree as PT

dist_tree = maia.factory.generate_dist_points(10, 'Unstructured', MPI.COMM_WORLD)
zone = PT.get_node_from_label(dist_tree, 'Zone_t')
assert PT.Zone.n_vtx(zone) == 10**3
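
A structured point cloud with a custom bounding box can be requested the same way; a minimal sketch using only the parameters documented above:

dist_tree = maia.factory.generate_dist_points((10, 20, 10), 'S', MPI.COMM_WORLD,
                                              origin=(0., 0., 0.), max_coords=(1., 2., 1.))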

generate_dist_block(n_vtx, cgns_elmt_name, comm, origin=(0, 0, 0), length=1.0)

Generate a distributed mesh with a block shape (line, parallelogram or parallelepiped).

This function returns a distributed CGNSTree containing a single CGNSBase_t and Zone_t. The created zone contains the grid coordinates and the relevant number of boundary conditions.

The kind and the CGNS cell dimension \(d_m\) of the zone are controlled by the cgns_elmt_name parameter:

  • "Structured" (or "S") produces a structured zone, which dimension is equal to n_vtx.size,

  • "NFACE_n" produces a hexa filled unstructured 3d zone with NFace+NGon connectivity,

  • "NGON_n" produces a quad filled unstructured 2d zone with NGon+Bar connectivity (not yet implemented),

  • Other names must be in ["BAR_2", "TRI_3", "QUAD_4", "TETRA_4", "PYRA_5", "PENTA_6", "HEXA_8"] and produces an unstructured 1d, 2d or 3d zone with corresponding standard elements.

The CGNS physical dimension \(d_\phi\) is deduced from the size of the origin parameter. Note that the physical dimension must be greater than or equal to the cell dimension.

The number of vertices in each direction is given by the n_vtx parameter, which is a tuple of size \(d_m\). If a scalar is provided, its value is broadcast to a uniform tuple.

Lastly, the geometric size and position of the zone are computed from the combination of the origin and length parameters. The former, an array of size \(d_\phi\), sets the position of the ‘first vertex’ of the zone. The length parameter can be either:

  • a scalar, which leads to a line, a square or a cube aligned with the canonical axes. Its length is then the same in each direction;

  • a tuple of size \(d_m\), which leads to a line, a rectangle or a rectangular cuboid aligned with the canonical axes. Its length is then equal to the specified value in each direction;

  • a list of \(d_m\) vectors, each one of size \(d_\phi\). In this case, the generated line, parallelogram or parallelepiped is no longer aligned with the canonical axes, but with the provided basis. Its length in each direction is equal to the norm of the corresponding basis vector.

Note that length can contain negative values.

Parameters
  • n_vtx (int or tuple of int) – Number of vertices in each direction. A scalar is automatically broadcast to a uniform tuple.

  • cgns_elmt_name (str) – requested kind of elements

  • comm (MPIComm) – MPI communicator

  • origin (array, optional) – Coordinates of the origin of the generated mesh. Defaults to zero vector.

  • length (float or tuple(s) of floats, optional) – Length for each dimension of the generated mesh (see above). Defaults to 1.

Returns

CGNSTree – distributed cgns tree

Example

from mpi4py import MPI
comm = MPI.COMM_WORLD

from maia.factory import generate_dist_block

# 3D unstructured polyhedral zone (basic)
dist_tree = generate_dist_block(11, 'Poly', comm)
# 3D structured zone, different number of vertices in each direction
dist_tree = generate_dist_block((11,21,11), 'Structured', comm)
# 3D unstructured zone, different length in each direction
dist_tree = generate_dist_block(11, 'TETRA_4', comm, length=(1.,1.,10.))

# 2D structured zone, PhyDim=3, variable nb. of vertices, custom origin
dist_tree = generate_dist_block((11,21), 'S', comm, origin=[2.,3.,1.5])
# 2D unstructured zone, PhyDim=2, custom length direction
dist_tree = generate_dist_block(6, 'QUAD_4', comm, origin=[0., 0.], length=[(0.707,0.707), (-0.707,0.707)])

# 1D unstructured zone, PhyDim=3, custom origin and end point
dist_tree = generate_dist_block(6, 'BAR_2', comm, origin=[0.25, 0.25, 0.], length=[(1,0.,-1.)])

generate_dist_sphere(m, cgns_elmt_name, comm, origin=(0.0, 0.0, 0.0), radius=1.0)

Generate a distributed mesh with a spherical topology.

Returns a distributed CGNSTree containing a single CGNSBase_t and Zone_t. The kind and cell dimension of the zone are controlled by the cgns_elmt_name parameter:

  • "NFACE_n" produces an unstructured 3d zone with a NFace+NGon connectivity,

  • "NGON_n" produces an unstructured 2d zone with a NGon+Bar connectivity,

  • Other names must be in ["TRI_3", "TETRA_4"] and produces an unstructured 2d or 3d zone with corresponding standard elements.

In all cases, the created zone contains the grid coordinates and the relevant number of boundary conditions.

Spherical meshes are class I geodesic polyhedra (icosahedral). The number of vertices on the external surface is \(10m^2+2\).

Parameters
  • m (int) – Strictly positive integer controlling the number of vertices (see above)

  • cgns_elmt_name (str) – requested kind of elements

  • comm (MPIComm) – MPI communicator

  • origin (array, optional) – Coordinates of the origin of the generated mesh. Defaults to zero vector.

  • radius (float, optional) – Radius of the generated sphere. Defaults to 1.

Returns

CGNSTree – distributed cgns tree

Example

from mpi4py import MPI
import maia
import maia.pytree as PT

dist_tree = maia.factory.generate_dist_sphere(10, 'TRI_3', MPI.COMM_WORLD)
assert PT.Element.Type(PT.get_node_from_label(dist_tree, 'Elements_t')) == 'TRI_3'
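
The vertex-count formula above can be checked on this surface mesh; the following sketch assumes that the 2d TRI_3 zone contains exactly the external-surface vertices:

zone = PT.get_node_from_label(dist_tree, 'Zone_t')
assert PT.Zone.n_vtx(zone) == 10 * 10**2 + 2  # 10*m**2 + 2 with m = 10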

Partitioning

partition_dist_tree(dist_tree, comm, **kwargs)

Perform the partitioning operation: create a partitioned tree from the input distributed tree.

Important

Geometric information (such as boundary conditions, zone subregions, etc.) is reported on the partitioned tree; however, data fields (BCDataSet, FlowSolution, etc.) are not transferred automatically. Use the data_transfer keyword argument or see the Transfer module.

See reference documentation for the description of the keyword arguments.

Note: unstructured zones described by standard elements must have their Elements_t nodes ordered according to their dimension (either increasing or decreasing).

Parameters
  • dist_tree (CGNSDistTree) – Distributed tree

  • comm (MPIComm) – MPI communicator

  • **kwargs – Partitioning options

Returns

CGNSTree – partitioned cgns tree

Example

from mpi4py import MPI
import maia

comm = MPI.COMM_WORLD
i_rank, n_rank = comm.Get_rank(), comm.Get_size()
dist_tree  = maia.factory.generate_dist_block(10, 'Poly', comm)

#Basic use
part_tree = maia.factory.partition_dist_tree(dist_tree, comm)

#Crazy partitioning where each proc gets as many partitions as its rank + 1
n_part_tot = n_rank * (n_rank + 1) // 2
part_tree = maia.factory.partition_dist_tree(dist_tree, comm, \
    zone_to_parts={'Base/zone' : [1./n_part_tot for i in range(i_rank+1)]})
assert len(maia.pytree.get_all_Zone_t(part_tree)) == i_rank+1

Partitioning options

Partitioning can be customized with the following keyword arguments:

graph_part_tool

Method used to split unstructured blocks. Irrelevant for structured blocks.

Admissible values
  • parmetis, ptscotch: graph partitioning methods,

  • hilbert: geometric method (only for NGon connectivities),

  • gnum: cells are dispatched according to their absolute numbering.

Default value

parmetis, if installed; else ptscotch, if installed; hilbert otherwise.

zone_to_parts

Control the number, size and repartition of partitions. See Repartition.

Default value

Computed such that partitioning is balanced using maia.factory.partitioning.compute_balanced_weights().

part_interface_loc

GridLocation for the created partition interfaces. Pre-existing interfaces keep their original location.

Admissible values

FaceCenter, Vertex

Default value

FaceCenter for unstructured zones with NGon connectivities; Vertex otherwise.

reordering

Dictionary of reordering options, used to renumber the entities in each partitioned zone. See the corresponding documentation.

preserve_orientation

If True, the created interface faces are not reversed and keep their original orientation. Consequently, NGonElements can have a zero left parent and a non-zero right parent. Only relevant for U/NGon partitions.

Default value

False
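
These options are passed directly as keyword arguments; a minimal sketch reusing the dist_tree and comm from the example above, with documented values only:

part_tree = maia.factory.partition_dist_tree(dist_tree, comm,
    graph_part_tool='hilbert',        # geometric method, requires NGon connectivities
    part_interface_loc='FaceCenter',
    preserve_orientation=False)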

In addition, we provide these convenience options:

data_transfer

Automatically transfer some data after partitioning. The selection of data to transfer is done per label:

Admissible values
  • a list of supported labels,

  • shortcut 'FIELDS' which selects all labels supported by fields transfer,

  • shortcut 'ALL' which selects 'FIELDS' and UserDefinedData_t.

Default value

Empty list [] (nothing is transferred).
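
For instance, requesting the transfer of all field-like data with the documented 'FIELDS' shortcut (a sketch; dist_tree and comm as in the partitioning example above):

part_tree = maia.factory.partition_dist_tree(dist_tree, comm, data_transfer=['FIELDS'])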

Repartition

The number, size, and repartition (over the processes) of the created partitions are controlled through the zone_to_parts keyword argument: each process must provide a dictionary mapping the path of each distributed zone to a list of floats. For a given distributed zone, the current process will receive as many partitions as the length of the list (a missing path or an empty list means no partition); the size of each partition corresponds to the associated value in the list, expressed as a fraction of the input zone. For a given distributed zone, the fractions must sum to 1 across all the processes.

This dictionary can be created by hand; for convenience, Maia provides three functions in the maia.factory.partitioning module to build it.
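
For example, a hand-built repartition for a run on exactly two processes could look as follows (a sketch; 'Base/zone' is the path of the zone created by generate_dist_block):

from mpi4py import MPI
import maia
comm = MPI.COMM_WORLD

dist_tree = maia.factory.generate_dist_block(10, 'Poly', comm)

# This sketch assumes exactly 2 MPI processes: rank 0 receives two quarter-size
# partitions, rank 1 a single half-size one, so the fractions sum to 1.
if comm.Get_rank() == 0:
  zone_to_parts = {'Base/zone': [0.25, 0.25]}
else:
  zone_to_parts = {'Base/zone': [0.5]}
part_tree = maia.factory.partition_dist_tree(dist_tree, comm, zone_to_parts=zone_to_parts)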

compute_regular_weights(tree, comm, n_part=1)

Compute a basic zone_to_parts repartition.

Each process requests n_part partitions on each original zone (n_part can differ from one process to another). The parts produced from a given zone all have the same weight: each one holds the number of cells of the zone divided by the total number of partitions requested for this zone.

Parameters
  • tree (CGNSTree) – (Minimal) distributed tree: only zone names and sizes are needed

  • comm (MPI.Comm) – MPI Communicator

  • n_part (int, optional) – Number of partitions the current process produces on each zone. Defaults to 1.

Returns

dict – zone_to_parts dictionary expected by partition_dist_tree()

Example

from mpi4py import MPI
import maia
from   maia.utils.test_utils import mesh_dir
from   maia.factory import partitioning as mpart

dist_tree = maia.io.file_to_dist_tree(mesh_dir/'S_twoblocks.yaml', MPI.COMM_WORLD)

zone_to_parts = mpart.compute_regular_weights(dist_tree, MPI.COMM_WORLD)
if MPI.COMM_WORLD.Get_size() == 2:
  assert zone_to_parts == {'Base/Large': [0.5], 'Base/Small': [0.5]}

compute_balanced_weights(tree, comm, only_uniform=False)

Compute a well balanced zone_to_parts repartition.

Each process requests zero or more partitions, with possibly heterogeneous weights, on each original zone, such that:

  • the computational load is well balanced, i.e. the total number of cells is nearly the same on every process,

  • the number of splits within a given zone is minimized,

  • produced partitions are not too small.

Note

Heterogeneous weights are not managed by ptscotch. If the repartition was computed with this function, use parmetis as graph_part_tool for the partitioning, or set the optional argument only_uniform to True.

Parameters
  • tree (CGNSTree) – (Minimal) distributed tree: only zone names and sizes are needed

  • comm (MPI.Comm) – MPI Communicator

  • only_uniform (bool, optional) – If True, an alternative balancing method is used in order to request homogeneous weights, at the cost of a less even load balance. Defaults to False.

Returns

dict – zone_to_parts dictionary expected by partition_dist_tree()

Example

from mpi4py import MPI
import maia
from   maia.utils.test_utils import mesh_dir
from   maia.factory import partitioning as mpart

comm = MPI.COMM_WORLD
dist_tree = maia.io.file_to_dist_tree(mesh_dir/'S_twoblocks.yaml', comm)

zone_to_parts = mpart.compute_balanced_weights(dist_tree, comm)
if comm.Get_size() == 2 and comm.Get_rank() == 0:
  assert zone_to_parts == {'Base/Large': [0.375], 'Base/Small': [1.0]}
if comm.Get_size() == 2 and comm.Get_rank() == 1:
  assert zone_to_parts == {'Base/Large': [0.625]}
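
Following the note above, a sketch combining the uniform variant with a ptscotch partitioning; an unstructured block is generated here since graph_part_tool is irrelevant for structured zones:

dist_tree_u = maia.factory.generate_dist_block(10, 'Poly', comm)
zone_to_parts = mpart.compute_balanced_weights(dist_tree_u, comm, only_uniform=True)
part_tree = maia.factory.partition_dist_tree(dist_tree_u, comm,
    zone_to_parts=zone_to_parts, graph_part_tool='ptscotch')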

compute_nosplit_weights(tree, comm)

Compute a zone_to_parts repartition without splitting the blocks.

The initial blocks are simply distributed over the available processes, minimizing the number of cells assigned to each process. This can lead to a poor load balancing, and possibly to some processes having no partition at all.

Parameters
  • tree (CGNSTree) – (Minimal) distributed tree: only zone names and sizes are needed

  • comm (MPI.Comm) – MPI Communicator

Returns

dict – zone_to_parts dictionary expected by partition_dist_tree()

Example

from mpi4py import MPI
import maia
from   maia.utils.test_utils import mesh_dir
from   maia.factory import partitioning as mpart

dist_tree = maia.io.file_to_dist_tree(mesh_dir/'S_twoblocks.yaml', MPI.COMM_WORLD)

zone_to_parts = mpart.compute_nosplit_weights(dist_tree, MPI.COMM_WORLD)
if MPI.COMM_WORLD.Get_size() == 2:
  assert len(zone_to_parts) == 1

Reordering options

For unstructured zones, the reordering options are transmitted to ParaDiGM in order to control the renumbering of mesh entities in the partitions.

Kwarg | Admissible values | Effect | Default
cell_renum_method | "NONE", "RANDOM", "HILBERT", "CUTHILL", "CACHEBLOCKING", "CACHEBLOCKING2", "HPC" | Renumbering method for the cells | NONE
face_renum_method | "NONE", "RANDOM", "LEXICOGRAPHIC" | Renumbering method for the faces | NONE
vtx_renum_method | "NONE", "SORT_INT_EXT" | Renumbering method for the vertices | NONE
n_cell_per_cache | Integer >= 0 | Specific to cacheblocking | 0
n_face_per_pack | Integer >= 0 | Specific to cacheblocking | 0
graph_part_tool | "parmetis", "ptscotch", "hyperplane" | Graph partitioning library to use for renumbering | Same as partitioning tool
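
A sketch of how these options reach partition_dist_tree() through the reordering dictionary (keys as listed above; dist_tree and comm as in the partitioning example):

part_tree = maia.factory.partition_dist_tree(dist_tree, comm,
    reordering={'cell_renum_method': 'HILBERT',
                'face_renum_method': 'LEXICOGRAPHIC'})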

Recovering from partitions

recover_dist_tree(part_tree, comm, data_transfer=[])

Regenerate a distributed tree from a partitioned tree.

The partitioned tree should have been created using Maia, or must at least contain GlobalNumbering nodes as defined by Maia (see Partitioned trees).

Important

Similarly to partition_dist_tree(), this function reports only geometric information (such as boundary conditions, zone subregions, etc.) on the created dist_tree; data fields are not transferred automatically. Use the data_transfer keyword argument or see the Transfer module.

Parameters
  • part_tree (CGNSPartTree) – Partitioned CGNS Tree

  • comm (MPIComm) – MPI communicator

  • data_transfer (list of str) – Labels of data nodes to transfer during operation (see data_transfer)

Returns

CGNSDistTree – distributed cgns tree

Example

from mpi4py import MPI
import maia
comm = MPI.COMM_WORLD

dist_tree_bck  = maia.factory.generate_dist_block(5, 'TETRA_4', comm)
part_tree = maia.factory.partition_dist_tree(dist_tree_bck, comm)

dist_tree = maia.factory.recover_dist_tree(part_tree, comm)
assert maia.pytree.is_same_tree(dist_tree, dist_tree_bck)

Managing full trees

Almost no function in Maia supports full (not distributed) CGNS trees, but such trees can be useful for compatibility with sequential libraries.

full_to_dist_tree(full_tree, comm, owner=None)

Generate a distributed tree from a standard (full) CGNS Tree.

The input tree can be defined on a single process (using owner=rank_id), or an identical copy can be held by every process (using owner=None).

In both cases, the output distributed tree is balanced over all the processes.

Parameters
  • full_tree (CGNSTree) – Full (not distributed) tree.

  • comm (MPIComm) – MPI communicator

  • owner (int, optional) – MPI rank holding the input tree. Defaults to None.

Returns

CGNSTree – distributed cgns tree

Example

from   mpi4py import MPI
import maia
from   maia.utils.test_utils import mesh_dir
comm = MPI.COMM_WORLD

if comm.Get_rank() == 0:
  tree = maia.io.read_tree(mesh_dir/'S_twoblocks.yaml')
else:
  tree = None
dist_tree = maia.factory.full_to_dist_tree(tree, comm, owner=0)
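
If every process holds an identical copy of the tree, owner can be left to None; a sketch reusing the imports above:

tree = maia.io.read_tree(mesh_dir/'S_twoblocks.yaml')  # every rank reads the same file
dist_tree = maia.factory.full_to_dist_tree(tree, comm, owner=None)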

dist_to_full_tree(dist_tree, comm, target=0)

Generate a standard (full) CGNS Tree from a distributed tree.

The output tree can be used with sequential tools, but is no longer compatible with Maia's parallel algorithms.

Parameters
  • dist_tree (CGNSDistTree) – Distributed CGNS tree

  • comm (MPIComm) – MPI communicator

  • target (int, optional) – MPI rank holding the output tree. Defaults to 0.

Returns

CGNSTree – Full (not distributed) tree or None

Example

from   mpi4py import MPI
import maia
comm = MPI.COMM_WORLD

dist_tree = maia.factory.generate_dist_sphere(10, 'TRI_3', comm)
full_tree = maia.factory.dist_to_full_tree(dist_tree, comm, target=0)

if comm.Get_rank() == 0:
  assert maia.pytree.get_node_from_name(full_tree, ":CGNS#Distribution") is None
else:
  assert full_tree is None