libMesh
libMesh::SubdomainPartitioner Class Reference

The SubdomainPartitioner partitions the elements in "chunks" of user-specified subdomain ids. More...

#include <subdomain_partitioner.h>

Inheritance diagram for libMesh::SubdomainPartitioner: (diagram omitted; SubdomainPartitioner derives from Partitioner)

Public Member Functions

 SubdomainPartitioner ()
 Constructor. Default-initializes the internal Partitioner object to a MetisPartitioner so the class is usable out of the box; the internal Partitioner type can be customized later. More...
 
virtual UniquePtr< Partitioner > clone () const libmesh_override
 
UniquePtr< Partitioner > & internal_partitioner ()
 Get a reference to the Partitioner used internally by the SubdomainPartitioner. More...
 
virtual void partition (MeshBase &mesh, const unsigned int n)
 Partitions the MeshBase into n parts by setting processor_id() on Nodes and Elems. More...
 
virtual void partition (MeshBase &mesh)
 Partitions the MeshBase into mesh.n_processors() parts by setting processor_id() on Nodes and Elems. More...
 
virtual void partition_range (MeshBase &, MeshBase::element_iterator, MeshBase::element_iterator, const unsigned int)
 Partitions elements in the range (it, end) into n parts. More...
 
void repartition (MeshBase &mesh, const unsigned int n)
 Repartitions the MeshBase into n parts. More...
 
void repartition (MeshBase &mesh)
 Repartitions the MeshBase into mesh.n_processors() parts. More...
 
virtual void attach_weights (ErrorVector *)
 Attach weights that can be used for partitioning. More...
 

Static Public Member Functions

static void partition_unpartitioned_elements (MeshBase &mesh)
 These functions assign processor IDs to newly-created elements (in parallel) which are currently assigned to processor 0. More...
 
static void partition_unpartitioned_elements (MeshBase &mesh, const unsigned int n)
 
static void set_parent_processor_ids (MeshBase &mesh)
 This function is called after partitioning to set the processor IDs for the inactive parent elements. More...
 
static void set_node_processor_ids (MeshBase &mesh)
 This function is called after partitioning to set the processor IDs for the nodes. More...
 

Public Attributes

std::vector< std::set< subdomain_id_type > > chunks
 Each entry of "chunks" represents a set of subdomains which are to be partitioned together. More...
 

Protected Member Functions

virtual void _do_partition (MeshBase &mesh, const unsigned int n) libmesh_override
 Partition the MeshBase into n subdomains. More...
 
void single_partition (MeshBase &mesh)
 Trivially "partitions" the mesh for one processor. More...
 
void single_partition_range (MeshBase::element_iterator it, MeshBase::element_iterator end)
 Slightly generalized version of single_partition which acts on a range of elements defined by the pair of iterators (it, end). More...
 
virtual void _do_repartition (MeshBase &mesh, const unsigned int n)
 This is the actual re-partitioning method which can be overridden in derived classes. More...
 

Protected Attributes

UniquePtr< Partitioner > _internal_partitioner
 The internal Partitioner we use. More...
 
ErrorVector * _weights
 The weights that might be used for partitioning. More...
 

Static Protected Attributes

static const dof_id_type communication_blocksize = 1000000
 The blocksize to use when doing blocked parallel communication. More...
 

Detailed Description

The SubdomainPartitioner partitions the elements in "chunks" of user-specified subdomain ids.

Once all the chunks are partitioned, the overall mesh partitioning is simply the union of the chunk partitionings. For example, if the "chunks" vector is given by:
chunks[0] = {1, 2, 4}
chunks[1] = {3, 7, 8}
chunks[2] = {5, 6}
then we will call the internal Partitioner three times: once for subdomains 1, 2, and 4, once for subdomains 3, 7, and 8, and once for subdomains 5 and 6.

Note
This Partitioner may produce highly non-optimal communication patterns and is likely to place geometrically disjoint sets of elements on the same processor. Its intended use is to facilitate load balancing: if the user knows that certain subdomains (or groups of subdomains) are more expensive to compute than others, they can ensure that those subdomains are partitioned more or less evenly among the available processors by specifying them together in a single entry of the "chunks" vector.
Author
John W. Peterson
Date
2017
Independently partitions chunks of subdomains and combines the results.

Definition at line 54 of file subdomain_partitioner.h.

Constructor & Destructor Documentation

libMesh::SubdomainPartitioner::SubdomainPartitioner ( )

Constructor. Default-initializes the internal Partitioner object to a MetisPartitioner so the class is usable out of the box; the internal Partitioner type can be customized later.

Definition at line 29 of file subdomain_partitioner.C.

Referenced by clone().

29 SubdomainPartitioner::SubdomainPartitioner () :
30  _internal_partitioner(new MetisPartitioner)
31 {}

Member Function Documentation

void libMesh::SubdomainPartitioner::_do_partition (MeshBase & mesh, const unsigned int n)
protected, virtual

Partition the MeshBase into n subdomains.

Implements libMesh::Partitioner.

Definition at line 34 of file subdomain_partitioner.C.

References _internal_partitioner, libMesh::MeshBase::active_subdomain_set_elements_begin(), libMesh::MeshBase::active_subdomain_set_elements_end(), chunks, end, and libMesh::Partitioner::single_partition().

36 {
37  libmesh_assert_greater (n, 0);
38 
39  // Check for an easy return. If the user has not specified any
40  // entries in the chunks vector, we just do a single partitioning.
41  if ((n == 1) || chunks.empty())
42  {
43  this->single_partition (mesh);
44  return;
45  }
46 
47  // Now actually do the partitioning.
48  LOG_SCOPE ("_do_partition()", "SubdomainPartitioner");
49 
50  // For each chunk, construct an iterator range for the set of
51  // subdomains in question, and pass it to the internal Partitioner.
52  for (std::size_t c=0; c<chunks.size(); ++c)
53  {
54  MeshBase::element_iterator
55  it = mesh.active_subdomain_set_elements_begin(chunks[c]),
56  end = mesh.active_subdomain_set_elements_end(chunks[c]);
57 
58  _internal_partitioner->partition_range(mesh, it, end, n);
59  }
60 }
virtual void libMesh::Partitioner::_do_repartition (MeshBase & mesh, const unsigned int n)
protected, virtual, inherited

This is the actual re-partitioning method which can be overridden in derived classes.

Note
The default behavior is to simply call the partition function.

Reimplemented in libMesh::ParmetisPartitioner.

Definition at line 204 of file partitioner.h.

References libMesh::Partitioner::_do_partition().

Referenced by libMesh::Partitioner::repartition().

205  { this->_do_partition (mesh, n); }
virtual void libMesh::Partitioner::attach_weights (ErrorVector *)
virtual, inherited

Attach weights that can be used for partitioning.

This ErrorVector should be exactly the same on every processor and should have mesh->max_elem_id() entries.

Reimplemented in libMesh::MetisPartitioner.

Definition at line 170 of file partitioner.h.

References libMesh::Partitioner::_do_partition(), end, libMesh::Partitioner::single_partition(), and libMesh::Partitioner::single_partition_range().

170 { libmesh_not_implemented(); }
virtual UniquePtr<Partitioner> libMesh::SubdomainPartitioner::clone ( ) const
virtual
Returns
A copy of this partitioner wrapped in a smart pointer.

Implements libMesh::Partitioner.

Definition at line 68 of file subdomain_partitioner.h.

References SubdomainPartitioner().

69  {
70  return UniquePtr<Partitioner>(new SubdomainPartitioner());
71  }
UniquePtr<Partitioner>& libMesh::SubdomainPartitioner::internal_partitioner ( )

Get a reference to the Partitioner used internally by the SubdomainPartitioner.

Note
The internal Partitioner cannot also be a SubdomainPartitioner, otherwise an infinite loop will result. To have this class use e.g. the space-filling curve Partitioner internally, one could do:
sp.internal_partitioner().reset(new SFCPartitioner);

Definition at line 95 of file subdomain_partitioner.h.

References _internal_partitioner.

95 { return _internal_partitioner; }
void libMesh::Partitioner::partition (MeshBase & mesh, const unsigned int n)
virtual, inherited

Partitions the MeshBase into n parts by setting processor_id() on Nodes and Elems.

Note
If you are implementing a new type of Partitioner, you most likely do not want to override the partition() function, see instead the protected virtual _do_partition() method below. The partition() function is responsible for doing a lot of libmesh-internals-specific setup and finalization before and after the _do_partition() function is called. The only responsibility of the _do_partition() function, on the other hand, is to set the processor IDs of the elements according to a specific partitioning algorithm. See, e.g. MetisPartitioner for an example.

Definition at line 49 of file partitioner.C.

References libMesh::Partitioner::_do_partition(), libMesh::ParallelObject::comm(), libMesh::MeshTools::libmesh_assert_valid_remote_elems(), mesh, std::min(), libMesh::MeshBase::n_active_elem(), libMesh::Partitioner::partition_unpartitioned_elements(), libMesh::MeshBase::redistribute(), libMesh::MeshBase::set_n_partitions(), libMesh::Partitioner::set_node_processor_ids(), libMesh::Partitioner::set_parent_processor_ids(), libMesh::Partitioner::single_partition(), and libMesh::MeshBase::update_post_partitioning().

Referenced by libMesh::ParmetisPartitioner::_do_repartition(), main(), libMesh::Partitioner::partition(), and libMesh::Partitioner::~Partitioner().

51 {
52  libmesh_parallel_only(mesh.comm());
53 
54  // BSK - temporary fix while redistribution is integrated 6/26/2008
55  // Uncomment this to not repartition in parallel
56  // if (!mesh.is_serial())
57  // return;
58 
59  // we cannot partition into more pieces than we have
60  // active elements!
61  const unsigned int n_parts =
62  static_cast<unsigned int>
63  (std::min(mesh.n_active_elem(), static_cast<dof_id_type>(n)));
64 
65  // Set the number of partitions in the mesh
66  mesh.set_n_partitions()=n_parts;
67 
68  if (n_parts == 1)
69  {
70  this->single_partition (mesh);
71  return;
72  }
73 
74  // First assign a temporary partitioning to any unpartitioned elements
75  Partitioner::partition_unpartitioned_elements(mesh);
76 
77  // Call the partitioning function
78  this->_do_partition(mesh,n_parts);
79 
80  // Set the parent's processor ids
81  Partitioner::set_parent_processor_ids(mesh);
82 
83  // Redistribute elements if necessary, before setting node processor
84  // ids, to make sure those will be set consistently
85  mesh.redistribute();
86 
87 #ifdef DEBUG
88  MeshTools::libmesh_assert_valid_remote_elems(mesh);
89 
90  // Messed up elem processor_id()s can leave us without the child
91  // elements we need to restrict vectors on a distributed mesh
92  MeshTools::libmesh_assert_valid_procids<Elem>(mesh);
93 #endif
94 
95  // Set the node's processor ids
96  Partitioner::set_node_processor_ids(mesh);
97 
98 #ifdef DEBUG
99  MeshTools::libmesh_assert_valid_procids<Elem>(mesh);
100 #endif
101 
102  // Give derived Mesh classes a chance to update any cached data to
103  // reflect the new partitioning
104  mesh.update_post_partitioning();
105 }
void libMesh::Partitioner::partition (MeshBase & mesh)
virtual, inherited

Partitions the MeshBase into mesh.n_processors() parts by setting processor_id() on Nodes and Elems.

Note
If you are implementing a new type of Partitioner, you most likely do not want to override the partition() function, see instead the protected virtual _do_partition() method below. The partition() function is responsible for doing a lot of libmesh-internals-specific setup and finalization before and after the _do_partition() function is called. The only responsibility of the _do_partition() function, on the other hand, is to set the processor IDs of the elements according to a specific partitioning algorithm. See, e.g. MetisPartitioner for an example.

Definition at line 42 of file partitioner.C.

References libMesh::ParallelObject::n_processors(), and libMesh::Partitioner::partition().

43 {
44  this->partition(mesh,mesh.n_processors());
45 }
virtual void libMesh::Partitioner::partition_range (MeshBase &, MeshBase::element_iterator, MeshBase::element_iterator, const unsigned int)
virtual, inherited

Partitions elements in the range (it, end) into n parts.

The mesh from which the iterators are created must also be passed in, since it is a parallel object and has other useful information in it.

Although partition_range() is part of the public Partitioner interface, it should not generally be called by applications. Its main purpose is to support the SubdomainPartitioner, which uses it internally to individually partition ranges of elements before combining them into the final partitioning. Most of the time, the protected _do_partition() function is implemented in terms of partition_range() by passing a range which includes all the elements of the Mesh.

Reimplemented in libMesh::CentroidPartitioner, libMesh::MappedSubdomainPartitioner, libMesh::SFCPartitioner, libMesh::LinearPartitioner, and libMesh::MetisPartitioner.

Definition at line 120 of file partitioner.h.

References libMesh::Partitioner::partition_unpartitioned_elements(), libMesh::Partitioner::repartition(), libMesh::Partitioner::set_node_processor_ids(), and libMesh::Partitioner::set_parent_processor_ids().

124  { libmesh_not_implemented(); }
void libMesh::Partitioner::partition_unpartitioned_elements (MeshBase & mesh)
static, inherited

These functions assign processor IDs to newly-created elements (in parallel) which are currently assigned to processor 0.

Definition at line 175 of file partitioner.C.

References libMesh::ParallelObject::n_processors().

Referenced by libMesh::Partitioner::partition(), libMesh::Partitioner::partition_range(), and libMesh::Partitioner::repartition().

176 {
177  partition_unpartitioned_elements (mesh, mesh.n_processors());
178 }
void libMesh::Partitioner::partition_unpartitioned_elements (MeshBase & mesh, const unsigned int n_subdomains)
static, inherited

Definition at line 182 of file partitioner.C.

References libMesh::ParallelObject::comm(), libMesh::MeshTools::create_bounding_box(), distance(), end, libMesh::MeshCommunication::find_global_indices(), libMesh::MeshTools::n_elem(), libMesh::ParallelObject::n_processors(), libMesh::DofObject::processor_id(), libMesh::MeshBase::unpartitioned_elements_begin(), and libMesh::MeshBase::unpartitioned_elements_end().

184 {
185  MeshBase::element_iterator it = mesh.unpartitioned_elements_begin();
186  const MeshBase::element_iterator end = mesh.unpartitioned_elements_end();
187 
188  const dof_id_type n_unpartitioned_elements = MeshTools::n_elem (it, end);
189 
190  // the unpartitioned elements must exist on all processors. If the range is empty on one
191  // it is empty on all, and we can quit right here.
192  if (!n_unpartitioned_elements) return;
193 
194  // find the target subdomain sizes
195  std::vector<dof_id_type> subdomain_bounds(mesh.n_processors());
196 
197  for (processor_id_type pid=0; pid<mesh.n_processors(); pid++)
198  {
199  dof_id_type tgt_subdomain_size = 0;
200 
201  // watch out for the case that n_subdomains < n_processors
202  if (pid < n_subdomains)
203  {
204  tgt_subdomain_size = n_unpartitioned_elements/n_subdomains;
205 
206  if (pid < n_unpartitioned_elements%n_subdomains)
207  tgt_subdomain_size++;
208 
209  }
210 
211  //libMesh::out << "pid, #= " << pid << ", " << tgt_subdomain_size << std::endl;
212  if (pid == 0)
213  subdomain_bounds[0] = tgt_subdomain_size;
214  else
215  subdomain_bounds[pid] = subdomain_bounds[pid-1] + tgt_subdomain_size;
216  }
217 
218  libmesh_assert_equal_to (subdomain_bounds.back(), n_unpartitioned_elements);
219 
220  // create the unique mapping for all unpartitioned elements independent of partitioning
221  // determine the global indexing for all the unpartitioned elements
222  std::vector<dof_id_type> global_indices;
223 
224  // Calling this on all processors a unique range in [0,n_unpartitioned_elements) is constructed.
225  // Only the indices for the elements we pass in are returned in the array.
226  MeshCommunication().find_global_indices (mesh.comm(),
227  MeshTools::create_bounding_box(mesh), it, end,
228  global_indices);
229 
230  for (dof_id_type cnt=0; it != end; ++it)
231  {
232  Elem * elem = *it;
233 
234  libmesh_assert_less (cnt, global_indices.size());
235  const dof_id_type global_index =
236  global_indices[cnt++];
237 
238  libmesh_assert_less (global_index, subdomain_bounds.back());
239  libmesh_assert_less (global_index, n_unpartitioned_elements);
240 
241  const processor_id_type subdomain_id =
242  cast_int<processor_id_type>
243  (std::distance(subdomain_bounds.begin(),
244  std::upper_bound(subdomain_bounds.begin(),
245  subdomain_bounds.end(),
246  global_index)));
247  libmesh_assert_less (subdomain_id, n_subdomains);
248 
249  elem->processor_id() = subdomain_id;
250  //libMesh::out << "assigning " << global_index << " to " << subdomain_id << std::endl;
251  }
252 }
void libMesh::Partitioner::repartition (MeshBase & mesh, const unsigned int n)
inherited

Repartitions the MeshBase into n parts.

(Some partitioning algorithms can repartition more efficiently than computing a new partitioning from scratch.) The default behavior is to simply call this->partition(mesh,n).

Definition at line 116 of file partitioner.C.

References libMesh::Partitioner::_do_repartition(), std::min(), libMesh::MeshBase::n_active_elem(), libMesh::Partitioner::partition_unpartitioned_elements(), libMesh::MeshBase::set_n_partitions(), libMesh::Partitioner::set_node_processor_ids(), libMesh::Partitioner::set_parent_processor_ids(), and libMesh::Partitioner::single_partition().

Referenced by libMesh::Partitioner::partition_range(), and libMesh::Partitioner::repartition().

118 {
119  // we cannot partition into more pieces than we have
120  // active elements!
121  const unsigned int n_parts =
122  static_cast<unsigned int>
123  (std::min(mesh.n_active_elem(), static_cast<dof_id_type>(n)));
124 
125  // Set the number of partitions in the mesh
126  mesh.set_n_partitions()=n_parts;
127 
128  if (n_parts == 1)
129  {
130  this->single_partition (mesh);
131  return;
132  }
133 
134  // First assign a temporary partitioning to any unpartitioned elements
135  Partitioner::partition_unpartitioned_elements(mesh);
136 
137  // Call the partitioning function
138  this->_do_repartition(mesh,n_parts);
139 
140  // Set the parent's processor ids
141  Partitioner::set_parent_processor_ids(mesh);
142 
143  // Set the node's processor ids
144  Partitioner::set_node_processor_ids(mesh);
145 }
void libMesh::Partitioner::repartition (MeshBase & mesh)
inherited

Repartitions the MeshBase into mesh.n_processors() parts.

This is provided because some partitioning algorithms can repartition more efficiently than they can compute a new partitioning from scratch.

Definition at line 109 of file partitioner.C.

References libMesh::ParallelObject::n_processors(), and libMesh::Partitioner::repartition().

110 {
111  this->repartition(mesh,mesh.n_processors());
112 }
void libMesh::Partitioner::set_node_processor_ids (MeshBase & mesh)
static, inherited

This function is called after partitioning to set the processor IDs for the nodes.

By definition, a Node's processor ID is the minimum processor ID for all of the elements which share the node.

Definition at line 416 of file partitioner.C.

References libMesh::MeshBase::active_element_ptr_range(), libMesh::ParallelObject::comm(), libMesh::DofObject::invalid_processor_id, libMesh::libmesh_assert(), mesh, std::min(), libMesh::MeshTools::n_elem(), libMesh::Elem::n_nodes(), libMesh::ParallelObject::n_processors(), libMesh::Elem::node_ptr(), libMesh::MeshBase::node_ptr_range(), libMesh::MeshBase::node_ref(), libMesh::MeshBase::not_active_elements_begin(), libMesh::MeshBase::not_active_elements_end(), libMesh::ParallelObject::processor_id(), libMesh::DofObject::processor_id(), libMesh::Parallel::Communicator::send_receive(), libMesh::MeshBase::subactive_elements_begin(), libMesh::MeshBase::subactive_elements_end(), libMesh::MeshBase::unpartitioned_elements_begin(), and libMesh::MeshBase::unpartitioned_elements_end().

Referenced by libMesh::UnstructuredMesh::all_first_order(), libMesh::Partitioner::partition(), libMesh::MeshBase::partition(), libMesh::Partitioner::partition_range(), libMesh::XdrIO::read(), libMesh::Partitioner::repartition(), and libMesh::BoundaryInfo::sync().

417 {
418  LOG_SCOPE("set_node_processor_ids()","Partitioner");
419 
420  // This function must be run on all processors at once
421  libmesh_parallel_only(mesh.comm());
422 
423  // If we have any unpartitioned elements at this
424  // stage there is a problem
425  libmesh_assert (MeshTools::n_elem(mesh.unpartitioned_elements_begin(),
426  mesh.unpartitioned_elements_end()) == 0);
427 
428 
429  // const dof_id_type orig_n_local_nodes = mesh.n_local_nodes();
430 
431  // libMesh::err << "[" << mesh.processor_id() << "]: orig_n_local_nodes="
432  // << orig_n_local_nodes << std::endl;
433 
434  // Build up request sets. Each node is currently owned by a processor because
435  // it is connected to an element owned by that processor. However, during the
436  // repartitioning phase that element may have been assigned a new processor id, but
437  // it is still resident on the original processor. We need to know where to look
438  // for new ids before assigning new ids, otherwise we may be asking the wrong processors
439  // for the wrong information.
440  //
441  // The only remaining issue is what to do with unpartitioned nodes. Since they are required
442  // to live on all processors we can simply rely on ourselves to number them properly.
443  std::vector<std::vector<dof_id_type>>
444  requested_node_ids(mesh.n_processors());
445 
446  // Loop over all the nodes, count the ones on each processor. We can skip ourself
447  std::vector<dof_id_type> ghost_nodes_from_proc(mesh.n_processors(), 0);
448 
449  for (auto & node : mesh.node_ptr_range())
450  {
451  libmesh_assert(node);
452  const processor_id_type current_pid = node->processor_id();
453  if (current_pid != mesh.processor_id() &&
454  current_pid != DofObject::invalid_processor_id)
455  {
456  libmesh_assert_less (current_pid, ghost_nodes_from_proc.size());
457  ghost_nodes_from_proc[current_pid]++;
458  }
459  }
460 
461  // We know how many objects live on each processor, so reserve()
462  // space for each.
463  for (processor_id_type pid=0; pid != mesh.n_processors(); ++pid)
464  requested_node_ids[pid].reserve(ghost_nodes_from_proc[pid]);
465 
466  // We need to get the new pid for each node from the processor
467  // which *currently* owns the node. We can safely skip ourself
468  for (auto & node : mesh.node_ptr_range())
469  {
470  libmesh_assert(node);
471  const processor_id_type current_pid = node->processor_id();
472  if (current_pid != mesh.processor_id() &&
473  current_pid != DofObject::invalid_processor_id)
474  {
475  libmesh_assert_less (current_pid, requested_node_ids.size());
476  libmesh_assert_less (requested_node_ids[current_pid].size(),
477  ghost_nodes_from_proc[current_pid]);
478  requested_node_ids[current_pid].push_back(node->id());
479  }
480 
481  // Unset any previously-set node processor ids
482  node->invalidate_processor_id();
483  }
484 
485  // Loop over all the active elements
486  for (auto & elem : mesh.active_element_ptr_range())
487  {
488  libmesh_assert(elem);
489 
490  libmesh_assert_not_equal_to (elem->processor_id(), DofObject::invalid_processor_id);
491 
492  // For each node, set the processor ID to the min of
493  // its current value and this Element's processor id.
494  //
495  // TODO: we would probably get better parallel partitioning if
496  // we did something like "min for even numbered nodes, max for
497  // odd numbered". We'd need to be careful about how that would
498  // affect solution ordering for I/O, though.
499  for (unsigned int n=0; n<elem->n_nodes(); ++n)
500  elem->node_ptr(n)->processor_id() = std::min(elem->node_ptr(n)->processor_id(),
501  elem->processor_id());
502  }
503 
504  // And loop over the subactive elements, but don't reassign
505  // nodes that are already active on another processor.
506  MeshBase::element_iterator sub_it = mesh.subactive_elements_begin();
507  const MeshBase::element_iterator sub_end = mesh.subactive_elements_end();
508 
509  for ( ; sub_it != sub_end; ++sub_it)
510  {
511  Elem * elem = *sub_it;
512  libmesh_assert(elem);
513 
514  libmesh_assert_not_equal_to (elem->processor_id(), DofObject::invalid_processor_id);
515 
516  for (unsigned int n=0; n<elem->n_nodes(); ++n)
517  if (elem->node_ptr(n)->processor_id() == DofObject::invalid_processor_id)
518  elem->node_ptr(n)->processor_id() = elem->processor_id();
519  }
520 
521  // Same for the inactive elements -- we will have already gotten most of these
522  // nodes, *except* for the case of a parent with a subset of children which are
523  // ghost elements. In that case some of the parent nodes will not have been
524  // properly handled yet
525  MeshBase::element_iterator not_it = mesh.not_active_elements_begin();
526  const MeshBase::element_iterator not_end = mesh.not_active_elements_end();
527 
528  for ( ; not_it != not_end; ++not_it)
529  {
530  Elem * elem = *not_it;
531  libmesh_assert(elem);
532 
533  libmesh_assert_not_equal_to (elem->processor_id(), DofObject::invalid_processor_id);
534 
535  for (unsigned int n=0; n<elem->n_nodes(); ++n)
536  if (elem->node_ptr(n)->processor_id() == DofObject::invalid_processor_id)
537  elem->node_ptr(n)->processor_id() = elem->processor_id();
538  }
539 
540  // We can't assert that all nodes are connected to elements, because
541  // a DistributedMesh with NodeConstraints might have pulled in some
542  // remote nodes solely for evaluating those constraints.
543  // MeshTools::libmesh_assert_connected_nodes(mesh);
544 
545  // For such nodes, we'll do a sanity check later when making sure
546  // that we successfully reset their processor ids to something
547  // valid.
548 
549  // Next set node ids from other processors, excluding self
550  for (processor_id_type p=1; p != mesh.n_processors(); ++p)
551  {
552  // Trade my requests with processor procup and procdown
553  processor_id_type procup = cast_int<processor_id_type>
554  ((mesh.processor_id() + p) % mesh.n_processors());
555  processor_id_type procdown = cast_int<processor_id_type>
556  ((mesh.n_processors() + mesh.processor_id() - p) %
557  mesh.n_processors());
558  std::vector<dof_id_type> request_to_fill;
559  mesh.comm().send_receive(procup, requested_node_ids[procup],
560  procdown, request_to_fill);
561 
562  // Fill those requests in-place
563  for (std::size_t i=0; i != request_to_fill.size(); ++i)
564  {
565  Node & node = mesh.node_ref(request_to_fill[i]);
566  const processor_id_type new_pid = node.processor_id();
567 
568  // We may have an invalid processor_id() on nodes that have been
569  // "detached" from coarsened-away elements but that have not yet
570  // themselves been removed.
571  // libmesh_assert_not_equal_to (new_pid, DofObject::invalid_processor_id);
572  // libmesh_assert_less (new_pid, mesh.n_partitions()); // this is the correct test --
573  request_to_fill[i] = new_pid; // the number of partitions may
574  } // not equal the number of processors
575 
576  // Trade back the results
577  std::vector<dof_id_type> filled_request;
578  mesh.comm().send_receive(procdown, request_to_fill,
579  procup, filled_request);
580  libmesh_assert_equal_to (filled_request.size(), requested_node_ids[procup].size());
581 
582  // And copy the id changes we've now been informed of
583  for (std::size_t i=0; i != filled_request.size(); ++i)
584  {
585  Node & node = mesh.node_ref(requested_node_ids[procup][i]);
586 
587  // this is the correct test -- the number of partitions may
588  // not equal the number of processors
589 
590  // But: we may have an invalid processor_id() on nodes that
591  // have been "detached" from coarsened-away elements but
592  // that have not yet themselves been removed.
593  // libmesh_assert_less (filled_request[i], mesh.n_partitions());
594 
595  node.processor_id(cast_int<processor_id_type>(filled_request[i]));
596  }
597  }
598 
599 #ifdef DEBUG
600  MeshTools::libmesh_assert_valid_procids<Node>(mesh);
601 #endif
602 }
void libMesh::Partitioner::set_parent_processor_ids ( MeshBase & mesh)
static, inherited

This function is called after partitioning to set the processor IDs for the inactive parent elements.

A parent is assigned the minimum processor ID among its children.

Definition at line 256 of file partitioner.C.

References libMesh::MeshBase::active_element_ptr_range(), libMesh::Elem::active_family_tree(), libMesh::MeshBase::ancestor_elements_begin(), libMesh::MeshBase::ancestor_elements_end(), libMesh::Elem::child_ref_range(), libMesh::ParallelObject::comm(), libMesh::Partitioner::communication_blocksize, libMesh::DofObject::id(), libMesh::DofObject::invalid_processor_id, libMesh::DofObject::invalidate_processor_id(), libMesh::MeshBase::is_serial(), libMesh::libmesh_assert(), libMesh::libmesh_ignore(), libMesh::MeshBase::max_elem_id(), std::min(), libMesh::Parallel::Communicator::min(), libMesh::MeshTools::n_elem(), libMesh::Elem::parent(), libMesh::processor_id(), libMesh::DofObject::processor_id(), libMesh::Elem::total_family_tree(), libMesh::MeshBase::unpartitioned_elements_begin(), and libMesh::MeshBase::unpartitioned_elements_end().

Referenced by libMesh::Partitioner::partition(), libMesh::Partitioner::partition_range(), and libMesh::Partitioner::repartition().

257 {
258  // Ignore the parameter when !LIBMESH_ENABLE_AMR
259  libmesh_ignore(mesh);
260 
261  LOG_SCOPE("set_parent_processor_ids()", "Partitioner");
262 
263 #ifdef LIBMESH_ENABLE_AMR
264 
265  // If the mesh is serial we have access to all the elements,
266  // in particular all the active ones. We can therefore set
267  // the parent processor ids indirectly through their children, and
268  // set the subactive processor ids while examining their active
269  // ancestors.
270  // By convention a parent is assigned to the minimum processor
271  // of all its children, and a subactive is assigned to the processor
272  // of its active ancestor.
273  if (mesh.is_serial())
274  {
275  for (auto & child : mesh.active_element_ptr_range())
276  {
277  // First set descendents
278  std::vector<const Elem *> subactive_family;
279  child->total_family_tree(subactive_family);
280  for (std::size_t i = 0; i != subactive_family.size(); ++i)
281  const_cast<Elem *>(subactive_family[i])->processor_id() = child->processor_id();
282 
283  // Then set ancestors
284  Elem * parent = child->parent();
285 
286  while (parent)
287  {
288  // invalidate the parent id, otherwise the min below
289  // will not work if the current parent id is less
290  // than all the children!
291  parent->invalidate_processor_id();
292 
293  for (auto & child : parent->child_ref_range())
294  {
295  libmesh_assert(!child.is_remote());
296  libmesh_assert_not_equal_to (child.processor_id(), DofObject::invalid_processor_id);
297  parent->processor_id() = std::min(parent->processor_id(),
298  child.processor_id());
299  }
300  parent = parent->parent();
301  }
302  }
303  }
304 
305  // When the mesh is parallel we cannot guarantee that parents have access to
306  // all their children.
307  else
308  {
309  // Setting subactive processor ids is easy: we can guarantee
310  // that children have access to all their parents.
311 
312  // Loop over all the active elements in the mesh
313  for (auto & child : mesh.active_element_ptr_range())
314  {
315  std::vector<const Elem *> subactive_family;
316  child->total_family_tree(subactive_family);
317  for (std::size_t i = 0; i != subactive_family.size(); ++i)
318  const_cast<Elem *>(subactive_family[i])->processor_id() = child->processor_id();
319  }
320 
321  // When the mesh is parallel we cannot guarantee that parents have access to
322  // all their children.
323 
324  // We will use a brute-force approach here. Each processor finds its parent
325  // elements and sets the parent pid to the minimum of its
326  // semilocal descendants.
327  // A global reduction is then performed to make sure the true minimum is found.
328  // As noted, this is required because we cannot guarantee that a parent has
329  // access to all its children on any single processor.
330  libmesh_parallel_only(mesh.comm());
331  libmesh_assert(MeshTools::n_elem(mesh.unpartitioned_elements_begin(),
332  mesh.unpartitioned_elements_end()) == 0);
333 
334  const dof_id_type max_elem_id = mesh.max_elem_id();
335 
336  std::vector<processor_id_type>
337  parent_processor_ids (std::min(communication_blocksize,
338  max_elem_id));
339 
340  for (dof_id_type blk=0, last_elem_id=0; last_elem_id<max_elem_id; blk++)
341  {
342  last_elem_id =
343  std::min(static_cast<dof_id_type>((blk+1)*communication_blocksize),
344  max_elem_id);
345  const dof_id_type first_elem_id = blk*communication_blocksize;
346 
347  std::fill (parent_processor_ids.begin(),
348  parent_processor_ids.end(),
349  DofObject::invalid_processor_id);
350 
351  // first build up local contributions to parent_processor_ids
352  MeshBase::element_iterator not_it = mesh.ancestor_elements_begin();
353  const MeshBase::element_iterator not_end = mesh.ancestor_elements_end();
354 
355  bool have_parent_in_block = false;
356 
357  for ( ; not_it != not_end; ++not_it)
358  {
359  Elem * parent = *not_it;
360 
361  const dof_id_type parent_idx = parent->id();
362  libmesh_assert_less (parent_idx, max_elem_id);
363 
364  if ((parent_idx >= first_elem_id) &&
365  (parent_idx < last_elem_id))
366  {
367  have_parent_in_block = true;
368  processor_id_type parent_pid = DofObject::invalid_processor_id;
369 
370  std::vector<const Elem *> active_family;
371  parent->active_family_tree(active_family);
372  for (std::size_t i = 0; i != active_family.size(); ++i)
373  parent_pid = std::min (parent_pid, active_family[i]->processor_id());
374 
375  const dof_id_type packed_idx = parent_idx - first_elem_id;
376  libmesh_assert_less (packed_idx, parent_processor_ids.size());
377 
378  parent_processor_ids[packed_idx] = parent_pid;
379  }
380  }
381 
382  // then find the global minimum
383  mesh.comm().min (parent_processor_ids);
384 
385  // and assign the ids, if we have a parent in this block.
386  if (have_parent_in_block)
387  for (not_it = mesh.ancestor_elements_begin();
388  not_it != not_end; ++not_it)
389  {
390  Elem * parent = *not_it;
391 
392  const dof_id_type parent_idx = parent->id();
393 
394  if ((parent_idx >= first_elem_id) &&
395  (parent_idx < last_elem_id))
396  {
397  const dof_id_type packed_idx = parent_idx - first_elem_id;
398  libmesh_assert_less (packed_idx, parent_processor_ids.size());
399 
400  const processor_id_type parent_pid =
401  parent_processor_ids[packed_idx];
402 
403  libmesh_assert_not_equal_to (parent_pid, DofObject::invalid_processor_id);
404 
405  parent->processor_id() = parent_pid;
406  }
407  }
408  }
409  }
410 
411 #endif // LIBMESH_ENABLE_AMR
412 }
void libMesh::Partitioner::single_partition ( MeshBase & mesh)
protected, inherited

Trivially "partitions" the mesh for one processor.

Simply loops through the elements and assigns all of them to processor 0. It is provided as a separate function so that derived classes may use it without reimplementing it.

Definition at line 151 of file partitioner.C.

References libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), and libMesh::Partitioner::single_partition_range().

Referenced by _do_partition(), libMesh::Partitioner::attach_weights(), libMesh::Partitioner::partition(), and libMesh::Partitioner::repartition().

152 {
153  this->single_partition_range(mesh.elements_begin(),
154  mesh.elements_end());
155 }
void libMesh::Partitioner::single_partition_range ( MeshBase::element_iterator it,
                                                    MeshBase::element_iterator end )
protected, inherited

Slightly generalized version of single_partition which acts on a range of elements defined by the pair of iterators (it, end).

Definition at line 159 of file partitioner.C.

References end, libMesh::Elem::n_nodes(), libMesh::Elem::node_ptr(), and libMesh::DofObject::processor_id().

Referenced by libMesh::Partitioner::attach_weights(), libMesh::LinearPartitioner::partition_range(), libMesh::MappedSubdomainPartitioner::partition_range(), libMesh::CentroidPartitioner::partition_range(), and libMesh::Partitioner::single_partition().

161 {
162  LOG_SCOPE("single_partition_range()", "Partitioner");
163 
164  for ( ; it != end; ++it)
165  {
166  Elem * elem = *it;
167  elem->processor_id() = 0;
168 
169  // Assign all this element's nodes to processor 0 as well.
170  for (unsigned int n=0; n<elem->n_nodes(); ++n)
171  elem->node_ptr(n)->processor_id() = 0;
172  }
173 }

Member Data Documentation

UniquePtr<Partitioner> libMesh::SubdomainPartitioner::_internal_partitioner
protected

The internal Partitioner we use.

Public access via the internal_partitioner() member function.

Definition at line 102 of file subdomain_partitioner.h.

Referenced by _do_partition(), and internal_partitioner().

ErrorVector* libMesh::Partitioner::_weights
protected, inherited

The weights that might be used for partitioning.

Definition at line 216 of file partitioner.h.

Referenced by libMesh::MetisPartitioner::attach_weights().

std::vector<std::set<subdomain_id_type> > libMesh::SubdomainPartitioner::chunks

Each entry of "chunks" represents a set of subdomains which are to be partitioned together.

The internal Partitioner will be called once for each entry of chunks, and the resulting partitioning will simply be the union of all these partitionings.

Definition at line 79 of file subdomain_partitioner.h.

Referenced by _do_partition().

const dof_id_type libMesh::Partitioner::communication_blocksize = 1000000
static, protected, inherited

The blocksize to use when doing blocked parallel communication.

This limits the maximum vector size which can be used in a single communication step.

Definition at line 211 of file partitioner.h.

Referenced by libMesh::Partitioner::set_parent_processor_ids().


The documentation for this class was generated from the following files: