libMesh
libMesh::MetisPartitioner Class Reference

The MetisPartitioner uses the Metis graph partitioner to partition the elements. More...

#include <metis_partitioner.h>

Inheritance diagram for libMesh::MetisPartitioner:

Public Member Functions

 MetisPartitioner ()
 Constructor. More...
 
virtual UniquePtr< Partitioner > clone () const libmesh_override
 
virtual void attach_weights (ErrorVector *weights) libmesh_override
 Attach weights that can be used for partitioning. More...
 
virtual void partition_range (MeshBase &mesh, MeshBase::element_iterator it, MeshBase::element_iterator end, const unsigned int n) libmesh_override
 Called by the SubdomainPartitioner to partition elements in the range (it, end). More...
 
virtual void partition (MeshBase &mesh, const unsigned int n)
 Partitions the MeshBase into n parts by setting processor_id() on Nodes and Elems. More...
 
virtual void partition (MeshBase &mesh)
 Partitions the MeshBase into mesh.n_processors() by setting processor_id() on Nodes and Elems. More...
 
void repartition (MeshBase &mesh, const unsigned int n)
 Repartitions the MeshBase into n parts. More...
 
void repartition (MeshBase &mesh)
 Repartitions the MeshBase into mesh.n_processors() parts. More...
 

Static Public Member Functions

static void partition_unpartitioned_elements (MeshBase &mesh)
 These functions assign processor IDs to newly-created elements (in parallel) which are currently assigned to processor 0. More...
 
static void partition_unpartitioned_elements (MeshBase &mesh, const unsigned int n)
 
static void set_parent_processor_ids (MeshBase &mesh)
 This function is called after partitioning to set the processor IDs for the inactive parent elements. More...
 
static void set_node_processor_ids (MeshBase &mesh)
 This function is called after partitioning to set the processor IDs for the nodes. More...
 

Protected Member Functions

virtual void _do_partition (MeshBase &mesh, const unsigned int n) libmesh_override
 Partition the MeshBase into n subdomains. More...
 
void single_partition (MeshBase &mesh)
 Trivially "partitions" the mesh for one processor. More...
 
void single_partition_range (MeshBase::element_iterator it, MeshBase::element_iterator end)
 Slightly generalized version of single_partition which acts on a range of elements defined by the pair of iterators (it, end). More...
 
virtual void _do_repartition (MeshBase &mesh, const unsigned int n)
 This is the actual re-partitioning method which can be overridden in derived classes. More...
 

Protected Attributes

ErrorVector * _weights
 The weights that might be used for partitioning. More...
 

Static Protected Attributes

static const dof_id_type communication_blocksize = 1000000
 The blocksize to use when doing blocked parallel communication. More...
 

Detailed Description

The MetisPartitioner uses the Metis graph partitioner to partition the elements.

Author
Benjamin S. Kirk
Date
2003

Partitioner which interfaces with the METIS library.

Definition at line 37 of file metis_partitioner.h.
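
As a usage illustration, the following minimal sketch builds a small replicated mesh and partitions it into four pieces. It is not taken from the libMesh examples; the mesh-generation calls are assumptions made only to keep the sketch self-contained.

  #include "libmesh/libmesh.h"
  #include "libmesh/replicated_mesh.h"
  #include "libmesh/mesh_generation.h"
  #include "libmesh/metis_partitioner.h"

  using namespace libMesh;

  int main (int argc, char ** argv)
  {
    LibMeshInit init (argc, argv);

    // Build a small 2D mesh purely as example input.
    ReplicatedMesh mesh (init.comm());
    MeshTools::Generation::build_square (mesh, 20, 20);

    // Partition the active elements into 4 pieces.  The base-class
    // partition() call does the setup, dispatches to
    // MetisPartitioner::_do_partition(), and then assigns node and
    // parent processor ids.
    MetisPartitioner partitioner;
    partitioner.partition (mesh, 4);

    return 0;
  }

After partition() returns, each Elem and Node reports its owner through processor_id().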

Constructor & Destructor Documentation

libMesh::MetisPartitioner::MetisPartitioner ( )

Constructor.

Definition at line 44 of file metis_partitioner.h.

Referenced by clone().

44 {}

Member Function Documentation

void libMesh::MetisPartitioner::_do_partition ( MeshBase & mesh, const unsigned int n )
[protected, virtual]

Partition the MeshBase into n subdomains.

Implements libMesh::Partitioner.

Definition at line 510 of file metis_partitioner.C.

References libMesh::MeshBase::active_elements_begin(), and libMesh::MeshBase::active_elements_end().

Referenced by attach_weights().

512 {
513  this->partition_range(mesh,
514  mesh.active_elements_begin(),
515  mesh.active_elements_end(),
516  n_pieces);
517 }
virtual void libMesh::Partitioner::_do_repartition ( MeshBase & mesh, const unsigned int n )
[protected, virtual, inherited]

This is the actual re-partitioning method which can be overridden in derived classes.

Note
The default behavior is to simply call the partition function.

Reimplemented in libMesh::ParmetisPartitioner.

Definition at line 204 of file partitioner.h.

References libMesh::Partitioner::_do_partition().

Referenced by libMesh::Partitioner::repartition().

205  { this->_do_partition (mesh, n); }
virtual void libMesh::MetisPartitioner::attach_weights ( ErrorVector * weights )
[virtual]

Attach weights that can be used for partitioning.

This ErrorVector should be exactly the same on every processor and should have mesh->max_elem_id() entries.

Reimplemented from libMesh::Partitioner.

Definition at line 54 of file metis_partitioner.h.

References _do_partition(), libMesh::Partitioner::_weights, end, mesh, and partition_range().

54 { _weights = weights; }
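
As a usage sketch: here error_per_cell is a hypothetical, globally consistent ErrorVector sized to mesh.max_elem_id() (for example, produced by an error estimator); it is not part of this class.

  #include "libmesh/error_vector.h"
  #include "libmesh/mesh_base.h"
  #include "libmesh/metis_partitioner.h"

  using namespace libMesh;

  void weighted_partition (MeshBase & mesh, ErrorVector & error_per_cell)
  {
    MetisPartitioner partitioner;

    // Only the pointer is stored, so error_per_cell must outlive
    // the call to partition().
    partitioner.attach_weights (&error_per_cell);

    // Elements with larger weights count for more when METIS
    // balances the pieces (see the vwgt setup in partition_range()).
    partitioner.partition (mesh, mesh.n_processors());
  }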
virtual UniquePtr<Partitioner> libMesh::MetisPartitioner::clone ( ) const
[virtual]

Returns
A copy of this partitioner wrapped in a smart pointer.
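
For illustration, a hypothetical helper that hands back a polymorphic copy (assuming a build in which UniquePtr is std::unique_ptr):

  #include "libmesh/metis_partitioner.h"

  using namespace libMesh;

  UniquePtr<Partitioner> make_partitioner ()
  {
    MetisPartitioner metis;
    // A fresh MetisPartitioner behind a Partitioner pointer.
    return metis.clone ();
  }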

Implements libMesh::Partitioner.

Definition at line 49 of file metis_partitioner.h.

References MetisPartitioner().

50  {
51  return UniquePtr<Partitioner>(new MetisPartitioner());
52  }
void libMesh::Partitioner::partition ( MeshBase & mesh, const unsigned int n )
[virtual, inherited]

Partitions the MeshBase into n parts by setting processor_id() on Nodes and Elems.

Note
If you are implementing a new type of Partitioner, you most likely do not want to override the partition() function; see instead the protected virtual _do_partition() method below. The partition() function is responsible for doing a lot of libmesh-internals-specific setup and finalization before and after the _do_partition() function is called. The only responsibility of the _do_partition() function, on the other hand, is to set the processor IDs of the elements according to a specific partitioning algorithm. See, e.g., MetisPartitioner for an example.
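
To make the note concrete, here is a minimal sketch of a hypothetical Partitioner subclass (not part of libMesh) that implements only the pure virtual clone() and _do_partition() hooks and leaves all setup and finalization to the inherited partition():

  #include "libmesh/partitioner.h"
  #include "libmesh/mesh_base.h"
  #include "libmesh/elem.h"
  #include "libmesh/libmesh_common.h"

  using namespace libMesh;

  // Hypothetical round-robin partitioner, for illustration only.
  class RoundRobinPartitioner : public Partitioner
  {
  public:
    virtual UniquePtr<Partitioner> clone () const libmesh_override
    {
      return UniquePtr<Partitioner>(new RoundRobinPartitioner);
    }

  protected:
    virtual void _do_partition (MeshBase & mesh,
                                const unsigned int n) libmesh_override
    {
      // Only element processor ids are set here; partition() takes
      // care of unpartitioned elements, parents, nodes, and
      // redistribution.
      for (auto & elem : mesh.active_element_ptr_range())
        elem->processor_id() =
          cast_int<processor_id_type>(elem->id() % n);
    }
  };

Calling RoundRobinPartitioner().partition(mesh, n) then runs the same setup and finalization shown in the listing below, with only the element-assignment step swapped out.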

Definition at line 49 of file partitioner.C.

References libMesh::Partitioner::_do_partition(), libMesh::ParallelObject::comm(), libMesh::MeshTools::libmesh_assert_valid_remote_elems(), mesh, std::min(), libMesh::MeshBase::n_active_elem(), libMesh::Partitioner::partition_unpartitioned_elements(), libMesh::MeshBase::redistribute(), libMesh::MeshBase::set_n_partitions(), libMesh::Partitioner::set_node_processor_ids(), libMesh::Partitioner::set_parent_processor_ids(), libMesh::Partitioner::single_partition(), and libMesh::MeshBase::update_post_partitioning().

Referenced by libMesh::ParmetisPartitioner::_do_repartition(), main(), libMesh::Partitioner::partition(), and libMesh::Partitioner::~Partitioner().

51 {
52  libmesh_parallel_only(mesh.comm());
53 
54  // BSK - temporary fix while redistribution is integrated 6/26/2008
55  // Uncomment this to not repartition in parallel
56  // if (!mesh.is_serial())
57  // return;
58 
59  // we cannot partition into more pieces than we have
60  // active elements!
61  const unsigned int n_parts =
62  static_cast<unsigned int>
63  (std::min(mesh.n_active_elem(), static_cast<dof_id_type>(n)));
64 
65  // Set the number of partitions in the mesh
66  mesh.set_n_partitions()=n_parts;
67 
68  if (n_parts == 1)
69  {
70  this->single_partition (mesh);
71  return;
72  }
73 
74  // First assign a temporary partitioning to any unpartitioned elements
75  Partitioner::partition_unpartitioned_elements(mesh, n_parts);
76 
77  // Call the partitioning function
78  this->_do_partition(mesh,n_parts);
79 
80  // Set the parent's processor ids
81  Partitioner::set_parent_processor_ids(mesh);
82 
83  // Redistribute elements if necessary, before setting node processor
84  // ids, to make sure those will be set consistently
85  mesh.redistribute();
86 
87 #ifdef DEBUG
88  MeshTools::libmesh_assert_valid_remote_elems(mesh);
89 
90  // Messed up elem processor_id()s can leave us without the child
91  // elements we need to restrict vectors on a distributed mesh
92  MeshTools::libmesh_assert_valid_procids<Elem>(mesh);
93 #endif
94 
95  // Set the node's processor ids
96  Partitioner::set_node_processor_ids(mesh);
97 
98 #ifdef DEBUG
99  MeshTools::libmesh_assert_valid_procids<Elem>(mesh);
100 #endif
101 
102  // Give derived Mesh classes a chance to update any cached data to
103  // reflect the new partitioning
104  mesh.update_post_partitioning();
105 }
void libMesh::Partitioner::partition ( MeshBase & mesh )
[virtual, inherited]

Partitions the MeshBase into mesh.n_processors() by setting processor_id() on Nodes and Elems.

Note
If you are implementing a new type of Partitioner, you most likely do not want to override the partition() function; see instead the protected virtual _do_partition() method below. The partition() function is responsible for doing a lot of libmesh-internals-specific setup and finalization before and after the _do_partition() function is called. The only responsibility of the _do_partition() function, on the other hand, is to set the processor IDs of the elements according to a specific partitioning algorithm. See, e.g., MetisPartitioner for an example.

Definition at line 42 of file partitioner.C.

References libMesh::ParallelObject::n_processors(), and libMesh::Partitioner::partition().

43 {
44  this->partition(mesh,mesh.n_processors());
45 }
void libMesh::MetisPartitioner::partition_range ( MeshBase & mesh, MeshBase::element_iterator it, MeshBase::element_iterator end, const unsigned int n )
[virtual]

Called by the SubdomainPartitioner to partition elements in the range (it, end).
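
Although its primary caller is SubdomainPartitioner, partition_range() can also be invoked directly. A hedged sketch, assuming a serial (replicated) mesh in which subdomain id 2 exists:

  #include "libmesh/mesh_base.h"
  #include "libmesh/metis_partitioner.h"

  using namespace libMesh;

  // Hypothetical helper: partition one subdomain's active elements.
  void partition_subdomain_two (MeshBase & mesh)
  {
    MetisPartitioner metis;

    // Only elements in the range get new processor ids; the rest of
    // the mesh is left untouched.
    metis.partition_range (mesh,
                           mesh.active_subdomain_elements_begin (2),
                           mesh.active_subdomain_elements_end (2),
                           4);
  }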

Reimplemented from libMesh::Partitioner.

Definition at line 56 of file metis_partitioner.C.

References libMesh::Elem::active(), libMesh::as_range(), libMesh::Parallel::Communicator::broadcast(), libMesh::ParallelObject::comm(), libMesh::vectormap< Key, Tp >::count(), libMesh::MeshTools::create_bounding_box(), libMesh::Elem::dim(), distance(), end, libMesh::err, libMesh::vectormap< Key, Tp >::find(), libMesh::MeshCommunication::find_global_indices(), libMesh::Elem::find_interior_neighbors(), libMesh::DofObject::id(), libMesh::vectormap< Key, Tp >::insert(), libMesh::Elem::interior_parent(), libMesh::MeshBase::is_serial(), libMesh::libmesh_assert(), libmesh_nullptr, std::max(), libMesh::Elem::n_nodes(), libMesh::Elem::neighbor_ptr(), libMesh::Elem::neighbor_ptr_range(), libMesh::METIS_CSR_Graph< IndexType >::offsets, libMesh::SFCPartitioner::partition_range(), libMesh::METIS_CSR_Graph< IndexType >::prep_n_nonzeros(), libMesh::METIS_CSR_Graph< IndexType >::prepare_for_use(), libMesh::ParallelObject::processor_id(), libMesh::DofObject::processor_id(), libMesh::MeshBase::query_elem_ptr(), and libMesh::METIS_CSR_Graph< IndexType >::vals.

Referenced by attach_weights().

60 {
61  libmesh_assert_greater (n_pieces, 0);
62  libmesh_assert (mesh.is_serial());
63 
64  // Check for an easy return
65  if (n_pieces == 1)
66  {
67  this->single_partition_range (beg, end);
68  return;
69  }
70 
71  // What to do if the Metis library IS NOT present
72 #ifndef LIBMESH_HAVE_METIS
73 
74  libmesh_here();
75  libMesh::err << "ERROR: The library has been built without" << std::endl
76  << "Metis support. Using a space-filling curve" << std::endl
77  << "partitioner instead!" << std::endl;
78 
79  SFCPartitioner sfcp;
80  sfcp.partition_range (mesh, beg, end, n_pieces);
81 
82  // What to do if the Metis library IS present
83 #else
84 
85  LOG_SCOPE("partition_range()", "MetisPartitioner");
86 
87  const dof_id_type n_range_elem = std::distance(beg, end);
88 
89  // Metis will only consider the elements in the range.
90  // We need to map the range element ids into a
91  // contiguous range. Further, we want the unique range indexing to be
92  // independent of the element ordering, otherwise a circular dependency
93  // can result in which the partitioning depends on the ordering which
94  // depends on the partitioning...
95  vectormap<dof_id_type, dof_id_type> global_index_map;
96  global_index_map.reserve (n_range_elem);
97 
98  {
99  std::vector<dof_id_type> global_index;
100 
101  MeshCommunication().find_global_indices (mesh.comm(),
102  MeshTools::create_bounding_box (mesh),
103  beg, end, global_index);
104 
105  libmesh_assert_equal_to (global_index.size(), n_range_elem);
106 
107  MeshBase::element_iterator it = beg;
108  for (std::size_t cnt=0; it != end; ++it)
109  {
110  const Elem * elem = *it;
111 
112  global_index_map.insert (std::make_pair(elem->id(), global_index[cnt++]));
113  }
114  libmesh_assert_equal_to (global_index_map.size(), n_range_elem);
115  }
116 
117  // If we have boundary elements in this mesh, we want to account for
118  // the connectivity between them and interior elements. We can find
119  // interior elements from boundary elements, but we need to build up
120  // a lookup map to do the reverse.
121  typedef std::unordered_multimap<const Elem *, const Elem *> map_type;
122  map_type interior_to_boundary_map;
123 
124  {
125  MeshBase::element_iterator it = beg;
126  for (; it != end; ++it)
127  {
128  const Elem * elem = *it;
129 
130  // If we don't have an interior_parent then there's nothing
131  // to look us up.
132  if ((elem->dim() >= LIBMESH_DIM) ||
133  !elem->interior_parent())
134  continue;
135 
136  // get all relevant interior elements
137  std::set<const Elem *> neighbor_set;
138  elem->find_interior_neighbors(neighbor_set);
139 
140  std::set<const Elem *>::iterator n_it = neighbor_set.begin();
141  for (; n_it != neighbor_set.end(); ++n_it)
142  {
143  // FIXME - non-const versions of the std::set<const Elem
144  // *> returning methods would be nice
145  Elem * neighbor = const_cast<Elem *>(*n_it);
146 
147 #if defined(LIBMESH_HAVE_UNORDERED_MULTIMAP) || \
148  defined(LIBMESH_HAVE_TR1_UNORDERED_MULTIMAP) || \
149  defined(LIBMESH_HAVE_HASH_MULTIMAP) || \
150  defined(LIBMESH_HAVE_EXT_HASH_MULTIMAP)
151  interior_to_boundary_map.insert(std::make_pair(neighbor, elem));
152 #else
153  interior_to_boundary_map.insert(interior_to_boundary_map.begin(),
154  std::make_pair(neighbor, elem));
155 #endif
156  }
157  }
158  }
159 
160  // Data structure that Metis will fill up on processor 0 and broadcast.
161  std::vector<Metis::idx_t> part(n_range_elem);
162 
163  // Invoke METIS, but only on processor 0.
164  // Then broadcast the resulting decomposition
165  if (mesh.processor_id() == 0)
166  {
167  // Data structures and parameters needed only on processor 0 by Metis.
168  // std::vector<Metis::idx_t> options(5);
169  std::vector<Metis::idx_t> vwgt(n_range_elem);
170 
171  Metis::idx_t
172  n = static_cast<Metis::idx_t>(n_range_elem), // number of "nodes" (elements) in the graph
173  // wgtflag = 2, // weights on vertices only, none on edges
174  // numflag = 0, // C-style 0-based numbering
175  nparts = static_cast<Metis::idx_t>(n_pieces), // number of subdomains to create
176  edgecut = 0; // the numbers of edges cut by the resulting partition
177 
178  // Set the options
179  // options[0] = 0; // use default options
180 
181  // build the graph
182  METIS_CSR_Graph<Metis::idx_t> csr_graph;
183 
184  csr_graph.offsets.resize(n_range_elem + 1, 0);
185 
186  // Local scope for these
187  {
188  // build the graph in CSR format. Note that
189  // the edges in the graph will correspond to
190  // face neighbors
191 
192 #ifdef LIBMESH_ENABLE_AMR
193  std::vector<const Elem *> neighbors_offspring;
194 #endif
195 
196 #ifndef NDEBUG
197  std::size_t graph_size=0;
198 #endif
199 
200  // (1) first pass - get the row sizes for each element by counting the number
201  // of face neighbors. Also populate the vwght array if necessary
202  MeshBase::element_iterator it = beg;
203  for (; it != end; ++it)
204  {
205  const Elem * elem = *it;
206 
207  const dof_id_type elem_global_index =
208  global_index_map[elem->id()];
209 
210  libmesh_assert_less (elem_global_index, vwgt.size());
211 
212  // maybe there is a better weight?
213  // The weight is used to define what a balanced graph is
214  if (!_weights)
215  vwgt[elem_global_index] = elem->n_nodes();
216  else
217  vwgt[elem_global_index] = static_cast<Metis::idx_t>((*_weights)[elem->id()]);
218 
219  unsigned int num_neighbors = 0;
220 
221  // Loop over the element's neighbors. An element
222  // adjacency corresponds to a face neighbor
223  for (auto neighbor : elem->neighbor_ptr_range())
224  {
225  if (neighbor != libmesh_nullptr)
226  {
227  // If the neighbor is active, but is not in the
228  // range of elements being partitioned, treat it
229  // as a NULL neighbor.
230  if (neighbor->active() && !global_index_map.count(neighbor->id()))
231  continue;
232 
233  // If the neighbor is active treat it
234  // as a connection
235  if (neighbor->active())
236  num_neighbors++;
237 
238 #ifdef LIBMESH_ENABLE_AMR
239 
240  // Otherwise we need to find all of the
241  // neighbor's children that are connected to
242  // us and add them
243  else
244  {
245  // The side of the neighbor to which
246  // we are connected
247  const unsigned int ns =
248  neighbor->which_neighbor_am_i (elem);
249  libmesh_assert_less (ns, neighbor->n_neighbors());
250 
251  // Get all the active children (& grandchildren, etc...)
252  // of the neighbor.
253 
254  // FIXME - this is the wrong thing, since we
255  // should be getting the active family tree on
256  // our side only. But adding too many graph
257  // links may cause hanging nodes to tend to be
258  // on partition interiors, which would reduce
259  // communication overhead for constraint
260  // equations, so we'll leave it.
261  neighbor->active_family_tree (neighbors_offspring);
262 
263  // Get all the neighbor's children that
264  // live on that side and are thus connected
265  // to us
266  for (std::size_t nc=0; nc<neighbors_offspring.size(); nc++)
267  {
268  const Elem * child =
269  neighbors_offspring[nc];
270 
271  // Skip neighbor offspring which are not in the range of elements being partitioned.
272  if (!global_index_map.count(child->id()))
273  continue;
274 
275  // This does not assume a level-1 mesh.
276  // Note that since children have sides numbered
277  // coincident with the parent then this is a sufficient test.
278  if (child->neighbor_ptr(ns) == elem)
279  {
280  libmesh_assert (child->active());
281  num_neighbors++;
282  }
283  }
284  }
285 
286 #endif /* ifdef LIBMESH_ENABLE_AMR */
287 
288  }
289  }
290 
291  // Check for any interior neighbors
292  if ((elem->dim() < LIBMESH_DIM) && elem->interior_parent())
293  {
294  // get all relevant interior elements
295  std::set<const Elem *> neighbor_set;
296  elem->find_interior_neighbors(neighbor_set);
297 
298  num_neighbors += neighbor_set.size();
299  }
300 
301  // Check for any boundary neighbors
302  typedef map_type::iterator map_it_type;
303  std::pair<map_it_type, map_it_type>
304  bounds = interior_to_boundary_map.equal_range(elem);
305  num_neighbors += std::distance(bounds.first, bounds.second);
306 
307  csr_graph.prep_n_nonzeros(elem_global_index, num_neighbors);
308 #ifndef NDEBUG
309  graph_size += num_neighbors;
310 #endif
311  }
312 
313  csr_graph.prepare_for_use();
314 
315  // (2) second pass - fill the compressed adjacency array
316  it = beg;
317 
318  for (; it != end; ++it)
319  {
320  const Elem * elem = *it;
321 
322  const dof_id_type elem_global_index =
323  global_index_map[elem->id()];
324 
325  unsigned int connection=0;
326 
327  // Loop over the element's neighbors. An element
328  // adjacency corresponds to a face neighbor
329  for (auto neighbor : elem->neighbor_ptr_range())
330  {
331  if (neighbor != libmesh_nullptr)
332  {
333  // If the neighbor is active, but is not in the
334  // range of elements being partitioned, treat it
335  // as a NULL neighbor.
336  if (neighbor->active() && !global_index_map.count(neighbor->id()))
337  continue;
338 
339  // If the neighbor is active treat it
340  // as a connection
341  if (neighbor->active())
342  csr_graph(elem_global_index, connection++) = global_index_map[neighbor->id()];
343 
344 #ifdef LIBMESH_ENABLE_AMR
345 
346  // Otherwise we need to find all of the
347  // neighbor's children that are connected to
348  // us and add them
349  else
350  {
351  // The side of the neighbor to which
352  // we are connected
353  const unsigned int ns =
354  neighbor->which_neighbor_am_i (elem);
355  libmesh_assert_less (ns, neighbor->n_neighbors());
356 
357  // Get all the active children (& grandchildren, etc...)
358  // of the neighbor.
359  neighbor->active_family_tree (neighbors_offspring);
360 
361  // Get all the neighbor's children that
362  // live on that side and are thus connected
363  // to us
364  for (std::size_t nc=0; nc<neighbors_offspring.size(); nc++)
365  {
366  const Elem * child =
367  neighbors_offspring[nc];
368 
369  // Skip neighbor offspring which are not in the range of elements being partitioned.
370  if (!global_index_map.count(child->id()))
371  continue;
372 
373  // This does not assume a level-1 mesh.
374  // Note that since children have sides numbered
375  // coincident with the parent then this is a sufficient test.
376  if (child->neighbor_ptr(ns) == elem)
377  {
378  libmesh_assert (child->active());
379 
380  csr_graph(elem_global_index, connection++) = global_index_map[child->id()];
381  }
382  }
383  }
384 
385 #endif /* ifdef LIBMESH_ENABLE_AMR */
386 
387  }
388  }
389 
390  if ((elem->dim() < LIBMESH_DIM) &&
391  elem->interior_parent())
392  {
393  // get all relevant interior elements
394  std::set<const Elem *> neighbor_set;
395  elem->find_interior_neighbors(neighbor_set);
396 
397  std::set<const Elem *>::iterator n_it = neighbor_set.begin();
398  for (; n_it != neighbor_set.end(); ++n_it)
399  {
400  const Elem * neighbor = *n_it;
401 
402  // Not all interior neighbors are necessarily in
403  // the same Mesh (hence not in the global_index_map).
404  // This will be the case when partitioning a
405  // BoundaryMesh, whose elements all have
406  // interior_parents() that belong to some other
407  // Mesh.
408  const Elem * queried_elem = mesh.query_elem_ptr(neighbor->id());
409 
410  // Compare the neighbor and the queried_elem
411  // pointers, make sure they are the same.
412  if (queried_elem && queried_elem == neighbor)
413  {
414  vectormap<dof_id_type, dof_id_type>::const_iterator global_index_map_it =
415  global_index_map.find(neighbor->id());
416 
417  // If the interior_neighbor is in the Mesh but
418  // not in the global_index_map, we have other issues.
419  if (global_index_map_it == global_index_map.end())
420  libmesh_error_msg("Interior neighbor with id " << neighbor->id() << " not found in global_index_map.");
421 
422  else
423  csr_graph(elem_global_index, connection++) = global_index_map_it->second;
424  }
425  }
426  }
427 
428  // Check for any boundary neighbors
429  for (const auto & pr : as_range(interior_to_boundary_map.equal_range(elem)))
430  {
431  const Elem * neighbor = pr.second;
432  csr_graph(elem_global_index, connection++) =
433  global_index_map[neighbor->id()];
434  }
435  }
436 
437  // We create a non-empty vals for a disconnected graph, to
438  // work around a segfault from METIS.
439  libmesh_assert_equal_to (csr_graph.vals.size(),
440  std::max(graph_size, std::size_t(1)));
441  } // done building the graph
442 
443  Metis::idx_t ncon = 1;
444 
445  // Select which type of partitioning to create
446 
447  // Use recursive if the number of partitions is less than or equal to 8
448  if (n_pieces <= 8)
449  Metis::METIS_PartGraphRecursive(&n,
450  &ncon,
451  &csr_graph.offsets[0],
452  &csr_graph.vals[0],
453  &vwgt[0],
454  libmesh_nullptr,
455  libmesh_nullptr,
456  &nparts,
457  libmesh_nullptr,
458  libmesh_nullptr,
459  libmesh_nullptr,
460  &edgecut,
461  &part[0]);
462 
463  // Otherwise use kway
464  else
465  Metis::METIS_PartGraphKway(&n,
466  &ncon,
467  &csr_graph.offsets[0],
468  &csr_graph.vals[0],
469  &vwgt[0],
470  libmesh_nullptr,
471  libmesh_nullptr,
472  &nparts,
473  libmesh_nullptr,
474  libmesh_nullptr,
475  libmesh_nullptr,
476  &edgecut,
477  &part[0]);
478 
479  } // end processor 0 part
480 
481  // Broadcast the resulting partition
482  mesh.comm().broadcast(part);
483 
484  // Assign the returned processor ids. The part array contains
485  // the processor id for each active element, but in terms of
486  // the contiguous indexing we defined above
487  {
488  MeshBase::element_iterator it = beg;
489  for (; it!=end; ++it)
490  {
491  Elem * elem = *it;
492 
493  libmesh_assert (global_index_map.count(elem->id()));
494 
495  const dof_id_type elem_global_index =
496  global_index_map[elem->id()];
497 
498  libmesh_assert_less (elem_global_index, part.size());
499  const processor_id_type elem_procid =
500  static_cast<processor_id_type>(part[elem_global_index]);
501 
502  elem->processor_id() = elem_procid;
503  }
504  }
505 #endif
506 }
void libMesh::Partitioner::partition_unpartitioned_elements ( MeshBase & mesh )
[static, inherited]

These functions assign processor IDs to newly-created elements (in parallel) which are currently assigned to processor 0.

Definition at line 175 of file partitioner.C.

References libMesh::ParallelObject::n_processors().

Referenced by libMesh::Partitioner::partition(), libMesh::Partitioner::partition_range(), and libMesh::Partitioner::repartition().

176 {
177  Partitioner::partition_unpartitioned_elements (mesh, mesh.n_processors());
178 }
void libMesh::Partitioner::partition_unpartitioned_elements ( MeshBase & mesh, const unsigned int n )
[static, inherited]

Definition at line 182 of file partitioner.C.

References libMesh::ParallelObject::comm(), libMesh::MeshTools::create_bounding_box(), distance(), end, libMesh::MeshCommunication::find_global_indices(), libMesh::MeshTools::n_elem(), libMesh::ParallelObject::n_processors(), libMesh::DofObject::processor_id(), libMesh::MeshBase::unpartitioned_elements_begin(), and libMesh::MeshBase::unpartitioned_elements_end().

184 {
185  MeshBase::element_iterator it = mesh.unpartitioned_elements_begin();
186  const MeshBase::element_iterator end = mesh.unpartitioned_elements_end();
187 
188  const dof_id_type n_unpartitioned_elements = MeshTools::n_elem (it, end);
189 
190  // the unpartitioned elements must exist on all processors. If the range is empty on one
191  // it is empty on all, and we can quit right here.
192  if (!n_unpartitioned_elements) return;
193 
194  // find the target subdomain sizes
195  std::vector<dof_id_type> subdomain_bounds(mesh.n_processors());
196 
197  for (processor_id_type pid=0; pid<mesh.n_processors(); pid++)
198  {
199  dof_id_type tgt_subdomain_size = 0;
200 
201  // watch out for the case that n_subdomains < n_processors
202  if (pid < n_subdomains)
203  {
204  tgt_subdomain_size = n_unpartitioned_elements/n_subdomains;
205 
206  if (pid < n_unpartitioned_elements%n_subdomains)
207  tgt_subdomain_size++;
208 
209  }
210 
211  //libMesh::out << "pid, #= " << pid << ", " << tgt_subdomain_size << std::endl;
212  if (pid == 0)
213  subdomain_bounds[0] = tgt_subdomain_size;
214  else
215  subdomain_bounds[pid] = subdomain_bounds[pid-1] + tgt_subdomain_size;
216  }
217 
218  libmesh_assert_equal_to (subdomain_bounds.back(), n_unpartitioned_elements);
219 
220  // create the unique mapping for all unpartitioned elements independent of partitioning
221  // determine the global indexing for all the unpartitioned elements
222  std::vector<dof_id_type> global_indices;
223 
224  // Calling this on all processors a unique range in [0,n_unpartitioned_elements) is constructed.
225  // Only the indices for the elements we pass in are returned in the array.
226  MeshCommunication().find_global_indices (mesh.comm(),
227  MeshTools::create_bounding_box (mesh), it, end,
228  global_indices);
229 
230  for (dof_id_type cnt=0; it != end; ++it)
231  {
232  Elem * elem = *it;
233 
234  libmesh_assert_less (cnt, global_indices.size());
235  const dof_id_type global_index =
236  global_indices[cnt++];
237 
238  libmesh_assert_less (global_index, subdomain_bounds.back());
239  libmesh_assert_less (global_index, n_unpartitioned_elements);
240 
241  const processor_id_type subdomain_id =
242  cast_int<processor_id_type>
243  (std::distance(subdomain_bounds.begin(),
244  std::upper_bound(subdomain_bounds.begin(),
245  subdomain_bounds.end(),
246  global_index)));
247  libmesh_assert_less (subdomain_id, n_subdomains);
248 
249  elem->processor_id() = subdomain_id;
250  //libMesh::out << "assigning " << global_index << " to " << subdomain_id << std::endl;
251  }
252 }
void libMesh::Partitioner::repartition ( MeshBase & mesh, const unsigned int n )
[inherited]

Repartitions the MeshBase into n parts.

(Some partitioning algorithms can repartition more efficiently than computing a new partitioning from scratch.) The default behavior is to simply call this->partition(mesh,n).
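
A hypothetical call site, for example after adaptive refinement has changed the set of active elements (the mesh and partitioner are assumed to exist from earlier setup):

  #include "libmesh/mesh_base.h"
  #include "libmesh/metis_partitioner.h"

  using namespace libMesh;

  void rebalance (MeshBase & mesh, MetisPartitioner & partitioner)
  {
    partitioner.repartition (mesh);       // mesh.n_processors() parts
    // partitioner.repartition (mesh, 8); // or an explicit part count
  }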

Definition at line 116 of file partitioner.C.

References libMesh::Partitioner::_do_repartition(), std::min(), libMesh::MeshBase::n_active_elem(), libMesh::Partitioner::partition_unpartitioned_elements(), libMesh::MeshBase::set_n_partitions(), libMesh::Partitioner::set_node_processor_ids(), libMesh::Partitioner::set_parent_processor_ids(), and libMesh::Partitioner::single_partition().

Referenced by libMesh::Partitioner::partition_range(), and libMesh::Partitioner::repartition().

118 {
119  // we cannot partition into more pieces than we have
120  // active elements!
121  const unsigned int n_parts =
122  static_cast<unsigned int>
123  (std::min(mesh.n_active_elem(), static_cast<dof_id_type>(n)));
124 
125  // Set the number of partitions in the mesh
126  mesh.set_n_partitions()=n_parts;
127 
128  if (n_parts == 1)
129  {
130  this->single_partition (mesh);
131  return;
132  }
133 
134  // First assign a temporary partitioning to any unpartitioned elements
135  Partitioner::partition_unpartitioned_elements(mesh, n_parts);
136 
137  // Call the partitioning function
138  this->_do_repartition(mesh,n_parts);
139 
140  // Set the parent's processor ids
141  Partitioner::set_parent_processor_ids(mesh);
142 
143  // Set the node's processor ids
144  Partitioner::set_node_processor_ids(mesh);
145 }
void libMesh::Partitioner::repartition ( MeshBase & mesh )
[inherited]

Repartitions the MeshBase into mesh.n_processors() parts.

This is provided because some partitioning algorithms can repartition more efficiently than computing a new partitioning from scratch.

Definition at line 109 of file partitioner.C.

References libMesh::ParallelObject::n_processors(), and libMesh::Partitioner::repartition().

110 {
111  this->repartition(mesh,mesh.n_processors());
112 }
void libMesh::Partitioner::set_node_processor_ids ( MeshBase & mesh )
[static, inherited]

This function is called after partitioning to set the processor IDs for the nodes.

By definition, a Node's processor ID is the minimum processor ID for all of the elements which share the node.
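
Stripped of the parallel bookkeeping in the listing below, the ownership rule amounts to the following sketch (not the library implementation; it ignores subactive/inactive elements and interprocessor exchange):

  #include <algorithm>
  #include "libmesh/mesh_base.h"
  #include "libmesh/elem.h"
  #include "libmesh/node.h"
  #include "libmesh/dof_object.h"

  using namespace libMesh;

  void naive_set_node_processor_ids (MeshBase & mesh)
  {
    // Start from "unowned" ...
    for (auto & node : mesh.node_ptr_range())
      node->processor_id() = DofObject::invalid_processor_id;

    // ... then take the minimum over all active elements touching each node.
    for (auto & elem : mesh.active_element_ptr_range())
      for (unsigned int n = 0; n < elem->n_nodes(); ++n)
        elem->node_ptr(n)->processor_id() =
          std::min (elem->node_ptr(n)->processor_id(),
                    elem->processor_id());
  }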

Definition at line 416 of file partitioner.C.

References libMesh::MeshBase::active_element_ptr_range(), libMesh::ParallelObject::comm(), libMesh::DofObject::invalid_processor_id, libMesh::libmesh_assert(), mesh, std::min(), libMesh::MeshTools::n_elem(), libMesh::Elem::n_nodes(), libMesh::ParallelObject::n_processors(), libMesh::Elem::node_ptr(), libMesh::MeshBase::node_ptr_range(), libMesh::MeshBase::node_ref(), libMesh::MeshBase::not_active_elements_begin(), libMesh::MeshBase::not_active_elements_end(), libMesh::ParallelObject::processor_id(), libMesh::DofObject::processor_id(), libMesh::Parallel::Communicator::send_receive(), libMesh::MeshBase::subactive_elements_begin(), libMesh::MeshBase::subactive_elements_end(), libMesh::MeshBase::unpartitioned_elements_begin(), and libMesh::MeshBase::unpartitioned_elements_end().

Referenced by libMesh::UnstructuredMesh::all_first_order(), libMesh::Partitioner::partition(), libMesh::MeshBase::partition(), libMesh::Partitioner::partition_range(), libMesh::XdrIO::read(), libMesh::Partitioner::repartition(), and libMesh::BoundaryInfo::sync().

417 {
418  LOG_SCOPE("set_node_processor_ids()","Partitioner");
419 
420  // This function must be run on all processors at once
421  libmesh_parallel_only(mesh.comm());
422 
423  // If we have any unpartitioned elements at this
424  // stage there is a problem
425  libmesh_assert (MeshTools::n_elem(mesh.unpartitioned_elements_begin(),
426  mesh.unpartitioned_elements_end()) == 0);
427 
428 
429  // const dof_id_type orig_n_local_nodes = mesh.n_local_nodes();
430 
431  // libMesh::err << "[" << mesh.processor_id() << "]: orig_n_local_nodes="
432  // << orig_n_local_nodes << std::endl;
433 
434  // Build up request sets. Each node is currently owned by a processor because
435  // it is connected to an element owned by that processor. However, during the
436  // repartitioning phase that element may have been assigned a new processor id, but
437  // it is still resident on the original processor. We need to know where to look
438  // for new ids before assigning new ids, otherwise we may be asking the wrong processors
439  // for the wrong information.
440  //
441  // The only remaining issue is what to do with unpartitioned nodes. Since they are required
442  // to live on all processors we can simply rely on ourselves to number them properly.
443  std::vector<std::vector<dof_id_type>>
444  requested_node_ids(mesh.n_processors());
445 
446  // Loop over all the nodes, count the ones on each processor. We can skip ourself
447  std::vector<dof_id_type> ghost_nodes_from_proc(mesh.n_processors(), 0);
448 
449  for (auto & node : mesh.node_ptr_range())
450  {
451  libmesh_assert(node);
452  const processor_id_type current_pid = node->processor_id();
453  if (current_pid != mesh.processor_id() &&
454  current_pid != DofObject::invalid_processor_id)
455  {
456  libmesh_assert_less (current_pid, ghost_nodes_from_proc.size());
457  ghost_nodes_from_proc[current_pid]++;
458  }
459  }
460 
461  // We know how many objects live on each processor, so reserve()
462  // space for each.
463  for (processor_id_type pid=0; pid != mesh.n_processors(); ++pid)
464  requested_node_ids[pid].reserve(ghost_nodes_from_proc[pid]);
465 
466  // We need to get the new pid for each node from the processor
467  // which *currently* owns the node. We can safely skip ourself
468  for (auto & node : mesh.node_ptr_range())
469  {
470  libmesh_assert(node);
471  const processor_id_type current_pid = node->processor_id();
472  if (current_pid != mesh.processor_id() &&
473  current_pid != DofObject::invalid_processor_id)
474  {
475  libmesh_assert_less (current_pid, requested_node_ids.size());
476  libmesh_assert_less (requested_node_ids[current_pid].size(),
477  ghost_nodes_from_proc[current_pid]);
478  requested_node_ids[current_pid].push_back(node->id());
479  }
480 
481  // Unset any previously-set node processor ids
482  node->invalidate_processor_id();
483  }
484 
485  // Loop over all the active elements
486  for (auto & elem : mesh.active_element_ptr_range())
487  {
488  libmesh_assert(elem);
489 
490  libmesh_assert_not_equal_to (elem->processor_id(), DofObject::invalid_processor_id);
491 
492  // For each node, set the processor ID to the min of
493  // its current value and this Element's processor id.
494  //
495  // TODO: we would probably get better parallel partitioning if
496  // we did something like "min for even numbered nodes, max for
497  // odd numbered". We'd need to be careful about how that would
498  // affect solution ordering for I/O, though.
499  for (unsigned int n=0; n<elem->n_nodes(); ++n)
500  elem->node_ptr(n)->processor_id() = std::min(elem->node_ptr(n)->processor_id(),
501  elem->processor_id());
502  }
503 
504  // And loop over the subactive elements, but don't reassign
505  // nodes that are already active on another processor.
506  MeshBase::element_iterator sub_it = mesh.subactive_elements_begin();
507  const MeshBase::element_iterator sub_end = mesh.subactive_elements_end();
508 
509  for ( ; sub_it != sub_end; ++sub_it)
510  {
511  Elem * elem = *sub_it;
512  libmesh_assert(elem);
513 
514  libmesh_assert_not_equal_to (elem->processor_id(), DofObject::invalid_processor_id);
515 
516  for (unsigned int n=0; n<elem->n_nodes(); ++n)
517  if (elem->node_ptr(n)->processor_id() == DofObject::invalid_processor_id)
518  elem->node_ptr(n)->processor_id() = elem->processor_id();
519  }
520 
521  // Same for the inactive elements -- we will have already gotten most of these
522  // nodes, *except* for the case of a parent with a subset of children which are
523  // ghost elements. In that case some of the parent nodes will not have been
524  // properly handled yet
525  MeshBase::element_iterator not_it = mesh.not_active_elements_begin();
526  const MeshBase::element_iterator not_end = mesh.not_active_elements_end();
527 
528  for ( ; not_it != not_end; ++not_it)
529  {
530  Elem * elem = *not_it;
531  libmesh_assert(elem);
532 
533  libmesh_assert_not_equal_to (elem->processor_id(), DofObject::invalid_processor_id);
534 
535  for (unsigned int n=0; n<elem->n_nodes(); ++n)
536  if (elem->node_ptr(n)->processor_id() == DofObject::invalid_processor_id)
537  elem->node_ptr(n)->processor_id() = elem->processor_id();
538  }
539 
540  // We can't assert that all nodes are connected to elements, because
541  // a DistributedMesh with NodeConstraints might have pulled in some
542  // remote nodes solely for evaluating those constraints.
543  // MeshTools::libmesh_assert_connected_nodes(mesh);
544 
545  // For such nodes, we'll do a sanity check later when making sure
546  // that we successfully reset their processor ids to something
547  // valid.
548 
549  // Next set node ids from other processors, excluding self
550  for (processor_id_type p=1; p != mesh.n_processors(); ++p)
551  {
552  // Trade my requests with processor procup and procdown
553  processor_id_type procup = cast_int<processor_id_type>
554  ((mesh.processor_id() + p) % mesh.n_processors());
555  processor_id_type procdown = cast_int<processor_id_type>
556  ((mesh.n_processors() + mesh.processor_id() - p) %
557  mesh.n_processors());
558  std::vector<dof_id_type> request_to_fill;
559  mesh.comm().send_receive(procup, requested_node_ids[procup],
560  procdown, request_to_fill);
561 
562  // Fill those requests in-place
563  for (std::size_t i=0; i != request_to_fill.size(); ++i)
564  {
565  Node & node = mesh.node_ref(request_to_fill[i]);
566  const processor_id_type new_pid = node.processor_id();
567 
568  // We may have an invalid processor_id() on nodes that have been
569  // "detached" from coarsened-away elements but that have not yet
570  // themselves been removed.
571  // libmesh_assert_not_equal_to (new_pid, DofObject::invalid_processor_id);
572  // libmesh_assert_less (new_pid, mesh.n_partitions()); // this is the correct test --
573  request_to_fill[i] = new_pid; // the number of partitions may
574  } // not equal the number of processors
575 
576  // Trade back the results
577  std::vector<dof_id_type> filled_request;
578  mesh.comm().send_receive(procdown, request_to_fill,
579  procup, filled_request);
580  libmesh_assert_equal_to (filled_request.size(), requested_node_ids[procup].size());
581 
582  // And copy the id changes we've now been informed of
583  for (std::size_t i=0; i != filled_request.size(); ++i)
584  {
585  Node & node = mesh.node_ref(requested_node_ids[procup][i]);
586 
587  // this is the correct test -- the number of partitions may
588  // not equal the number of processors
589 
590  // But: we may have an invalid processor_id() on nodes that
591  // have been "detached" from coarsened-away elements but
592  // that have not yet themselves been removed.
593  // libmesh_assert_less (filled_request[i], mesh.n_partitions());
594 
595  node.processor_id(cast_int<processor_id_type>(filled_request[i]));
596  }
597  }
598 
599 #ifdef DEBUG
600  MeshTools::libmesh_assert_valid_procids<Node>(mesh);
601 #endif
602 }
void libMesh::Partitioner::set_parent_processor_ids ( MeshBase & mesh )
[static, inherited]

This function is called after partitioning to set the processor IDs for the inactive parent elements.

By convention, a parent's processor ID is the minimum of its children's processor IDs.

Definition at line 256 of file partitioner.C.

References libMesh::MeshBase::active_element_ptr_range(), libMesh::Elem::active_family_tree(), libMesh::MeshBase::ancestor_elements_begin(), libMesh::MeshBase::ancestor_elements_end(), libMesh::Elem::child_ref_range(), libMesh::ParallelObject::comm(), libMesh::Partitioner::communication_blocksize, libMesh::DofObject::id(), libMesh::DofObject::invalid_processor_id, libMesh::DofObject::invalidate_processor_id(), libMesh::MeshBase::is_serial(), libMesh::libmesh_assert(), libMesh::libmesh_ignore(), libMesh::MeshBase::max_elem_id(), std::min(), libMesh::Parallel::Communicator::min(), libMesh::MeshTools::n_elem(), libMesh::Elem::parent(), libMesh::processor_id(), libMesh::DofObject::processor_id(), libMesh::Elem::total_family_tree(), libMesh::MeshBase::unpartitioned_elements_begin(), and libMesh::MeshBase::unpartitioned_elements_end().

Referenced by libMesh::Partitioner::partition(), libMesh::Partitioner::partition_range(), and libMesh::Partitioner::repartition().

257 {
258  // Ignore the parameter when !LIBMESH_ENABLE_AMR
259  libmesh_ignore(mesh);
260 
261  LOG_SCOPE("set_parent_processor_ids()", "Partitioner");
262 
263 #ifdef LIBMESH_ENABLE_AMR
264 
265  // If the mesh is serial we have access to all the elements,
266  // in particular all the active ones. We can therefore set
267  // the parent processor ids indirectly through their children, and
268  // set the subactive processor ids while examining their active
269  // ancestors.
270  // By convention a parent is assigned to the minimum processor
271  // of all its children, and a subactive is assigned to the processor
272  // of its active ancestor.
273  if (mesh.is_serial())
274  {
275  for (auto & child : mesh.active_element_ptr_range())
276  {
277  // First set descendents
278  std::vector<const Elem *> subactive_family;
279  child->total_family_tree(subactive_family);
280  for (std::size_t i = 0; i != subactive_family.size(); ++i)
281  const_cast<Elem *>(subactive_family[i])->processor_id() = child->processor_id();
282 
283  // Then set ancestors
284  Elem * parent = child->parent();
285 
286  while (parent)
287  {
288  // invalidate the parent id, otherwise the min below
289  // will not work if the current parent id is less
290  // than all the children!
291  parent->invalidate_processor_id();
292 
293  for (auto & child : parent->child_ref_range())
294  {
295  libmesh_assert(!child.is_remote());
296  libmesh_assert_not_equal_to (child.processor_id(), DofObject::invalid_processor_id);
297  parent->processor_id() = std::min(parent->processor_id(),
298  child.processor_id());
299  }
300  parent = parent->parent();
301  }
302  }
303  }
304 
305  // When the mesh is parallel we cannot guarantee that parents have access to
306  // all their children.
307  else
308  {
309  // Setting subactive processor ids is easy: we can guarantee
310  // that children have access to all their parents.
311 
312  // Loop over all the active elements in the mesh
313  for (auto & child : mesh.active_element_ptr_range())
314  {
315  std::vector<const Elem *> subactive_family;
316  child->total_family_tree(subactive_family);
317  for (std::size_t i = 0; i != subactive_family.size(); ++i)
318  const_cast<Elem *>(subactive_family[i])->processor_id() = child->processor_id();
319  }
320 
321  // When the mesh is parallel we cannot guarantee that parents have access to
322  // all their children.
323 
324  // We will use a brute-force approach here. Each processor finds its parent
325  // elements and sets the parent pid to the minimum of its
326  // semilocal descendants.
327  // A global reduction is then performed to make sure the true minimum is found.
328  // As noted, this is required because we cannot guarantee that a parent has
329  // access to all its children on any single processor.
330  libmesh_parallel_only(mesh.comm());
331  libmesh_assert(MeshTools::n_elem(mesh.unpartitioned_elements_begin(),
332  mesh.unpartitioned_elements_end()) == 0);
333 
334  const dof_id_type max_elem_id = mesh.max_elem_id();
335 
336  std::vector<processor_id_type>
337  parent_processor_ids (std::min(communication_blocksize,
338  max_elem_id));
339 
340  for (dof_id_type blk=0, last_elem_id=0; last_elem_id<max_elem_id; blk++)
341  {
342  last_elem_id =
343  std::min(static_cast<dof_id_type>((blk+1)*communication_blocksize),
344  max_elem_id);
345  const dof_id_type first_elem_id = blk*communication_blocksize;
346 
347  std::fill (parent_processor_ids.begin(),
348  parent_processor_ids.end(),
349  DofObject::invalid_processor_id);
350 
351  // first build up local contributions to parent_processor_ids
352  MeshBase::element_iterator not_it = mesh.ancestor_elements_begin();
353  const MeshBase::element_iterator not_end = mesh.ancestor_elements_end();
354 
355  bool have_parent_in_block = false;
356 
357  for ( ; not_it != not_end; ++not_it)
358  {
359  Elem * parent = *not_it;
360 
361  const dof_id_type parent_idx = parent->id();
362  libmesh_assert_less (parent_idx, max_elem_id);
363 
364  if ((parent_idx >= first_elem_id) &&
365  (parent_idx < last_elem_id))
366  {
367  have_parent_in_block = true;
368  processor_id_type parent_pid = DofObject::invalid_processor_id;
369 
370  std::vector<const Elem *> active_family;
371  parent->active_family_tree(active_family);
372  for (std::size_t i = 0; i != active_family.size(); ++i)
373  parent_pid = std::min (parent_pid, active_family[i]->processor_id());
374 
375  const dof_id_type packed_idx = parent_idx - first_elem_id;
376  libmesh_assert_less (packed_idx, parent_processor_ids.size());
377 
378  parent_processor_ids[packed_idx] = parent_pid;
379  }
380  }
381 
382  // then find the global minimum
383  mesh.comm().min (parent_processor_ids);
384 
385  // and assign the ids, if we have a parent in this block.
386  if (have_parent_in_block)
387  for (not_it = mesh.ancestor_elements_begin();
388  not_it != not_end; ++not_it)
389  {
390  Elem * parent = *not_it;
391 
392  const dof_id_type parent_idx = parent->id();
393 
394  if ((parent_idx >= first_elem_id) &&
395  (parent_idx < last_elem_id))
396  {
397  const dof_id_type packed_idx = parent_idx - first_elem_id;
398  libmesh_assert_less (packed_idx, parent_processor_ids.size());
399 
400  const processor_id_type parent_pid =
401  parent_processor_ids[packed_idx];
402 
403  libmesh_assert_not_equal_to (parent_pid, DofObject::invalid_processor_id);
404 
405  parent->processor_id() = parent_pid;
406  }
407  }
408  }
409  }
410 
411 #endif // LIBMESH_ENABLE_AMR
412 }
void libMesh::Partitioner::single_partition ( MeshBase & mesh )
[protected, inherited]

Trivially "partitions" the mesh for one processor.

Simply loops through the elements and assigns all of them to processor 0. It is provided as a separate function so that derived classes may use it without reimplementing it.

Definition at line 151 of file partitioner.C.

References libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), and libMesh::Partitioner::single_partition_range().

Referenced by libMesh::SubdomainPartitioner::_do_partition(), libMesh::Partitioner::attach_weights(), libMesh::Partitioner::partition(), and libMesh::Partitioner::repartition().

152 {
153  this->single_partition_range(mesh.elements_begin(),
154  mesh.elements_end());
155 }
void libMesh::Partitioner::single_partition_range ( MeshBase::element_iterator it, MeshBase::element_iterator end )
[protected, inherited]

Slightly generalized version of single_partition which acts on a range of elements defined by the pair of iterators (it, end).

Definition at line 159 of file partitioner.C.

References end, libMesh::Elem::n_nodes(), libMesh::Elem::node_ptr(), and libMesh::DofObject::processor_id().

Referenced by libMesh::Partitioner::attach_weights(), libMesh::LinearPartitioner::partition_range(), libMesh::MappedSubdomainPartitioner::partition_range(), libMesh::CentroidPartitioner::partition_range(), and libMesh::Partitioner::single_partition().

161 {
162  LOG_SCOPE("single_partition_range()", "Partitioner");
163 
164  for ( ; it != end; ++it)
165  {
166  Elem * elem = *it;
167  elem->processor_id() = 0;
168 
169  // Assign all this element's nodes to processor 0 as well.
170  for (unsigned int n=0; n<elem->n_nodes(); ++n)
171  elem->node_ptr(n)->processor_id() = 0;
172  }
173 }

Member Data Documentation

ErrorVector * libMesh::Partitioner::_weights
[protected, inherited]

The weights that might be used for partitioning.

Definition at line 216 of file partitioner.h.

Referenced by attach_weights().

const dof_id_type libMesh::Partitioner::communication_blocksize = 1000000
[static, protected, inherited]

The blocksize to use when doing blocked parallel communication.

This limits the maximum vector size which can be used in a single communication step.
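
A sketch of the blocked pattern this enables; the helper below is hypothetical, and it uses a local blocksize constant because communication_blocksize itself is protected:

  #include <algorithm>
  #include <vector>
  #include "libmesh/mesh_base.h"
  #include "libmesh/dof_object.h"
  #include "libmesh/parallel.h"

  using namespace libMesh;

  void blocked_min_example (MeshBase & mesh)
  {
    const dof_id_type blocksize = 1000000; // mirrors communication_blocksize
    const dof_id_type max_elem_id = mesh.max_elem_id();

    for (dof_id_type first_id = 0; first_id < max_elem_id; first_id += blocksize)
      {
        const dof_id_type last_id =
          std::min (static_cast<dof_id_type>(first_id + blocksize), max_elem_id);

        // One bounded-size buffer per block instead of one entry per element id.
        std::vector<processor_id_type> block (last_id - first_id,
                                              DofObject::invalid_processor_id);

        // ... fill local contributions for element ids in [first_id, last_id) ...

        mesh.comm().min (block); // at most `blocksize` entries per reduction
      }
  }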

Definition at line 211 of file partitioner.h.

Referenced by libMesh::Partitioner::set_parent_processor_ids().


The documentation for this class was generated from the following files:

metis_partitioner.h
metis_partitioner.C