libMesh
libMesh::MeshCommunication Class Reference

This is the MeshCommunication class. More...

#include <mesh_communication.h>

Public Member Functions

 MeshCommunication ()
 Constructor. More...
 
 ~MeshCommunication ()
 Destructor. More...
 
void clear ()
 Clears all data structures and resets to a pristine state. More...
 
void broadcast (MeshBase &) const
 This method takes a mesh (which is assumed to reside on processor 0) and broadcasts it to all the other processors. More...
 
void redistribute (DistributedMesh &mesh, bool newly_coarsened_only=false) const
 This method takes a parallel distributed mesh and redistributes the elements. More...
 
void gather_neighboring_elements (DistributedMesh &) const
 Finds all the processors that may contain elements that neighbor my elements. More...
 
void send_coarse_ghosts (MeshBase &) const
 Examine a just-coarsened mesh, and for any newly-coarsened elements, send the associated ghosted elements to the processor which needs them. More...
 
void gather (const processor_id_type root_id, DistributedMesh &) const
 This method takes an input DistributedMesh which may be distributed among all the processors. More...
 
void allgather (DistributedMesh &mesh) const
 This method takes an input DistributedMesh which may be distributed among all the processors. More...
 
void delete_remote_elements (DistributedMesh &, const std::set< Elem * > &) const
 This method takes an input DistributedMesh which may be distributed among all the processors. More...
 
void assign_global_indices (MeshBase &) const
 This method assigns globally unique, partition-agnostic indices to the nodes and elements in the mesh. More...
 
void check_for_duplicate_global_indices (MeshBase &) const
 Throw an error if we have any index clashes in the numbering used by assign_global_indices. More...
 
template<typename ForwardIterator >
void find_global_indices (const Parallel::Communicator &communicator, const libMesh::BoundingBox &, const ForwardIterator &, const ForwardIterator &, std::vector< dof_id_type > &) const
 This method determines a globally unique, partition-agnostic index for each object in the input range. More...
 
void make_elems_parallel_consistent (MeshBase &)
 Copy ids of ghost elements from their local processors. More...
 
void make_p_levels_parallel_consistent (MeshBase &)
 Copy p levels of ghost elements from their local processors. More...
 
void make_node_ids_parallel_consistent (MeshBase &)
 Assuming all ids on local nodes are globally unique, and assuming all processor ids are parallel consistent, this function makes all other ids parallel consistent. More...
 
void make_node_unique_ids_parallel_consistent (MeshBase &)
 Assuming all unique_ids on local nodes are globally unique, and assuming all processor ids are parallel consistent, this function makes all ghost unique_ids parallel consistent. More...
 
void make_node_proc_ids_parallel_consistent (MeshBase &)
 Assuming all processor ids on nodes touching local elements are parallel consistent, this function makes all other processor ids parallel consistent as well. More...
 
void make_new_node_proc_ids_parallel_consistent (MeshBase &)
 Assuming all processor ids on nodes touching local elements are parallel consistent, this function makes processor ids on new nodes on other processors parallel consistent as well. More...
 
void make_nodes_parallel_consistent (MeshBase &)
 Copy processor_ids and ids on ghost nodes from their local processors. More...
 
void make_new_nodes_parallel_consistent (MeshBase &)
 Copy processor_ids and ids on new nodes from their local processors. More...
 

Detailed Description

This is the MeshCommunication class.

It handles all the details of communicating mesh information from one processor to another. All parallelization of the Mesh data structures is done via this class.

Author
Benjamin S. Kirk
Date
2003

Definition at line 47 of file mesh_communication.h.

Constructor & Destructor Documentation

libMesh::MeshCommunication::MeshCommunication ( )

Constructor.

Definition at line 54 of file mesh_communication.h.

54 {}
libMesh::MeshCommunication::~MeshCommunication ( )

Destructor.

Definition at line 59 of file mesh_communication.h.

59 {}

Member Function Documentation

void libMesh::MeshCommunication::allgather ( DistributedMesh & mesh ) const

This method takes an input DistributedMesh which may be distributed among all the processors.

Each processor then sends its local nodes and elements to the other processors. The end result is that a previously distributed DistributedMesh will be serialized on each processor. Since this method is collective it must be called by all processors.
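
The serialization that allgather() performs can be illustrated without MPI: after the call, every rank holds the union of all ranks' local data. A minimal std-only sketch (simulated ranks and integer element ids are assumptions for illustration, not the libMesh API):

```cpp
#include <set>
#include <vector>

// Simulate allgather(): after the call, every rank holds the union of
// all ranks' local element ids (a serialized copy of the mesh).
std::vector<std::set<int>> allgather_ids(const std::vector<std::set<int>> & local)
{
    std::set<int> merged;
    for (const auto & ids : local)
        merged.insert(ids.begin(), ids.end());
    // Every rank receives the same serialized set.
    return std::vector<std::set<int>>(local.size(), merged);
}
```

With three "ranks" owning {0,1}, {2,3}, {4,5}, every entry of the result is {0,1,2,3,4,5}, mirroring how a previously distributed DistributedMesh becomes serialized on each processor.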

Definition at line 132 of file mesh_communication.h.

References assign_global_indices(), check_for_duplicate_global_indices(), libMesh::connect_children(), libMesh::connect_families(), delete_remote_elements(), find_global_indices(), gather(), libMesh::DofObject::invalid_processor_id, make_elems_parallel_consistent(), make_new_node_proc_ids_parallel_consistent(), make_new_nodes_parallel_consistent(), make_node_ids_parallel_consistent(), make_node_proc_ids_parallel_consistent(), make_node_unique_ids_parallel_consistent(), make_nodes_parallel_consistent(), make_p_levels_parallel_consistent(), mesh, libMesh::query_ghosting_functors(), and libMesh::reconnect_nodes().

Referenced by libMesh::DistributedMesh::allgather().

void libMesh::MeshCommunication::assign_global_indices ( MeshBase & mesh ) const

This method assigns globally unique, partition-agnostic indices to the nodes and elements in the mesh.

The approach is to compute the Hilbert space-filling curve key and use its value to assign an index in [0,N_global). Since the Hilbert key is unique for each spatial location, two objects occupying the same location will be assigned the same global id. Thus, this method can also be useful for identifying duplicate nodes which may occur during parallel refinement.
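
The sort-and-rank scheme can be sketched without an actual Hilbert curve: any key that depends only on spatial location behaves the same way. Sort the keys, and an object's global index is its key's position in the sorted order, so co-located objects share an index. A std-only sketch (scalar stand-in keys are an assumption; the real code uses Parallel::DofObjectKey and a parallel sort):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Assign a partition-agnostic index to each object from a
// location-based key: index = rank of the key in the sorted key set.
// Objects at the same location share a key, hence share an index.
std::vector<std::size_t> assign_indices(const std::vector<double> & keys)
{
    std::vector<double> sorted(keys);
    std::sort(sorted.begin(), sorted.end());
    std::vector<std::size_t> index(keys.size());
    for (std::size_t i = 0; i < keys.size(); ++i)
        index[i] = std::lower_bound(sorted.begin(), sorted.end(), keys[i])
                   - sorted.begin();
    return index;
}
```

For keys {7.0, 2.5, 4.0, 2.5} the two objects with key 2.5 receive the same index, which is exactly the property used to identify duplicate nodes after parallel refinement.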

Definition at line 169 of file mesh_communication_global_indices.C.

References libMesh::Parallel::Sort< KeyType, IdxType >::bin(), libMesh::ParallelObject::comm(), libMesh::MeshTools::create_nodal_bounding_box(), distance(), libMesh::MeshBase::element_ptr_range(), end, libMesh::MeshTools::Generation::Private::idx(), libMesh::libmesh_assert(), libMesh::MeshBase::local_elements_begin(), libMesh::MeshBase::local_elements_end(), libMesh::MeshBase::local_nodes_begin(), libMesh::MeshBase::local_nodes_end(), libMesh::MeshBase::n_elem(), libMesh::MeshBase::n_nodes(), libMesh::MeshBase::node_ptr_range(), libMesh::Threads::parallel_for(), and libMesh::Parallel::Sort< KeyType, IdxType >::sort().

Referenced by allgather(), and libMesh::MeshTools::Private::globally_renumber_nodes_and_elements().

170 {
171  LOG_SCOPE ("assign_global_indices()", "MeshCommunication");
172 
173  // This method determines partition-agnostic global indices
174  // for nodes and elements.
175 
176  // Algorithm:
177  // (1) compute the Hilbert key for each local node/element
178  // (2) perform a parallel sort of the Hilbert key
179  // (3) get the min/max value on each processor
180  // (4) determine the position in the global ranking for
181  // each local object
182 
183  const Parallel::Communicator & communicator (mesh.comm());
184 
185  // Global bounding box. We choose the nodal bounding box for
186  // backwards compatibility; the element bounding box may be looser
187  // on curved elements.
188  BoundingBox bbox =
189  MeshTools::create_nodal_bounding_box (mesh);
190 
191  //-------------------------------------------------------------
192  // (1) compute Hilbert keys
193  std::vector<Parallel::DofObjectKey>
194  node_keys, elem_keys;
195 
196  {
197  // Nodes first
198  {
199  ConstNodeRange nr (mesh.local_nodes_begin(),
200  mesh.local_nodes_end());
201  node_keys.resize (nr.size());
202  Threads::parallel_for (nr, ComputeHilbertKeys (bbox, node_keys));
203 
204  // // It's O(N^2) to check that these keys don't duplicate before the
205  // // sort...
206  // MeshBase::const_node_iterator nodei = mesh.local_nodes_begin();
207  // for (std::size_t i = 0; i != node_keys.size(); ++i, ++nodei)
208  // {
209  // MeshBase::const_node_iterator nodej = mesh.local_nodes_begin();
210  // for (std::size_t j = 0; j != i; ++j, ++nodej)
211  // {
212  // if (node_keys[i] == node_keys[j])
213  // {
214  // CFixBitVec icoords[3], jcoords[3];
215  // get_hilbert_coords(**nodej, bbox, jcoords);
216  // libMesh::err <<
217  // "node " << (*nodej)->id() << ", " <<
218  // *(Point *)(*nodej) << " has HilbertIndices " <<
219  // node_keys[j] << std::endl;
220  // get_hilbert_coords(**nodei, bbox, icoords);
221  // libMesh::err <<
222  // "node " << (*nodei)->id() << ", " <<
223  // *(Point *)(*nodei) << " has HilbertIndices " <<
224  // node_keys[i] << std::endl;
225  // libmesh_error_msg("Error: nodes with duplicate Hilbert keys!");
226  // }
227  // }
228  // }
229  }
230 
231  // Elements next
232  {
233  ConstElemRange er (mesh.local_elements_begin(),
234  mesh.local_elements_end());
235  elem_keys.resize (er.size());
236  Threads::parallel_for (er, ComputeHilbertKeys (bbox, elem_keys));
237 
238  // // For elements, the keys can be (and in the case of TRI, are
239  // // expected to be) duplicates, but only if the elements are at
240  // // different levels
241  // MeshBase::const_element_iterator elemi = mesh.local_elements_begin();
242  // for (std::size_t i = 0; i != elem_keys.size(); ++i, ++elemi)
243  // {
244  // MeshBase::const_element_iterator elemj = mesh.local_elements_begin();
245  // for (std::size_t j = 0; j != i; ++j, ++elemj)
246  // {
247  // if ((elem_keys[i] == elem_keys[j]) &&
248  // ((*elemi)->level() == (*elemj)->level()))
249  // {
250  // libMesh::err <<
251  // "level " << (*elemj)->level() << " elem\n" <<
252  // (**elemj) << " centroid " <<
253  // (*elemj)->centroid() << " has HilbertIndices " <<
254  // elem_keys[j] << " or " <<
255  // get_hilbert_index((*elemj), bbox) <<
256  // std::endl;
257  // libMesh::err <<
258  // "level " << (*elemi)->level() << " elem\n" <<
259  // (**elemi) << " centroid " <<
260  // (*elemi)->centroid() << " has HilbertIndices " <<
261  // elem_keys[i] << " or " <<
262  // get_hilbert_index((*elemi), bbox) <<
263  // std::endl;
264  // libmesh_error_msg("Error: level " << (*elemi)->level() << " elements with duplicate Hilbert keys!");
265  // }
266  // }
267  // }
268  }
269  } // done computing Hilbert keys
270 
271 
272 
273  //-------------------------------------------------------------
274  // (2) parallel sort the Hilbert keys
275  Parallel::Sort<Parallel::DofObjectKey> node_sorter (communicator,
276  node_keys);
277  node_sorter.sort(); /* done with node_keys */ //node_keys.clear();
278 
279  const std::vector<Parallel::DofObjectKey> & my_node_bin =
280  node_sorter.bin();
281 
282  Parallel::Sort<Parallel::DofObjectKey> elem_sorter (communicator,
283  elem_keys);
284  elem_sorter.sort(); /* done with elem_keys */ //elem_keys.clear();
285 
286  const std::vector<Parallel::DofObjectKey> & my_elem_bin =
287  elem_sorter.bin();
288 
289 
290 
291  //-------------------------------------------------------------
292  // (3) get the max value on each processor
293  std::vector<Parallel::DofObjectKey>
294  node_upper_bounds(communicator.size()),
295  elem_upper_bounds(communicator.size());
296 
297  { // limit scope of temporaries
298  std::vector<Parallel::DofObjectKey> recvbuf(2*communicator.size());
299  std::vector<unsigned short int> /* do not use a vector of bools here since it is not always so! */
300  empty_nodes (communicator.size()),
301  empty_elem (communicator.size());
302  std::vector<Parallel::DofObjectKey> my_max(2);
303 
304  communicator.allgather (static_cast<unsigned short int>(my_node_bin.empty()), empty_nodes);
305  communicator.allgather (static_cast<unsigned short int>(my_elem_bin.empty()), empty_elem);
306 
307  if (!my_node_bin.empty()) my_max[0] = my_node_bin.back();
308  if (!my_elem_bin.empty()) my_max[1] = my_elem_bin.back();
309 
310  communicator.allgather (my_max, /* identical_buffer_sizes = */ true);
311 
312  // Be careful here. The *_upper_bounds will be used to find the processor
313  // a given object belongs to. So, if a processor contains no objects (possible!)
314  // then copy the bound from the lower processor id.
315  for (processor_id_type p=0; p<communicator.size(); p++)
316  {
317  node_upper_bounds[p] = my_max[2*p+0];
318  elem_upper_bounds[p] = my_max[2*p+1];
319 
320  if (p > 0) // default hilbert index value is the OK upper bound for processor 0.
321  {
322  if (empty_nodes[p]) node_upper_bounds[p] = node_upper_bounds[p-1];
323  if (empty_elem[p]) elem_upper_bounds[p] = elem_upper_bounds[p-1];
324  }
325  }
326  }
327 
328 
329 
330  //-------------------------------------------------------------
331  // (4) determine the position in the global ranking for
332  // each local object
333  {
334  //----------------------------------------------
335  // Nodes first -- all nodes, not just local ones
336  {
337  // Request sets to send to each processor
338  std::vector<std::vector<Parallel::DofObjectKey>>
339  requested_ids (communicator.size());
340  // Results to gather from each processor
341  std::vector<std::vector<dof_id_type>>
342  filled_request (communicator.size());
343 
344  // build up list of requests
345  for (const auto & node : mesh.node_ptr_range())
346  {
347  libmesh_assert(node);
348  const Parallel::DofObjectKey hi =
349  get_hilbert_index (node, bbox);
350  const processor_id_type pid =
351  cast_int<processor_id_type>
352  (std::distance (node_upper_bounds.begin(),
353  std::lower_bound(node_upper_bounds.begin(),
354  node_upper_bounds.end(),
355  hi)));
356 
357  libmesh_assert_less (pid, communicator.size());
358 
359  requested_ids[pid].push_back(hi);
360  }
361 
362  // The number of objects in my_node_bin on each processor
363  std::vector<dof_id_type> node_bin_sizes(communicator.size());
364  communicator.allgather (static_cast<dof_id_type>(my_node_bin.size()), node_bin_sizes);
365 
366  // The offset of my first global index
367  dof_id_type my_offset = 0;
368  for (processor_id_type pid=0; pid<communicator.rank(); pid++)
369  my_offset += node_bin_sizes[pid];
370 
371  // start with pid=0, so that we will trade with ourself
372  for (processor_id_type pid=0; pid<communicator.size(); pid++)
373  {
374  // Trade my requests with processor procup and procdown
375  const processor_id_type procup = cast_int<processor_id_type>
376  ((communicator.rank() + pid) % communicator.size());
377  const processor_id_type procdown = cast_int<processor_id_type>
378  ((communicator.size() + communicator.rank() - pid) %
379  communicator.size());
380 
381  std::vector<Parallel::DofObjectKey> request_to_fill;
382  communicator.send_receive(procup, requested_ids[procup],
383  procdown, request_to_fill);
384 
385  // Fill the requests
386  std::vector<dof_id_type> global_ids; global_ids.reserve(request_to_fill.size());
387  for (std::size_t idx=0; idx<request_to_fill.size(); idx++)
388  {
389  const Parallel::DofObjectKey & hi = request_to_fill[idx];
390  libmesh_assert_less_equal (hi, node_upper_bounds[communicator.rank()]);
391 
392  // find the requested index in my node bin
393  std::vector<Parallel::DofObjectKey>::const_iterator pos =
394  std::lower_bound (my_node_bin.begin(), my_node_bin.end(), hi);
395  libmesh_assert (pos != my_node_bin.end());
396  libmesh_assert_equal_to (*pos, hi);
397 
398  // Finally, assign the global index based off the position of the index
399  // in my array, properly offset.
400  global_ids.push_back(cast_int<dof_id_type>(std::distance(my_node_bin.begin(), pos) + my_offset));
401  }
402 
403  // and trade back
404  communicator.send_receive (procdown, global_ids,
405  procup, filled_request[procup]);
406  }
407 
408  // We now have all the filled requests, so we can loop through our
409  // nodes once and assign the global index to each one.
410  {
411  std::vector<std::vector<dof_id_type>::const_iterator>
412  next_obj_on_proc; next_obj_on_proc.reserve(communicator.size());
413  for (processor_id_type pid=0; pid<communicator.size(); pid++)
414  next_obj_on_proc.push_back(filled_request[pid].begin());
415 
416  for (auto & node : mesh.node_ptr_range())
417  {
418  libmesh_assert(node);
419  const Parallel::DofObjectKey hi =
420  get_hilbert_index (node, bbox);
421  const processor_id_type pid =
422  cast_int<processor_id_type>
423  (std::distance (node_upper_bounds.begin(),
424  std::lower_bound(node_upper_bounds.begin(),
425  node_upper_bounds.end(),
426  hi)));
427 
428  libmesh_assert_less (pid, communicator.size());
429  libmesh_assert (next_obj_on_proc[pid] != filled_request[pid].end());
430 
431  const dof_id_type global_index = *next_obj_on_proc[pid];
432  libmesh_assert_less (global_index, mesh.n_nodes());
433  node->set_id() = global_index;
434 
435  ++next_obj_on_proc[pid];
436  }
437  }
438  }
439 
440  //---------------------------------------------------
441  // elements next -- all elements, not just local ones
442  {
443  // Request sets to send to each processor
444  std::vector<std::vector<Parallel::DofObjectKey>>
445  requested_ids (communicator.size());
446  // Results to gather from each processor
447  std::vector<std::vector<dof_id_type>>
448  filled_request (communicator.size());
449 
450  for (const auto & elem : mesh.element_ptr_range())
451  {
452  libmesh_assert(elem);
453  const Parallel::DofObjectKey hi =
454  get_hilbert_index (elem, bbox);
455  const processor_id_type pid =
456  cast_int<processor_id_type>
457  (std::distance (elem_upper_bounds.begin(),
458  std::lower_bound(elem_upper_bounds.begin(),
459  elem_upper_bounds.end(),
460  hi)));
461 
462  libmesh_assert_less (pid, communicator.size());
463 
464  requested_ids[pid].push_back(hi);
465  }
466 
467  // The number of objects in my_elem_bin on each processor
468  std::vector<dof_id_type> elem_bin_sizes(communicator.size());
469  communicator.allgather (static_cast<dof_id_type>(my_elem_bin.size()), elem_bin_sizes);
470 
471  // The offset of my first global index
472  dof_id_type my_offset = 0;
473  for (processor_id_type pid=0; pid<communicator.rank(); pid++)
474  my_offset += elem_bin_sizes[pid];
475 
476  // start with pid=0, so that we will trade with ourself
477  for (processor_id_type pid=0; pid<communicator.size(); pid++)
478  {
479  // Trade my requests with processor procup and procdown
480  const processor_id_type procup = cast_int<processor_id_type>
481  ((communicator.rank() + pid) % communicator.size());
482  const processor_id_type procdown = cast_int<processor_id_type>
483  ((communicator.size() + communicator.rank() - pid) %
484  communicator.size());
485 
486  std::vector<Parallel::DofObjectKey> request_to_fill;
487  communicator.send_receive(procup, requested_ids[procup],
488  procdown, request_to_fill);
489 
490  // Fill the requests
491  std::vector<dof_id_type> global_ids; global_ids.reserve(request_to_fill.size());
492  for (std::size_t idx=0; idx<request_to_fill.size(); idx++)
493  {
494  const Parallel::DofObjectKey & hi = request_to_fill[idx];
495  libmesh_assert_less_equal (hi, elem_upper_bounds[communicator.rank()]);
496 
497  // find the requested index in my elem bin
498  std::vector<Parallel::DofObjectKey>::const_iterator pos =
499  std::lower_bound (my_elem_bin.begin(), my_elem_bin.end(), hi);
500  libmesh_assert (pos != my_elem_bin.end());
501  libmesh_assert_equal_to (*pos, hi);
502 
503  // Finally, assign the global index based off the position of the index
504  // in my array, properly offset.
505  global_ids.push_back (cast_int<dof_id_type>(std::distance(my_elem_bin.begin(), pos) + my_offset));
506  }
507 
508  // and trade back
509  communicator.send_receive (procdown, global_ids,
510  procup, filled_request[procup]);
511  }
512 
513  // We now have all the filled requests, so we can loop through our
514  // elements once and assign the global index to each one.
515  {
516  std::vector<std::vector<dof_id_type>::const_iterator>
517  next_obj_on_proc; next_obj_on_proc.reserve(communicator.size());
518  for (processor_id_type pid=0; pid<communicator.size(); pid++)
519  next_obj_on_proc.push_back(filled_request[pid].begin());
520 
521  for (auto & elem : mesh.element_ptr_range())
522  {
523  libmesh_assert(elem);
524  const Parallel::DofObjectKey hi =
525  get_hilbert_index (elem, bbox);
526  const processor_id_type pid =
527  cast_int<processor_id_type>
528  (std::distance (elem_upper_bounds.begin(),
529  std::lower_bound(elem_upper_bounds.begin(),
530  elem_upper_bounds.end(),
531  hi)));
532 
533  libmesh_assert_less (pid, communicator.size());
534  libmesh_assert (next_obj_on_proc[pid] != filled_request[pid].end());
535 
536  const dof_id_type global_index = *next_obj_on_proc[pid];
537  libmesh_assert_less (global_index, mesh.n_elem());
538  elem->set_id() = global_index;
539 
540  ++next_obj_on_proc[pid];
541  }
542  }
543  }
544  }
545 }
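
The request-exchange loop above uses a symmetric ring schedule: at step pid, every rank sends to procup and receives from procdown, and step 0 trades with self. A small sketch verifying the pairing property (plain ints stand in for processor_id_type):

```cpp
// At a given step, rank r sends to procup(r) and receives from procdown(r).
// The schedule is consistent: if r sends to s, then s receives from r,
// i.e. procdown(procup(r)) == r for every rank and step.
int procup(int rank, int step, int nranks)   { return (rank + step) % nranks; }
int procdown(int rank, int step, int nranks) { return (nranks + rank - step) % nranks; }

bool schedule_is_consistent(int nranks)
{
    for (int step = 0; step < nranks; ++step)
        for (int r = 0; r < nranks; ++r)
        {
            if (procdown(procup(r, step, nranks), step, nranks) != r)
                return false;
            // Step 0 trades with self, matching the comment in the code.
            if (step == 0 && procup(r, 0, nranks) != r)
                return false;
        }
    return true;
}
```

Each rank therefore posts exactly one send and one receive per step, and after nranks steps every pair of ranks has exchanged requests exactly once.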
void libMesh::MeshCommunication::broadcast ( MeshBase & mesh ) const

This method takes a mesh (which is assumed to reside on processor 0) and broadcasts it to all the other processors.

It also broadcasts any boundary information the mesh has associated with it.
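
Internally the broadcast relies on packed ranges (see broadcast_packed_range in the references): the mesh is packed into flat buffers on processor 0 and unpacked on every receiving rank. The round trip can be sketched without MPI; the Node struct here is a hypothetical stand-in, not libMesh's Node:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical node type standing in for libMesh's Node.
struct Node { double x, y, z; };

// Pack nodes into a flat buffer suitable for a single broadcast.
std::vector<double> pack(const std::vector<Node> & nodes)
{
    std::vector<double> buf;
    for (const Node & n : nodes)
    {
        buf.push_back(n.x);
        buf.push_back(n.y);
        buf.push_back(n.z);
    }
    return buf;
}

// Unpack the broadcast buffer back into nodes on a receiving rank.
std::vector<Node> unpack(const std::vector<double> & buf)
{
    std::vector<Node> nodes;
    for (std::size_t i = 0; i + 2 < buf.size(); i += 3)
        nodes.push_back(Node{buf[i], buf[i + 1], buf[i + 2]});
    return nodes;
}
```

In the real implementation the "broadcast" step is an MPI broadcast of the packed buffer, so every rank reconstructs an identical copy of the rank-0 mesh.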

Definition at line 1091 of file mesh_communication.C.

References libMesh::Parallel::Communicator::broadcast(), libMesh::Parallel::Communicator::broadcast_packed_range(), libMesh::MeshBase::cache_elem_dims(), libMesh::MeshBase::clear(), libMesh::MeshBase::clear_point_locator(), libMesh::ParallelObject::comm(), libMesh::MeshBase::get_boundary_info(), libMesh::MeshBase::level_elements_begin(), libMesh::MeshBase::level_elements_end(), libMesh::libmesh_assert(), mesh, libMesh::MeshBase::n_elem(), libMesh::MeshTools::n_levels(), libMesh::MeshBase::n_nodes(), libMesh::ParallelObject::n_processors(), libMesh::MeshBase::nodes_begin(), libMesh::MeshBase::nodes_end(), libMesh::MeshTools::paranoid_n_levels(), libMesh::ParallelObject::processor_id(), libMesh::BoundaryInfo::set_nodeset_name_map(), libMesh::BoundaryInfo::set_sideset_name_map(), libMesh::MeshBase::set_subdomain_name_map(), and libMesh::Parallel::Communicator::verify().

Referenced by libMesh::NameBasedIO::read(), and libMesh::CheckpointIO::read().

1092 {
1093  // no MPI == one processor, no need for this method...
1094  return;
1095 }
void libMesh::MeshCommunication::check_for_duplicate_global_indices ( MeshBase & mesh ) const

Throw an error if we have any index clashes in the numbering used by assign_global_indices.

Definition at line 554 of file mesh_communication_global_indices.C.

References libMesh::MeshTools::create_nodal_bounding_box(), libMesh::err, libMesh::MeshBase::local_elements_begin(), libMesh::MeshBase::local_elements_end(), libMesh::MeshBase::local_nodes_begin(), libMesh::MeshBase::local_nodes_end(), and libMesh::Threads::parallel_for().

Referenced by allgather().

555 {
556  LOG_SCOPE ("check_for_duplicate_global_indices()", "MeshCommunication");
557 
558  // Global bounding box. We choose the nodal bounding box for
559  // backwards compatibility; the element bounding box may be looser
560  // on curved elements.
561  BoundingBox bbox =
562  MeshTools::create_nodal_bounding_box (mesh);
563 
564 
565  std::vector<Parallel::DofObjectKey>
566  node_keys, elem_keys;
567 
568  {
569  // Nodes first
570  {
571  ConstNodeRange nr (mesh.local_nodes_begin(),
572  mesh.local_nodes_end());
573  node_keys.resize (nr.size());
574  Threads::parallel_for (nr, ComputeHilbertKeys (bbox, node_keys));
575 
576  // It's O(N^2) to check that these keys don't duplicate before the
577  // sort...
578  MeshBase::const_node_iterator nodei = mesh.local_nodes_begin();
579  for (std::size_t i = 0; i != node_keys.size(); ++i, ++nodei)
580  {
581  MeshBase::const_node_iterator nodej = mesh.local_nodes_begin();
582  for (std::size_t j = 0; j != i; ++j, ++nodej)
583  {
584  if (node_keys[i] == node_keys[j])
585  {
586  CFixBitVec icoords[3], jcoords[3];
587  get_hilbert_coords(**nodej, bbox, jcoords);
588  libMesh::err <<
589  "node " << (*nodej)->id() << ", " <<
590  *(Point *)(*nodej) << " has HilbertIndices " <<
591  node_keys[j] << std::endl;
592  get_hilbert_coords(**nodei, bbox, icoords);
593  libMesh::err <<
594  "node " << (*nodei)->id() << ", " <<
595  *(Point *)(*nodei) << " has HilbertIndices " <<
596  node_keys[i] << std::endl;
597  libmesh_error_msg("Error: nodes with duplicate Hilbert keys!");
598  }
599  }
600  }
601  }
602 
603  // Elements next
604  {
605  ConstElemRange er (mesh.local_elements_begin(),
606  mesh.local_elements_end());
607  elem_keys.resize (er.size());
608  Threads::parallel_for (er, ComputeHilbertKeys (bbox, elem_keys));
609 
610  // For elements, the keys can be (and in the case of TRI, are
611  // expected to be) duplicates, but only if the elements are at
612  // different levels
613  MeshBase::const_element_iterator elemi = mesh.local_elements_begin();
614  for (std::size_t i = 0; i != elem_keys.size(); ++i, ++elemi)
615  {
616  MeshBase::const_element_iterator elemj = mesh.local_elements_begin();
617  for (std::size_t j = 0; j != i; ++j, ++elemj)
618  {
619  if ((elem_keys[i] == elem_keys[j]) &&
620  ((*elemi)->level() == (*elemj)->level()))
621  {
622  libMesh::err <<
623  "level " << (*elemj)->level() << " elem\n" <<
624  (**elemj) << " centroid " <<
625  (*elemj)->centroid() << " has HilbertIndices " <<
626  elem_keys[j] << " or " <<
627  get_hilbert_index((*elemj), bbox) <<
628  std::endl;
629  libMesh::err <<
630  "level " << (*elemi)->level() << " elem\n" <<
631  (**elemi) << " centroid " <<
632  (*elemi)->centroid() << " has HilbertIndices " <<
633  elem_keys[i] << " or " <<
634  get_hilbert_index((*elemi), bbox) <<
635  std::endl;
636  libmesh_error_msg("Error: level " << (*elemi)->level() << " elements with duplicate Hilbert keys!");
637  }
638  }
639  }
640  }
641  } // done checking Hilbert keys
642 }
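
The pairwise comparison above is O(N^2) in the number of local keys, which is acceptable for a debugging utility. The same duplicate detection can be done in O(N log N) by sorting a copy of the keys and scanning adjacent entries; a std-only sketch with scalar stand-in keys (the real code compares Parallel::DofObjectKey values):

```cpp
#include <algorithm>
#include <vector>

// Detect duplicate keys in O(N log N): after sorting a copy, any
// duplicate pair must be adjacent, so one adjacent_find scan suffices.
bool has_duplicate_keys(std::vector<long> keys)
{
    std::sort(keys.begin(), keys.end());
    return std::adjacent_find(keys.begin(), keys.end()) != keys.end();
}
```

Note that for elements a faithful version would also compare refinement levels before reporting a clash, since equal keys at different levels are expected (the TRI case mentioned in the comments).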
void libMesh::MeshCommunication::clear ( )

Clears all data structures and resets to a pristine state.

Definition at line 314 of file mesh_communication.C.

Referenced by ~MeshCommunication().

315 {
316  // _neighboring_processors.clear();
317 }
void libMesh::MeshCommunication::delete_remote_elements ( DistributedMesh & mesh,
const std::set< Elem * > &  extra_ghost_elem_ids 
) const

This method takes an input DistributedMesh which may be distributed among all the processors.

Each processor deletes all elements which are neither local elements nor "ghost" elements which touch local elements, and deletes all nodes which are not contained in local or ghost elements. The end result is that a previously serial DistributedMesh will be distributed between processors. Since this method is collective it must be called by all processors.

The std::set is a list of extra elements that you don't want to delete. These will be left on the current processor along with local elements and ghosted neighbors.
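
The keep/delete decision can be viewed as set logic: keep local elements, the ghost elements requested for them, and anything in extra_ghost_elem_ids; everything else is deleted. A std-only sketch with integer element ids (the neighbor map is a hypothetical stand-in for the ghosting functors):

```cpp
#include <map>
#include <set>
#include <vector>

// Decide which elements survive delete_remote_elements(): local
// elements, their ghosted neighbors, and explicitly protected extras.
std::set<int> elements_to_keep(const std::set<int> & local,
                               const std::map<int, std::vector<int>> & neighbors,
                               const std::set<int> & extra_ghosts)
{
    std::set<int> keep(extra_ghosts);   // never delete protected elements
    for (int e : local)
    {
        keep.insert(e);                 // local elements always stay
        auto it = neighbors.find(e);
        if (it != neighbors.end())      // plus their ghost layer
            keep.insert(it->second.begin(), it->second.end());
    }
    return keep;
}
```

In the real method the keep set is then closed under family relationships (ancestors, children) before the remaining elements and orphaned nodes are deleted, most refined level first.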

Definition at line 1783 of file mesh_communication.C.

References libMesh::Elem::active_family_tree(), libMesh::MeshBase::clear_point_locator(), libMesh::ParallelObject::comm(), libMesh::connect_children(), libMesh::connect_families(), libMesh::DistributedMesh::delete_elem(), libMesh::DistributedMesh::delete_node(), libMesh::GhostingFunctor::delete_remote_elements(), libMesh::MeshBase::get_boundary_info(), libMesh::MeshBase::ghosting_functors_begin(), libMesh::MeshBase::ghosting_functors_end(), libMesh::DofObject::invalid_processor_id, libMesh::DistributedMesh::is_serial(), libMesh::DistributedMesh::level_elements_begin(), libMesh::DistributedMesh::level_elements_end(), libMesh::libmesh_assert(), libMesh::MeshTools::libmesh_assert_valid_refinement_tree(), libMesh::Elem::make_links_to_me_remote(), libMesh::DistributedMesh::max_elem_id(), libMesh::DistributedMesh::max_node_id(), libMesh::MeshTools::n_levels(), libMesh::DistributedMesh::node_ptr_range(), libMesh::DistributedMesh::parallel_max_elem_id(), libMesh::DistributedMesh::parallel_max_node_id(), libMesh::ParallelObject::processor_id(), libMesh::query_ghosting_functors(), libMesh::reconnect_nodes(), libMesh::BoundaryInfo::regenerate_id_sets(), libMesh::Elem::subactive(), and libMesh::Parallel::Communicator::verify().

Referenced by allgather(), and libMesh::DistributedMesh::delete_remote_elements().

1785 {
1786  // The mesh should know it's about to be parallelized
1787  libmesh_assert (!mesh.is_serial());
1788 
1789  LOG_SCOPE("delete_remote_elements()", "MeshCommunication");
1790 
1791 #ifdef DEBUG
1792  // We expect maximum ids to be in sync so we can use them to size
1793  // vectors
1794  libmesh_assert(mesh.comm().verify(mesh.max_node_id()));
1795  libmesh_assert(mesh.comm().verify(mesh.max_elem_id()));
1796  const dof_id_type par_max_node_id = mesh.parallel_max_node_id();
1797  const dof_id_type par_max_elem_id = mesh.parallel_max_elem_id();
1798  libmesh_assert_equal_to (par_max_node_id, mesh.max_node_id());
1799  libmesh_assert_equal_to (par_max_elem_id, mesh.max_elem_id());
1800 #endif
1801 
1802  std::set<const Elem *, CompareElemIdsByLevel> elements_to_keep;
1803 
1804  // Don't delete elements that we were explicitly told not to
1805  for (std::set<Elem *>::const_iterator it = extra_ghost_elem_ids.begin();
1806  it != extra_ghost_elem_ids.end(); ++it)
1807  {
1808  const Elem * elem = *it;
1809 
1810  std::vector<const Elem *> active_family;
1811 #ifdef LIBMESH_ENABLE_AMR
1812  if (!elem->subactive())
1813  elem->active_family_tree(active_family);
1814  else
1815 #endif
1816  active_family.push_back(elem);
1817 
1818  for (std::size_t i=0; i != active_family.size(); ++i)
1819  elements_to_keep.insert(active_family[i]);
1820  }
1821 
1822  // See which elements we still need to keep ghosted, given that
1823  // we're keeping local and unpartitioned elements.
1824  query_ghosting_functors(mesh, mesh.processor_id(),
1825  false, elements_to_keep);
1826  query_ghosting_functors(mesh, DofObject::invalid_processor_id,
1827  false, elements_to_keep);
1828 
1829  // The inactive elements we need to send should have their
1830  // immediate children present.
1831  connect_children(mesh, mesh.processor_id(), elements_to_keep);
1832  connect_children(mesh, DofObject::invalid_processor_id, elements_to_keep);
1833 
1834  // The elements we need should have their ancestors and their
1835  // subactive children present too.
1836  connect_families(elements_to_keep);
1837 
1838  // Don't delete nodes that our semilocal elements need
1839  std::set<const Node *> connected_nodes;
1840  reconnect_nodes(elements_to_keep, connected_nodes);
1841 
1842  // Delete all the elements we have no reason to save,
1843  // starting with the most refined so that the mesh
1844  // is valid at all intermediate steps
1845  unsigned int n_levels = MeshTools::n_levels(mesh);
1846 
1847  for (int l = n_levels - 1; l >= 0; --l)
1848  {
1849  MeshBase::element_iterator lev_elem_it = mesh.level_elements_begin(l),
1850  lev_end = mesh.level_elements_end(l);
1851  for (; lev_elem_it != lev_end; ++lev_elem_it)
1852  {
1853  Elem * elem = *lev_elem_it;
1854  libmesh_assert (elem);
1855  // Make sure we don't leave any invalid pointers
1856  const bool keep_me = elements_to_keep.count(elem);
1857 
1858  if (!keep_me)
1859  elem->make_links_to_me_remote();
1860 
1861  // delete_elem doesn't currently invalidate element
1862  // iterators... that had better not change
1863  if (!keep_me)
1864  mesh.delete_elem(elem);
1865  }
1866  }
1867 
1868  // Delete all the nodes we have no reason to save
1869  for (auto & node : mesh.node_ptr_range())
1870  {
1871  libmesh_assert(node);
1872  if (!connected_nodes.count(node))
1873  mesh.delete_node(node);
1874  }
1875 
1876  // If we had a point locator, it's invalid now that some of the
1877  // elements it pointed to have been deleted.
1878  mesh.clear_point_locator();
1879 
1880  // Much of our boundary info may have been for now-remote parts of
1881  // the mesh, in which case we don't want to keep local copies.
1882  mesh.get_boundary_info().regenerate_id_sets();
1883 
1884  // We now have all remote elements and nodes deleted; our ghosting
1885  // functors should be ready to delete any now-redundant cached data
1886  // they use too.
1887  std::set<GhostingFunctor *>::iterator gf_it = mesh.ghosting_functors_begin();
1888  const std::set<GhostingFunctor *>::iterator gf_end = mesh.ghosting_functors_end();
1889  for (; gf_it != gf_end; ++gf_it)
1890  {
1891  GhostingFunctor *gf = *gf_it;
1892  gf->delete_remote_elements();
1893  }
1894 
1895 #ifdef DEBUG
1896  MeshTools::libmesh_assert_valid_refinement_tree(mesh);
1897 #endif
1898 }
template<typename ForwardIterator >
void libMesh::MeshCommunication::find_global_indices ( const Parallel::Communicator &  communicator,
const libMesh::BoundingBox &  bbox,
const ForwardIterator &  begin,
const ForwardIterator &  end,
std::vector< dof_id_type > &  index_map 
) const

This method determines a globally unique, partition-agnostic index for each object in the input range.

Definition at line 651 of file mesh_communication_global_indices.C.

References libMesh::Parallel::Communicator::allgather(), libMesh::Parallel::Sort< KeyType, IdxType >::bin(), distance(), end, libMesh::MeshTools::Generation::Private::idx(), libMesh::DofObject::invalid_processor_id, libMesh::libmesh_assert(), std::max(), libMesh::Parallel::Communicator::rank(), libMesh::Real, libMesh::Parallel::Communicator::send_receive(), libMesh::Parallel::Communicator::size(), and libMesh::Parallel::Sort< KeyType, IdxType >::sort().

Referenced by allgather(), libMesh::ParmetisPartitioner::initialize(), libMesh::MetisPartitioner::partition_range(), and libMesh::Partitioner::partition_unpartitioned_elements().

656 {
657  LOG_SCOPE ("find_global_indices()", "MeshCommunication");
658 
659  // This method determines partition-agnostic global indices
660  // for nodes and elements.
661 
662  // Algorithm:
663  // (1) compute the Hilbert key for each local node/element
664  // (2) perform a parallel sort of the Hilbert key
665  // (3) get the min/max value on each processor
666  // (4) determine the position in the global ranking for
667  // each local object
668  index_map.clear();
669  std::size_t n_objects = std::distance (begin, end);
670  index_map.reserve(n_objects);
671 
672  //-------------------------------------------------------------
673  // (1) compute Hilbert keys
674  // These aren't trivial to compute, and we will need them again.
675  // But the binsort will sort the input vector, trashing the order
676  // that we'd like to rely on. So, two vectors...
677  std::vector<Parallel::DofObjectKey>
678  sorted_hilbert_keys,
679  hilbert_keys;
680  sorted_hilbert_keys.reserve(n_objects);
681  hilbert_keys.reserve(n_objects);
682  {
683  LOG_SCOPE("compute_hilbert_indices()", "MeshCommunication");
684  for (ForwardIterator it=begin; it!=end; ++it)
685  {
686  const Parallel::DofObjectKey hi(get_hilbert_index (*it, bbox));
687  hilbert_keys.push_back(hi);
688 
689  if ((*it)->processor_id() == communicator.rank())
690  sorted_hilbert_keys.push_back(hi);
691 
692  // someone needs to take care of unpartitioned objects!
693  if ((communicator.rank() == 0) &&
694  ((*it)->processor_id() == DofObject::invalid_processor_id))
695  sorted_hilbert_keys.push_back(hi);
696  }
697  }
698 
699  //-------------------------------------------------------------
700  // (2) parallel sort the Hilbert keys
701  START_LOG ("parallel_sort()", "MeshCommunication");
702  Parallel::Sort<Parallel::DofObjectKey> sorter (communicator,
703  sorted_hilbert_keys);
704  sorter.sort();
705  STOP_LOG ("parallel_sort()", "MeshCommunication");
706  const std::vector<Parallel::DofObjectKey> & my_bin = sorter.bin();
707 
708  // The number of objects in my_bin on each processor
709  std::vector<unsigned int> bin_sizes(communicator.size());
710  communicator.allgather (static_cast<unsigned int>(my_bin.size()), bin_sizes);
711 
712  // The offset of my first global index
713  unsigned int my_offset = 0;
714  for (unsigned int pid=0; pid<communicator.rank(); pid++)
715  my_offset += bin_sizes[pid];
716 
717  //-------------------------------------------------------------
718  // (3) get the max value on each processor
719  std::vector<Parallel::DofObjectKey>
720  upper_bounds(1);
721 
722  if (!my_bin.empty())
723  upper_bounds[0] = my_bin.back();
724 
725  communicator.allgather (upper_bounds, /* identical_buffer_sizes = */ true);
726 
727  // Be careful here. The *_upper_bounds will be used to find the processor
728  // a given object belongs to. So, if a processor contains no objects (possible!)
729  // then copy the bound from the lower processor id.
730  for (unsigned int p=1; p<communicator.size(); p++)
731  if (!bin_sizes[p]) upper_bounds[p] = upper_bounds[p-1];
732 
733 
734  //-------------------------------------------------------------
735  // (4) determine the position in the global ranking for
736  // each local object
737  {
738  //----------------------------------------------
739  // all objects, not just local ones
740 
741  // Request sets to send to each processor
742  std::vector<std::vector<Parallel::DofObjectKey>>
743  requested_ids (communicator.size());
744  // Results to gather from each processor
745  std::vector<std::vector<dof_id_type>>
746  filled_request (communicator.size());
747 
748  // build up list of requests
749  std::vector<Parallel::DofObjectKey>::const_iterator hi =
750  hilbert_keys.begin();
751 
752  for (ForwardIterator it = begin; it != end; ++it)
753  {
754  libmesh_assert (hi != hilbert_keys.end());
755 
756  std::vector<Parallel::DofObjectKey>::iterator lb =
757  std::lower_bound(upper_bounds.begin(), upper_bounds.end(),
758  *hi);
759 
760  const processor_id_type pid =
761  cast_int<processor_id_type>
762  (std::distance (upper_bounds.begin(), lb));
763 
764  libmesh_assert_less (pid, communicator.size());
765 
766  requested_ids[pid].push_back(*hi);
767 
768  ++hi;
769  // go ahead and put pid in index_map, that way we
770  // don't have to repeat the std::lower_bound()
771  index_map.push_back(pid);
772  }
773 
774  // start with pid=0, so that we will trade with ourself
775  std::vector<Parallel::DofObjectKey> request_to_fill;
776  std::vector<dof_id_type> global_ids;
777  for (processor_id_type pid=0; pid<communicator.size(); pid++)
778  {
779  // Trade my requests with processor procup and procdown
780  const processor_id_type procup = cast_int<processor_id_type>
781  ((communicator.rank() + pid) % communicator.size());
782  const processor_id_type procdown = cast_int<processor_id_type>
783  ((communicator.size() + communicator.rank() - pid) %
784  communicator.size());
785 
786  communicator.send_receive(procup, requested_ids[procup],
787  procdown, request_to_fill);
788 
789  // Fill the requests
790  global_ids.clear(); global_ids.reserve(request_to_fill.size());
791  for (std::size_t idx=0; idx<request_to_fill.size(); idx++)
792  {
793  const Parallel::DofObjectKey & hilbert_indices = request_to_fill[idx];
794  libmesh_assert_less_equal (hilbert_indices, upper_bounds[communicator.rank()]);
795 
796  // find the requested index in my node bin
797  std::vector<Parallel::DofObjectKey>::const_iterator pos =
798  std::lower_bound (my_bin.begin(), my_bin.end(), hilbert_indices);
799  libmesh_assert (pos != my_bin.end());
800 #ifdef DEBUG
801  // If we could not find the requested Hilbert index in
802  // my_bin, something went terribly wrong, possibly the
803  // Mesh was displaced differently on different processors,
804  // and therefore the Hilbert indices don't agree.
805  if (*pos != hilbert_indices)
806  {
807  // The input will be hilbert_indices. We convert it
808  // to BitVecType using the operator= provided by the
809  // BitVecType class. BitVecType is a CBigBitVec!
810  Hilbert::BitVecType input;
811 #ifdef LIBMESH_ENABLE_UNIQUE_ID
812  input = hilbert_indices.first;
813 #else
814  input = hilbert_indices;
815 #endif
816 
817  // Get output in a vector of CBigBitVec
818  std::vector<CBigBitVec> output(3);
819 
820  // Call the indexToCoords function
821  Hilbert::indexToCoords(&output[0], 8*sizeof(Hilbert::inttype), 3, input);
822 
823  // The entries in the output racks are integers in the
824  // range [0, Hilbert::inttype::max] which can be
825  // converted to floating point values in [0,1] and
826  // finally to actual values using the bounding box.
827  Real max_int_as_real = static_cast<Real>(std::numeric_limits<Hilbert::inttype>::max());
828 
829  // Get the points in [0,1]^3. The zeroth rack of each entry in
830  // 'output' maps to the normalized x, y, and z locations,
831  // respectively.
832  Point p_hat(static_cast<Real>(output[0].racks()[0]) / max_int_as_real,
833  static_cast<Real>(output[1].racks()[0]) / max_int_as_real,
834  static_cast<Real>(output[2].racks()[0]) / max_int_as_real);
835 
836  // Convert the points from [0,1]^3 to their actual (x,y,z) locations
837  Real
838  xmin = bbox.first(0),
839  xmax = bbox.second(0),
840  ymin = bbox.first(1),
841  ymax = bbox.second(1),
842  zmin = bbox.first(2),
843  zmax = bbox.second(2);
844 
845  // Convert the points from [0,1]^3 to their actual (x,y,z) locations
846  Point p(xmin + (xmax-xmin)*p_hat(0),
847  ymin + (ymax-ymin)*p_hat(1),
848  zmin + (zmax-zmin)*p_hat(2));
849 
850  libmesh_error_msg("Could not find hilbert indices: "
851  << hilbert_indices
852  << " corresponding to point " << p);
853  }
854 #endif
855 
856  // Finally, assign the global index based off the position of the index
857  // in my array, properly offset.
858  global_ids.push_back (cast_int<dof_id_type>(std::distance(my_bin.begin(), pos) + my_offset));
859  }
860 
861  // and trade back
862  communicator.send_receive (procdown, global_ids,
863  procup, filled_request[procup]);
864  }
865 
866  // We now have all the filled requests, so we can loop through our
867  // nodes once and assign the global index to each one.
868  {
869  std::vector<std::vector<dof_id_type>::const_iterator>
870  next_obj_on_proc; next_obj_on_proc.reserve(communicator.size());
871  for (unsigned int pid=0; pid<communicator.size(); pid++)
872  next_obj_on_proc.push_back(filled_request[pid].begin());
873 
874  unsigned int cnt=0;
875  for (ForwardIterator it = begin; it != end; ++it, cnt++)
876  {
877  const processor_id_type pid = cast_int<processor_id_type>
878  (index_map[cnt]);
879 
880  libmesh_assert_less (pid, communicator.size());
881  libmesh_assert (next_obj_on_proc[pid] != filled_request[pid].end());
882 
883  const dof_id_type global_index = *next_obj_on_proc[pid];
884  index_map[cnt] = global_index;
885 
886  ++next_obj_on_proc[pid];
887  }
888  }
889  }
890 
891  libmesh_assert_equal_to(index_map.size(), n_objects);
892 }
void libMesh::MeshCommunication::gather ( const processor_id_type  root_id,
DistributedMesh &  mesh 
) const
void libMesh::MeshCommunication::gather_neighboring_elements ( DistributedMesh &  mesh) const
void libMesh::MeshCommunication::make_elems_parallel_consistent ( MeshBase &  mesh)

Copy ids of ghost elements from their local processors.

Definition at line 1488 of file mesh_communication.C.

References libMesh::MeshBase::active_elements_begin(), libMesh::MeshBase::active_elements_end(), libMesh::ParallelObject::comm(), libMesh::MeshBase::query_elem_ptr(), libMesh::MeshBase::renumber_elem(), libMesh::Parallel::sync_dofobject_data_by_id(), and libMesh::Parallel::sync_element_data_by_parent_id().

Referenced by libMesh::MeshRefinement::_refine_elements(), and allgather().

1489 {
1490  // This function must be run on all processors at once
1491  libmesh_parallel_only(mesh.comm());
1492 
1493  LOG_SCOPE ("make_elems_parallel_consistent()", "MeshCommunication");
1494 
1495  SyncIds syncids(mesh, &MeshBase::renumber_elem);
1496  Parallel::sync_element_data_by_parent_id
1497  (mesh, mesh.active_elements_begin(),
1498  mesh.active_elements_end(), syncids);
1499 
1500 #ifdef LIBMESH_ENABLE_UNIQUE_ID
1501  SyncUniqueIds<Elem> syncuniqueids(mesh, &MeshBase::query_elem_ptr);
1502  Parallel::sync_dofobject_data_by_id
1503  (mesh.comm(), mesh.active_elements_begin(),
1504  mesh.active_elements_end(), syncuniqueids);
1505 #endif
1506 }
void libMesh::MeshCommunication::make_new_node_proc_ids_parallel_consistent ( MeshBase &  mesh)

Assuming all processor ids on nodes touching local elements are parallel consistent, this function makes processor ids on new nodes on other processors parallel consistent as well.

Definition at line 1672 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), and libMesh::Parallel::sync_node_data_by_element_id().

Referenced by allgather().

1673 {
1674  LOG_SCOPE ("make_new_node_proc_ids_parallel_consistent()", "MeshCommunication");
1675 
1676  // This function must be run on all processors at once
1677  libmesh_parallel_only(mesh.comm());
1678 
1679  // When this function is called, each section of a parallelized mesh
1680  // should be in the following state:
1681  //
1682  // Local nodes should have unique authoritative ids,
1683  // and processor ids consistent with all processors which own
1684  // an element touching them.
1685  //
1686  // Ghost nodes touching local elements should have processor ids
1687  // consistent with all processors which own an element touching
1688  // them.
1689  SyncProcIds sync(mesh);
1690  Parallel::sync_node_data_by_element_id
1691  (mesh, mesh.elements_begin(), mesh.elements_end(),
1692  ElemNodesMaybeNew(), NodeMaybeNew(), sync);
1693 }
void libMesh::MeshCommunication::make_new_nodes_parallel_consistent ( MeshBase &  mesh)

Copy processor_ids and ids on new nodes from their local processors.

Definition at line 1740 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), and libMesh::MeshTools::correct_node_proc_ids().

Referenced by libMesh::MeshRefinement::_refine_elements(), and allgather().

1741 {
1742  // This function must be run on all processors at once
1743  libmesh_parallel_only(mesh.comm());
1744 
1745  // When this function is called, each section of a parallelized mesh
1746  // should be in the following state:
1747  //
1748  // All nodes should have the exact same physical location on every
1749  // processor where they exist.
1750  //
1751  // Local nodes should have unique authoritative ids,
1752  // and processor ids consistent with all processors which own
1753  // an element touching them.
1754  //
1755  // Ghost nodes touching local elements should have processor ids
1756  // consistent with all processors which own an element touching
1757  // them.
1758  //
1759  // Ghost nodes should have ids which are either already correct
1760  // or which are in the "unpartitioned" id space.
1761 
1762  // First, let's sync up processor ids. Some of these processor ids
1763  // may be "wrong" from coarsening, but they're right in the sense
1764  // that they'll tell us who has the authoritative dofobject ids for
1765  // each node.
1766 
1767  this->make_new_node_proc_ids_parallel_consistent(mesh);
1768 
1769  // Second, sync up dofobject ids.
1770  this->make_node_ids_parallel_consistent(mesh);
1771 
1772  // Third, sync up dofobject unique_ids if applicable.
1773  this->make_node_unique_ids_parallel_consistent(mesh);
1774 
1775  // Finally, correct the processor ids to make DofMap happy
1776  MeshTools::correct_node_proc_ids(mesh);
1777 }
void libMesh::MeshCommunication::make_node_ids_parallel_consistent ( MeshBase &  mesh)

Assuming all ids on local nodes are globally unique, and assuming all processor ids are parallel consistent, this function makes all other ids parallel consistent.

Definition at line 1449 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), and libMesh::Parallel::sync_node_data_by_element_id().

Referenced by allgather().

1450 {
1451  // This function must be run on all processors at once
1452  libmesh_parallel_only(mesh.comm());
1453 
1454  LOG_SCOPE ("make_node_ids_parallel_consistent()", "MeshCommunication");
1455 
1456  SyncNodeIds syncids(mesh);
1457  Parallel::sync_node_data_by_element_id
1458  (mesh, mesh.elements_begin(), mesh.elements_end(),
1459  SyncEverything(), SyncEverything(), syncids);
1460 }
void libMesh::MeshCommunication::make_node_proc_ids_parallel_consistent ( MeshBase &  mesh)

Assuming all processor ids on nodes touching local elements are parallel consistent, this function makes all other processor ids parallel consistent as well.

Definition at line 1643 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), and libMesh::Parallel::sync_node_data_by_element_id().

Referenced by allgather().

1644 {
1645  LOG_SCOPE ("make_node_proc_ids_parallel_consistent()", "MeshCommunication");
1646 
1647  // This function must be run on all processors at once
1648  libmesh_parallel_only(mesh.comm());
1649 
1650  // When this function is called, each section of a parallelized mesh
1651  // should be in the following state:
1652  //
1653  // All nodes should have the exact same physical location on every
1654  // processor where they exist.
1655  //
1656  // Local nodes should have unique authoritative ids,
1657  // and processor ids consistent with all processors which own
1658  // an element touching them.
1659  //
1660  // Ghost nodes touching local elements should have processor ids
1661  // consistent with all processors which own an element touching
1662  // them.
1663  SyncProcIds sync(mesh);
1664  Parallel::sync_node_data_by_element_id
1665  (mesh, mesh.elements_begin(), mesh.elements_end(),
1666  SyncEverything(), SyncEverything(), sync);
1667 }
void libMesh::MeshCommunication::make_node_unique_ids_parallel_consistent ( MeshBase &  mesh)

Assuming all unique_ids on local nodes are globally unique, and assuming all processor ids are parallel consistent, this function makes all ghost unique_ids parallel consistent.

Definition at line 1464 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::libmesh_ignore(), libMesh::MeshBase::nodes_begin(), libMesh::MeshBase::nodes_end(), libMesh::MeshBase::query_node_ptr(), and libMesh::Parallel::sync_dofobject_data_by_id().

Referenced by libMesh::BoundaryInfo::add_elements(), allgather(), and libMesh::Nemesis_IO::read().

1465 {
1466  // Avoid unused variable warnings if unique ids aren't enabled.
1467  libmesh_ignore(mesh);
1468 
1469  // This function must be run on all processors at once
1470  libmesh_parallel_only(mesh.comm());
1471 
1472 #ifdef LIBMESH_ENABLE_UNIQUE_ID
1473  LOG_SCOPE ("make_node_unique_ids_parallel_consistent()", "MeshCommunication");
1474 
1475  SyncUniqueIds<Node> syncuniqueids(mesh, &MeshBase::query_node_ptr);
1476  Parallel::sync_dofobject_data_by_id(mesh.comm(),
1477  mesh.nodes_begin(),
1478  mesh.nodes_end(),
1479  syncuniqueids);
1480 
1481 #endif
1482 }
void libMesh::MeshCommunication::make_nodes_parallel_consistent ( MeshBase &  mesh)

Copy processor_ids and ids on ghost nodes from their local processors.

This is useful for code which wants to add nodes to a distributed mesh.

Definition at line 1698 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), and libMesh::MeshTools::correct_node_proc_ids().

Referenced by libMesh::MeshRefinement::_coarsen_elements(), libMesh::UnstructuredMesh::all_second_order(), libMesh::MeshTools::Modification::all_tri(), and allgather().

1699 {
1700  // This function must be run on all processors at once
1701  libmesh_parallel_only(mesh.comm());
1702 
1703  // When this function is called, each section of a parallelized mesh
1704  // should be in the following state:
1705  //
1706  // All nodes should have the exact same physical location on every
1707  // processor where they exist.
1708  //
1709  // Local nodes should have unique authoritative ids,
1710  // and processor ids consistent with all processors which own
1711  // an element touching them.
1712  //
1713  // Ghost nodes touching local elements should have processor ids
1714  // consistent with all processors which own an element touching
1715  // them.
1716  //
1717  // Ghost nodes should have ids which are either already correct
1718  // or which are in the "unpartitioned" id space.
1719 
1720  // First, let's sync up processor ids. Some of these processor ids
1721  // may be "wrong" from coarsening, but they're right in the sense
1722  // that they'll tell us who has the authoritative dofobject ids for
1723  // each node.
1724 
1726 
1727  // Second, sync up dofobject ids.
1729 
1730  // Third, sync up dofobject unique_ids if applicable.
1732 
1733  // Finally, correct the processor ids to make DofMap happy
1735 }
void libMesh::MeshCommunication::make_p_levels_parallel_consistent ( MeshBase &  mesh)

Copy p levels of ghost elements from their local processors.

Definition at line 1512 of file mesh_communication.C.

References libMesh::Elem::as_parent_node(), libMesh::ParallelObject::comm(), data, libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), libMesh::invalid_uint, libMesh::Elem::is_node_on_side(), libMesh::Elem::JUST_REFINED, mesh, libMesh::Elem::neighbor(), libMesh::Elem::neighbor_ptr_range(), libMesh::MeshBase::node_ref(), libMesh::Elem::parent(), libMesh::DofObject::processor_id(), libMesh::Elem::refinement_flag(), libMesh::remote_elem, libMesh::Elem::side_index_range(), libMesh::Parallel::sync_dofobject_data_by_id(), and libMesh::Elem::which_child_am_i().

Referenced by libMesh::MeshRefinement::_coarsen_elements(), libMesh::MeshRefinement::_refine_elements(), and allgather().

1513 {
1514  // This function must be run on all processors at once
1515  libmesh_parallel_only(mesh.comm());
1516 
1517  LOG_SCOPE ("make_p_levels_parallel_consistent()", "MeshCommunication");
1518 
1519  SyncPLevels syncplevels(mesh);
1520  Parallel::sync_dofobject_data_by_id
1521  (mesh.comm(), mesh.elements_begin(), mesh.elements_end(),
1522  syncplevels);
1523 }
void libMesh::MeshCommunication::redistribute ( DistributedMesh &  mesh,
bool  newly_coarsened_only = false 
) const

This method takes a parallel distributed mesh and redistributes the elements.

Specifically, any element stored on a given processor but owned by another is sent to its owning processor; similarly, any element owned by the current processor but stored elsewhere is received. Once this step is completed, any required ghost elements are updated. The final result is that each processor stores only the elements it actually owns plus any ghost elements required to satisfy data dependencies. This method can be invoked after a partitioning step to effect the new partitioning.

Redistribution can also be done with newly coarsened elements' neighbors only.

Definition at line 323 of file mesh_communication.C.

References libMesh::Parallel::Communicator::alltoall(), libMesh::Parallel::any_source, libMesh::MeshBase::clear_point_locator(), libMesh::ParallelObject::comm(), libMesh::connect_children(), libMesh::connect_families(), libMesh::Parallel::Communicator::get_unique_tag(), libMesh::MeshBase::ghosting_functors_begin(), libMesh::MeshBase::ghosting_functors_end(), libMesh::DistributedMesh::is_serial(), libMesh::libmesh_assert(), libMesh::MeshTools::libmesh_assert_equal_n_systems(), libMesh::MeshTools::libmesh_assert_valid_refinement_tree(), libmesh_nullptr, libMesh::MeshTools::n_elem(), libMesh::ParallelObject::n_processors(), libMesh::ParallelObject::processor_id(), libMesh::query_ghosting_functors(), libMesh::Parallel::Communicator::receive_packed_range(), libMesh::reconnect_nodes(), libMesh::GhostingFunctor::redistribute(), libMesh::Parallel::Communicator::send_packed_range(), libMesh::DistributedMesh::unpartitioned_elements_begin(), libMesh::DistributedMesh::unpartitioned_elements_end(), and libMesh::Parallel::wait().

Referenced by libMesh::DistributedMesh::redistribute(), and ~MeshCommunication().

324 {
325  // no MPI == one processor, no redistribution
326  return;
327 }
void libMesh::MeshCommunication::send_coarse_ghosts ( MeshBase &  mesh) const

The documentation for this class was generated from the following files: