[TOC]

The following instructions aim to set up a baseline multi-user environment for building MOOSE-based applications on a cluster with a job scheduler.

# Base Prerequisites
All of these prerequisites are the responsibility of the cluster's administrators.

* An environment-module system (optional) such as ['Modules Environment'](http://modules.sourceforge.net/), or similar environment-management software, is highly recommended on any system where multiple users require multiple environment setups.

* Whatever compiler stack you choose for your cluster (GCC/Clang/Intel with MPICH/OpenMPI/MVAPICH), **the minimum requirement is that it must be C++11 compatible**. If you are unsure which compiler to use (and how to use it), consult your cluster's system administrators.

* **CMake**. A modern version of CMake (>2.8) is required to build some of the meta packages we need to include in PETSc.

* **Python 2.7.x development libraries**. These are normally an easy addition through your package manager. Older distributions (such as RHEL 6) may need Python 2.7 and its accompanying development package installed manually.

# Setup

```bash
export CLUSTER_TEMP=`mktemp -d /tmp/cluster_temp.XXXXXX`
```

*Note: The terminal you used to run that command should be the terminal you use from now on, until these instructions are complete.*

## Download PETSc

```bash
cd $CLUSTER_TEMP

curl -L -O http://ftp.mcs.anl.gov/pub/petsc/release-snapshots/petsc-3.6.4.tar.gz
```
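Before extracting, a quick integrity check is worthwhile: listing the archive contents will fail if the download was truncated or corrupted.

```bash
# Listing the archive exercises the whole gzip stream without extracting it
tar -tzf petsc-3.6.4.tar.gz > /dev/null && echo "archive OK"
```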

## Set your umask
Some systems ship with an unusual umask, so let's set a sane value just in case:

```bash
umask 0022
```
This ensures that everything we install is readable and executable by everyone on your cluster.
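With `umask 0022` in effect, newly created files default to mode 644 (`rw-r--r--`) and directories to 755 (`rwxr-xr-x`). A quick sanity check:

```bash
umask 0022
touch /tmp/umask_check
stat -c '%a' /tmp/umask_check   # prints 644: owner-writable, world-readable
rm /tmp/umask_check
```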

## Choose a base path
Export a base path variable that will be the home location for the compiler stack. All files related to MOOSE will be stored in this location, so choose carefully:

```bash
export PACKAGES_DIR=/opt/moose-compilers
```

*Note: The PACKAGES_DIR must reside in a location that all of the compute nodes can access.*

## Extract downloaded packages

```bash
cd $CLUSTER_TEMP
tar -xf petsc-3.6.4.tar.gz
```

## Setup necessary Modules

Even if you are not using Modules, the following instructions should give you a good idea of what is needed in the MOOSE environment.


### Create MOOSE Module
```bash
sudo mkdir -p $PACKAGES_DIR/modulefiles
sudo vi $PACKAGES_DIR/modulefiles/moose-dev-gcc
```

Add the following content to that file:
```tcl
#%Module1.0#####################################################################
##
## MOOSE module
##
set base_path   INSERT BASE PATH HERE!

setenv CC       mpicc
setenv CXX      mpicxx
setenv F90      mpif90
setenv F77      mpif77
setenv FC       mpif90

setenv          PETSC_DIR       $base_path/petsc/petsc-3.6.4/gcc-opt

```
*Note: You must replace `INSERT BASE PATH HERE!` with whatever `echo $PACKAGES_DIR` returns.*
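One way to make that substitution without opening an editor, as a sketch assuming GNU `sed` (the `|` delimiter avoids clashing with the slashes in the path):

```bash
sudo sed -i "s|INSERT BASE PATH HERE!|$PACKAGES_DIR|" $PACKAGES_DIR/modulefiles/moose-dev-gcc
grep base_path $PACKAGES_DIR/modulefiles/moose-dev-gcc   # verify the substitution
```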


To make the module available in your terminal session throughout the rest of the instructions, export the following:
```bash
export MODULEPATH=$MODULEPATH:$PACKAGES_DIR/modulefiles
```
Note: The above export will have to be added in a more permanent location so that everyone can use it. Do one of the following:

1. Copy this module file to where-ever the rest of the system's modules are located
1. Add the above export command to the system-wide bash profile
1. Inform the user how to add the above export command to their personal profile

On our systems, we prefer option 3, as it makes listing, and thus finding, the MOOSE module(s) easy (they appear at the bottom of a `module available` request). Example:
```bash
me@some_machine#>  module available

--------------------------------------- /usr/share/modules ---------------------------------------
3.2.10

----------------------------- /usr/share/Modules/3.2.10/modulefiles ------------------------------
dot         module-git  module-info modules     null        use.own

-------------------------------- /apps/local/modules/modulefiles ---------------------------------
intel/12.1.2               python/2.7                 starccm+/7.05.026
intel/12.1.3               python/2.7-open            starccm+/7.05.067
intel-mkl/10.3.8           python/3.2                 starccm+/7.06.012(default)
intel-mkl/10.3.9           python/as-2.7.2            starccm+/8.02.008
mvapich2-gcc/1.7           python/as-3.2              totalview/8.11.0(default)
mvapich2-intel/1.7         starccm+/6.06.011          totalview/8.6.2
pbs                        starccm+/7.02.008          use.projects
pgi/12.4                   starccm+/7.04.006          vtk

-------------------------------- /apps/projects/moose/modulefiles --------------------------------
moose-dev-gcc         moose-dev-gcc-parmesh
me@some_machine#>
```

## Install PETSc


```bash
module load moose-dev-gcc
```
Note: Verify that the environment variable `PETSC_DIR` is now set (`echo $PETSC_DIR`). If it is not, something went wrong while creating the MOOSE module above.

```bash
cd $CLUSTER_TEMP/petsc-3.6.4

./configure \
--prefix=$PETSC_DIR \
--download-hypre=1 \
--with-ssl=0 \
--with-debugging=no \
--with-pic=1 \
--with-shared-libraries=1 \
--with-cc=mpicc \
--with-cxx=mpicxx \
--with-fc=mpif90 \
--download-fblaslapack=1 \
--download-metis=1 \
--download-parmetis=1 \
--download-superlu_dist=1 \
--download-scalapack=1 \
--download-mumps=1 \
CC=mpicc CXX=mpicxx FC=mpif90 F77=mpif77 F90=mpif90 \
CFLAGS='-fPIC -fopenmp' \
CXXFLAGS='-fPIC -fopenmp' \
FFLAGS='-fPIC -fopenmp' \
FCFLAGS='-fPIC -fopenmp' \
F90FLAGS='-fPIC -fopenmp' \
F77FLAGS='-fPIC -fopenmp' \
PETSC_DIR=`pwd`
```
When configure completes, it prints the exact make commands needed to build and install PETSc. Because these commands differ from system to system, run the ones configure reports on your machine.
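For reference, on a typical Linux system the commands reported by configure look something like the following. This is an example only: `arch-linux2-c-opt` is an assumed `PETSC_ARCH` value and will likely differ on your system, so use the exact commands printed in your terminal.

```bash
# Example only -- use the exact commands configure prints at the end of its run
make PETSC_DIR=$CLUSTER_TEMP/petsc-3.6.4 PETSC_ARCH=arch-linux2-c-opt all
make PETSC_DIR=$CLUSTER_TEMP/petsc-3.6.4 PETSC_ARCH=arch-linux2-c-opt install
```

If `$PACKAGES_DIR` is owned by root, the install step will need to run with `sudo`.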

## Clean TMPDIR
Clean all the temporary stuff:

```bash
rm -rf $CLUSTER_TEMP
```

Once the above is complete, verify that any user can load and use the MOOSE module and has read access to the PETSC_DIR location. It would also be wise to continue, as a normal user, with Step 2 on our [Getting Started](http://mooseframework.org/getting-started/) pages to confirm that everything works for the users on your cluster.
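A minimal check from a normal (non-root) account, assuming the `MODULEPATH` export from earlier is in place for that user:

```bash
module load moose-dev-gcc
# PETSC_DIR should now point at the installed tree; confirm the headers are readable
test -r "$PETSC_DIR/include/petsc.h" && echo "PETSc install readable"
```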