Access to the HPC computers is via a scheduling system called Slurm.
|
To do our actual computing, we first need to install GTDB-tk and set the path to its reference database:
|
$ conda create -y -n gtdbtk -c conda-forge -c bioconda gtdbtk
|
|
$ conda activate gtdbtk
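
As an optional sanity check (not part of the original steps), conda itself can confirm that the package landed in the new environment:

$ conda list -n gtdbtk gtdbtk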
|
|
|
|
|
|
|
|
|
|
|
|
Instead of downloading all the data (which takes an age and loads of space), you can use mine for now. Set the path with this:
|
|
|
|
|
|
$ echo "export GTDBTK_DATA_PATH=/bioinf/home/tfrancis/software/gtdbtk/release95" > ~/miniconda3/envs/gtdbtk/etc/conda/activate.d/gtdbtk.sh
|
|
$ echo "export GTDBTK_DATA_PATH=/bioinf/home/tfrancis/software/gtdbtk/release95" > ~/miniconda3/envs/gtdbtk/etc/conda/activate.d/gtdbtk.sh
|
|
|
|
|
|
|
|
Then activate the environment again with `conda activate gtdbtk`, so that the new `GTDBTK_DATA_PATH` setting is picked up.
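
If that has worked, the environment variable should now point at the reference data; a quick (optional) way to check is:

$ echo $GTDBTK_DATA_PATH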
|
|
|
|
|
|
Now we need to create our submission script. This will contain, firstly, a set of instructions to be read by Slurm, each prefixed with `#SBATCH`. These include details of how much memory and how many CPUs to use, how long the job is allowed to run, and which partition to use. Partitions are just sets of computers (or 'nodes') with certain characteristics or permissions.
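
As a rough sketch of what the top of such a script might look like (the job name, partition name, and resource numbers below are placeholders, not the real values for this cluster), the `#SBATCH` header could read something like:

#!/bin/bash
#SBATCH --job-name=gtdbtk        # a name to identify the job in the queue
#SBATCH --partition=example      # placeholder: use a partition you have access to
#SBATCH --cpus-per-task=16       # placeholder: number of CPUs to request
#SBATCH --mem=200G               # placeholder: amount of memory to request
#SBATCH --time=24:00:00          # placeholder: maximum run time (hh:mm:ss)

# the actual commands to run (e.g. activating conda and calling gtdbtk) go below the #SBATCH lines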
|
|
|
|
|
|
Open a new text file with:
|