Guix requires network access to download source code and pre-built binaries. The good news is that only the head node needs that, since compute nodes simply delegate to it.
It is customary for cluster nodes to have access at best to a white list of hosts. Our head node needs at least ci.guix.gnu.org in this white list since this is where it gets pre-built binaries from by default, for all the packages that are in Guix proper. Incidentally, ci.guix.gnu.org also serves as a content-addressed mirror of the source code of those packages. Consequently, it is sufficient to have only ci.guix.gnu.org in that white list.
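As a quick sanity check (merely a suggestion, not a required setup step), you can ask guix weather whether pre-built binaries for a given package are currently available from the default server (see Invoking guix weather in GNU Guix Reference Manual):

$ guix weather openmpi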
Software packages maintained in a separate repository such as one of the various HPC channels are of course unavailable from ci.guix.gnu.org. For these packages, you may want to extend the white list such that source and pre-built binaries (assuming third-party servers provide binaries for these packages) can be downloaded.
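For example, assuming a hypothetical substitute server at https://substitutes.example.org (a real channel would document its actual server, if any), pre-built binaries could be fetched from it in addition to the default server by passing --substitute-urls:

$ guix build starpu \
    --substitute-urls="https://ci.guix.gnu.org https://substitutes.example.org"

Note that the daemon only uses substitutes from servers whose signing key it has been told to trust with guix archive --authorize.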
As a last resort, users can always download source on their workstation and
add it to the cluster’s /gnu/store, like this:
GUIX_DAEMON_SOCKET=ssh://compute-node.example.org \
  guix download http://starpu.gforge.inria.fr/files/starpu-1.2.3/starpu-1.2.3.tar.gz
The above command downloads starpu-1.2.3.tar.gz and sends it to the cluster's guix-daemon instance over SSH.
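When several such transfers are needed, it can be more convenient to export the variable once for the whole shell session; this is just a minor variant of the command above, using the same example host name:

$ export GUIX_DAEMON_SOCKET=ssh://compute-node.example.org
$ guix download http://starpu.gforge.inria.fr/files/starpu-1.2.3/starpu-1.2.3.tar.gz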
Air-gapped clusters require more work. At the moment, our suggestion would be to download all the necessary source code on a workstation running Guix. For instance, using the --sources option of guix build (see Invoking guix build in GNU Guix Reference Manual), the example below downloads all the source code the openmpi package depends on:
$ guix build --sources=transitive openmpi
…
/gnu/store/xc17sm60fb8nxadc4qy0c7rqph499z8s-openmpi-1.10.7.tar.bz2
/gnu/store/s67jx92lpipy2nfj5cz818xv430n4b7w-gcc-5.4.0.tar.xz
/gnu/store/npw9qh8a46lrxiwh9xwk0wpi3jlzmjnh-gmp-6.0.0a.tar.xz
/gnu/store/hcz0f4wkdbsvsdky3c0vdvcawhdkyldb-mpfr-3.1.5.tar.xz
/gnu/store/y9akh452n3p4w2v631nj0injx7y0d68x-mpc-1.0.3.tar.gz
/gnu/store/6g5c35q8avfnzs3v14dzl54cmrvddjm2-glibc-2.25.tar.xz
/gnu/store/p9k48dk3dvvk7gads7fk30xc2pxsd66z-hwloc-1.11.8.tar.bz2
/gnu/store/cry9lqidwfrfmgl0x389cs3syr15p13q-gcc-5.4.0.tar.xz
/gnu/store/7ak0v3rzpqm2c5q1mp3v7cj0rxz0qakf-libfabric-1.4.1.tar.bz2
/gnu/store/vh8syjrsilnbfcf582qhmvpg1v3rampf-rdma-core-14.tar.gz
…
(In case you're wondering, that's more than 320 MiB of compressed source code.)
We can then make a big archive containing all of this (see Invoking guix archive in GNU Guix Reference Manual):
$ guix archive --export \
    `guix build --sources=transitive openmpi` \
    > openmpi-source-code.nar
… and we can eventually transfer that archive to the cluster on removable storage and unpack it there:
$ guix archive --import < openmpi-source-code.nar
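One caveat: guix archive --import only accepts archives signed by a key the receiving daemon trusts. If the import is rejected as unauthorized, the workstation's public signing key, which lives in /etc/guix/signing-key.pub on the workstation, must first be authorized on the cluster:

# On the cluster's head node, after copying the workstation's
# /etc/guix/signing-key.pub alongside the archive:
$ sudo guix archive --authorize < signing-key.pub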
This process has to be repeated every time new source code needs to be brought to the cluster.
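Since the steps are always the same, they lend themselves to a small script; here is a minimal sketch in which the package list and archive name are placeholders to adapt:

#!/bin/sh
# Bundle the transitive source code of the listed packages into a
# single archive for transfer to the air-gapped cluster.
set -e
packages="openmpi hwloc starpu"   # placeholder list; adjust as needed
guix archive --export \
    `guix build --sources=transitive $packages` \
    > cluster-source-code.nar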
As we write this, though, the research institutes involved in Guix-HPC do not have air-gapped clusters. If you have experience with such setups, we would like to hear your feedback and suggestions.