Building in GitLab
To learn how to use containers and Docker on your local machine, refer to our tutorial section.
We use our own installation of GitLab for source code management, continuous integration automation, a container registry, and other development lifecycle tasks. It runs fully on Nautilus cluster resources, which gives our users unlimited storage and fast builds. All data in our GitLab except container images is backed up nightly to Google storage, so there's almost no chance of losing code stored in our repository.
Step 1: Create a Git repo
- To use our GitLab installation, register at https://gitlab.nrp-nautilus.io
- Use GitLab for storing your code like any Git repository. See the GitLab basics guide.
- Create a new project in your GitLab account
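As a sketch, the usual workflow for pushing code to a new project looks like this (the project path is a placeholder for your own group and project names):

```shell
# Clone the project you created (replace the path with your own)
git clone https://gitlab.nrp-nautilus.io/<your_group>/<your_project>.git
cd <your_project>

# Add a file, commit, and push
echo "# My project" > README.md
git add README.md
git commit -m "Initial commit"
git push origin main
```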
Step 2: Use Containers Registry
What makes GitLab especially useful for the Kubernetes cluster is its integration with the Container Registry. You can store your containers directly in our cluster and avoid slow downloads from DockerHub (although you're still free to use DockerHub as well).
If you wish to use our registry, open your project at https://gitlab.nrp-nautilus.io, go to the `Deploy -> Container Registry` menu, and follow the instructions there.
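If you prefer to push images manually from your own machine rather than through CI, the commands look like this sketch (authenticate with your GitLab credentials or a personal access token; the project path and tag are placeholders):

```shell
# Log in to the Nautilus GitLab registry
docker login gitlab-registry.nrp-nautilus.io

# Build and push an image under your project's registry path
docker build -t gitlab-registry.nrp-nautilus.io/<your_group>/<your_project>:mytag .
docker push gitlab-registry.nrp-nautilus.io/<your_group>/<your_project>:mytag
```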
Step 3: Continuous Integration automation
To take full advantage of GitLab, introduce yourself to Continuous Integration automation and the more advanced DevOps article.
- Create the `.gitlab-ci.yml` file in your project; see the Quick Start guide. The runners are already configured. There's a list of CI templates available for most common languages.
- If you need to build your Dockerfile and create a container from it, adjust this `.gitlab-ci.yml` template (remove `--cache=true` if you don't need layer caching):
```yaml
image: gcr.io/kaniko-project/executor:debug

stages:
  - build-and-push

build-and-push-job:
  stage: build-and-push
  variables:
    GODEBUG: "http2client=0"
  script:
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --cache=true --push-retry=10 --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA --destination $CI_REGISTRY_IMAGE:latest
```
The Kaniko builder above has severe speed problems pushing to GitLab; setting the environment variable `GODEBUG="http2client=0"` (as in the template) resolves this.
The example below is the variant that uses Docker. Since only one dedicated build server is available, use it only when compatibility with the Docker builder is an important priority:
```yaml
image: docker:git

default:
  tags:
    - docker

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY

stages:
  - build-and-push

build-and-push-job:
  stage: build-and-push
  script:
    - cd $CI_PROJECT_DIR && docker build . -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA $CI_REGISTRY_IMAGE:latest
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest
    - docker rmi -f $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA $CI_REGISTRY_IMAGE:latest
    - docker builder prune -a -f
```
- Go to the `CI / CD -> Jobs` tab to watch your job run and your image being uploaded to the registry.
- From the `Packages -> Container Registry` tab, get the URL of your image to include in your pod definition:
```yaml
spec:
  containers:
    - name: my-container
      image: gitlab-registry.nrp-nautilus.io/<your_group>/<your_project>:<optional_tag>
```
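For context, a minimal complete Pod manifest using such an image might look like the following sketch (the pod name, command, and resource values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: gitlab-registry.nrp-nautilus.io/<your_group>/<your_project>:<optional_tag>
      command: ["sleep", "infinity"]
      resources:
        requests:
          cpu: "1"
          memory: 2Gi
        limits:
          cpu: "1"
          memory: 2Gi
```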
Multiarch builds
Nautilus has several ARM64 nodes, which require specifically built images to run on. Docker's buildx tool can build images for multiple architectures and automatically create a manifest, allowing the same image path to be used on different architectures.
Here's an example of such a CI definition:
```yaml
image: docker:git

default:
  tags:
    - docker

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  - docker buildx create --name builder_$CI_COMMIT_SHORT_SHA

after_script:
  - docker buildx rm builder_$CI_COMMIT_SHORT_SHA

stages:
  - build-and-push

build-and-push-job:
  stage: build-and-push
  script:
    - cd $CI_PROJECT_DIR && docker buildx build --provenance=false --platform linux/arm64/v8,linux/amd64 --builder builder_$CI_COMMIT_SHORT_SHA . -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA -t $CI_REGISTRY_IMAGE:latest --push
```
Using sysbox-provided Docker
Runners with the `sysbox` tag can run the `docker:dind` service, which gives your job a full Docker daemon:
```yaml
image: docker:git

default:
  tags:
    - sysbox

services:
  - name: docker:dind

variables:
  DOCKER_HOST: tcp://docker:2376/
  DOCKER_TLS_CERTDIR: "/certs"
  DOCKER_TLS_VERIFY: 1
  DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"

build-jupyter-base:
  before_script:
    - until docker info; do sleep 1; done
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN gitlab-registry.nrp-nautilus.io
  script:
    - cd $CI_PROJECT_DIR && docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA $CI_REGISTRY_IMAGE:latest
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest
    - docker rmi -f $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA $CI_REGISTRY_IMAGE:latest
    - docker builder prune -a -f
```
Cloud IDE
You can use our Coder web instance or DevPod with your own namespace for an environment similar to GitHub Codespaces or GitPod.
Visual Studio Code allows remote access and editing within any Kubernetes pod through the combination of the Kubernetes and Remote Development extensions. With `kubectl` in your PATH, right-click a pod in the Kubernetes sidebar (after changing the namespace in the config file if you have multiple namespaces) and click Attach Visual Studio Code.
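Setting the namespace can also be done on the command line before opening the sidebar; for example (the namespace name is a placeholder):

```shell
# Point kubectl (and the VS Code Kubernetes extension) at your namespace
kubectl config set-context --current --namespace=<your_namespace>

# Verify you can see your pods
kubectl get pods
```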
Build better containers
Familiarize yourself with Docker container best practices.
Use multi-stage builds when necessary.
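As a sketch, a multi-stage build compiles in one image and copies only the artifacts into a small runtime image, keeping the final container lean (the Go toolchain, program, and paths are illustrative):

```dockerfile
# Build stage: full toolchain
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Runtime stage: minimal image with only the compiled binary
FROM alpine:3.19
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```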
Use S3 to store large file collections and access them during builds. Refer to the S3 documentation.
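As an illustrative sketch, a CI job can fetch large files from S3 before building; this assumes your S3 credentials are stored as CI variables and that the bucket name and endpoint variable (both placeholders here) match your S3 setup:

```yaml
fetch-data-job:
  # The AWS CLI image is one option; any image with an S3 client works
  image: amazon/aws-cli
  script:
    - aws s3 cp s3://<your_bucket>/datasets/ data/ --recursive --endpoint-url $S3_ENDPOINT
```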
Other development information
Check out this guide from the Netherlands eScience Center for best practices in developing academic code.