Install custom artifact bundles

Models and other artifacts, like custom agent versions or custom Consul configuration, can be shipped inside auto-deployable bundles. These bundles are Docker images that contain the artifacts to be deployed alongside scripts to deploy them. To create a new bundle or modify an existing one, follow this guide first:

https://getvisibility.atlassian.net/wiki/spaces/GS/pages/65372391/Model+deployment+guide#1.-Create-a-new-model-bundle-or-modify-an-existing-one

The list of all available bundles is in the bundles/ directory of the models-ci project on GitHub.

After the model bundle is published, for example as images.master.k3s.getvisibility.com/models:company-1.0.1, you will have to generate a public link to this image by running the k3s airgap Publish ML models GitHub CI task. The task will ask you for the Docker image URL.
Note: We still use the images.master.k3s.getvisibility.com/models repository because the bundles were originally used only to deploy custom models.
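
If you prefer to trigger the task from the command line instead of the GitHub web UI, the GitHub CLI can dispatch it. This is only a sketch: the repository, the workflow file name (publish-ml-models.yml) and the input name (image) are assumptions, so check the actual workflow definition before using them.

    # Dispatch the publish task via the GitHub CLI (repo, workflow and input names are assumptions)
    gh workflow run publish-ml-models.yml \
      --repo getvisibility/models-ci \
      -f image=images.master.k3s.getvisibility.com/models:company-1.0.1
    # Follow the run; its summary will contain the public download URL
    gh run watch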

Once the task is complete you will get a public URL to download the artifact in the task summary. After that, execute the following commands.

Replace the following variables:
  • $URL with the URL to the model bundle provided by the task
  • $BUNDLE with the name of the artifact, in this case company-1.0.1

    # Download the bundle, unpack it and import it into the k3s containerd image store
    mkdir custom
    wget -O custom/$BUNDLE.tar.gz $URL
    gunzip custom/$BUNDLE.tar.gz
    ctr -n=k8s.io images import custom/$BUNDLE.tar
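
As a quick sanity check, you can confirm that containerd picked up the image before running the deployment job; this simply lists the imported images and filters for the bundle name:

    # The bundle image should now be present in the k8s.io containerd namespace
    ctr -n=k8s.io images ls | grep $BUNDLE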

Now you will need to execute the artifact deployment job. This job unpacks the artifacts from the Docker image into a MinIO bucket inside the on-premises cluster and restarts any services that use them.

Replace the following variables:
  • $GV_DEPLOYER_VERSION with the version of the model deployer available under charts/
  • $BUNDLE_VERSION with the version of the artifact, in this case company-1.0.1

    helm upgrade \
      --install gv-model-deployer charts/gv-model-deployer-$GV_DEPLOYER_VERSION.tgz \
      --wait --timeout=10m0s --kubeconfig /etc/rancher/k3s/k3s.yaml \
      --set models.version="$BUNDLE_VERSION"
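
Before inspecting the logs you can locate the job that the chart launched; the ml-model prefix used in the filter below is taken from the sample output that follows:

    # Find the deployer job and its pod
    kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get jobs,pods | grep ml-model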
You should be able to verify that everything went correctly by checking the logs of the ml-model job that was launched. They should look like this:
root@ip-172-31-9-140:~# kubectl logs -f ml-model-0jvaycku9prx-84nbf
Uploading models
Added `myminio` successfully.
`/models/AIP-1.0.0.zip` -> `myminio/models-data/AIP-1.0.0.zip`
`/models/Commercial-1.0.0.zip` -> `myminio/models-data/Commercial-1.0.0.zip`
`/models/Default-1.0.0.zip` -> `myminio/models-data/Default-1.0.0.zip`
`/models/classifier-6.1.2.zip` -> `myminio/models-data/classifier-6.1.2.zip`
`/models/lm-full-en-2.1.2.zip` -> `myminio/models-data/lm-full-en-2.1.2.zip`
`/models/sec-mapped-1.0.0.zip` -> `myminio/models-data/sec-mapped-1.0.0.zip`
Total: 0 B, Transferred: 297.38 MiB, Speed: 684.36 MiB/s
Restart classifier
deployment.apps/classifier-focus restarted
root@ip-172-31-9-140:~# 

In addition, you can enter the different services that consume these artifacts to check that they have been deployed correctly. For example, for the models you can open a shell inside the classifier containers and inspect the /models directory, or check the models-data bucket inside MinIO. Both should contain the expected models.
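
A minimal sketch of both checks, assuming the classifier runs as the classifier-focus deployment shown in the log above and that the MinIO client (mc) is available with the myminio alias already configured (alias and bucket names are taken from the sample output):

    # List the models inside a running classifier container
    kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml exec deployment/classifier-focus -- ls /models
    # List the contents of the models-data bucket via the MinIO client
    mc ls myminio/models-data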