
Prerequisites for Deploying SGX Use Cases

Beginning in the 5.0 release, Intel SecL-DC is deployed using Helm charts onto a Kubernetes platform. Deployment using binary installers has been deprecated.

Prerequisites for Deployment

  • A Kubernetes cluster (version 1.23.1) must be available. Hosts to be attested should be worker nodes in the Kubernetes cluster. Ensure that kubectl is configured to work from the system where you will execute the deployment. For default installations, a multi-node cluster (at least one controller node and at least one worker node) is recommended. A single-node installation (all Kubernetes services and all workloads running on a single node, as in microk8s) will require manual adjustments to node labels and pod tolerations. Worker nodes to be attested must meet the hardware security technology requirements for the use cases to be enabled.

  • Prerequisite for deploying the SGX Agent:
    The SGX drivers must be installed.
    For Ubuntu 20.04:

    apt-get install build-essential ocaml automake autoconf libtool wget python-is-python3 libssl-dev dkms
    wget https://download.01.org/intel-sgx/latest/dcap-latest/linux/distro/${distro_name}/sgx_linux_x64_driver_${version}.bin  
    chmod +x sgx_linux_x64_driver_${version}.bin
    ./sgx_linux_x64_driver_${version}.bin  
    

Detailed instructions for other operating systems are available at:
https://download.01.org/intel-sgx/latest/dcap-latest/linux/docs/Intel_SGX_SW_Installation_Guide_for_Linux.pdf

  • Helm 3 must be installed on the system from which the deployment will be launched.

    curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
    chmod 700 get_helm.sh
    ./get_helm.sh
  • An NFS server is required with at least 2Gi of storage space available (10Gi is recommended for deployments managing up to 10,000 hosts; 50Gi is recommended for deployments managing up to 100,000 hosts).

  • All SGX container images must be pushed to a container registry using the appropriate version tag (see the tag-and-push sketch after this list).

  • All worker nodes must be labelled according to the security technologies enabled (specifically SGX).

  • Label SGX-enabled worker nodes with node.type=SGX-ENABLED (a verification sketch also follows this list):

    kubectl label nodes <node name> node.type=SGX-ENABLED
    
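As an illustration of the image-push prerequisite, the sketch below tags and pushes one SGX image to a private registry; the image name, version tag, and registry address are hypothetical placeholders, so substitute the images actually built for your release.

# Hypothetical example: tag and push the SGX Caching Service image
docker tag isecl/scs:v5.1.0 <registry IP>:<registry port>/isecl/scs:v5.1.0
docker push <registry IP>:<registry port>/isecl/scs:v5.1.0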

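To confirm the prerequisites above from the deployment system, a minimal verification sketch:

# Verify kubectl can reach the cluster and list its nodes
kubectl get nodes -o wide

# Verify Helm 3 is installed
helm version

# Confirm the SGX label was applied to the intended worker nodes
kubectl get nodes -l node.type=SGX-ENABLED
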
Setting up for Helm deployment

Create a namespace for the Helm deployment, or use an existing one:

kubectl create ns isecl

Log in to the private registry using the client that matches the container runtime installed on the cluster:

  • Docker
    docker login <registry IP> --username <registry username> --password <registry password>
    

(OR)

  • CRI-O
    skopeo login <registry IP> --username <registry username> --password <registry password>
    

Once login succeeds, the credentials are stored in an auth.json file. Use this file to create a secret for pulling/pushing images from/to the registry:

kubectl create secret generic <secret-name> --from-file=.dockerconfigjson=<path to auth.json> --type=kubernetes.io/dockerconfigjson  -n isecl

NOTE:

A typical path for the auth.json file is /run/user/0/containers/auth.json, but the location may vary. Locate the file and provide the correct path when creating the secret.
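
If the file is not at that path, a simple filesystem search can locate it (sketch):

# Search the filesystem for auth.json (may take a while)
find / -name auth.json 2>/dev/null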

Reference the created secret as the imagePullSecret value under the global section of the service/use-case chart's values.yaml file.
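
For illustration, the relevant fragment of values.yaml might look like the following sketch; verify the exact key names and layout against the chart's own values.yaml.

global:
  imagePullSecret: <secret-name>   # the secret created above in the isecl namespace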

Persistent Storage

Several Intel SecL-DC services require persistent storage for databases, configuration settings, and logs. By default the Helm deployments are configured to use NFS storage for these persistent volumes.

On the NFS server, execute the provided setup-nfs.sh script to set up the needed persistent volume folder structure and share the mount points.

curl -fsSL -o setup-nfs.sh https://raw.githubusercontent.com/intel-secl/helm-charts/v5.1.0/setup-nfs.sh
chmod +x setup-nfs.sh
./setup-nfs.sh /mnt/nfs_share 1001 <ip or hostname of the control plane>

All pods running on nodes in the cluster need access to the NFS mount path (e.g., /mnt/nfs_share) via PVCs. To allow this, add the IPs or FQDNs of all nodes in the Kubernetes cluster to the /etc/exports file on the system where the NFS server was installed and configured with the script above. For example, for a 3-node cluster with FQDNs control-plane.com (control plane), node1.com, and node2.com (worker nodes):

cat /etc/exports
/mnt/nfs_share/isecl/        control-plane.com(rw,sync,no_all_squash,root_squash)
/mnt/nfs_share/isecl/        node1.com(rw,sync,no_all_squash,root_squash)
/mnt/nfs_share/isecl/        node2.com(rw,sync,no_all_squash,root_squash)

NOTE:

Whenever a new node joins the cluster, its IP/FQDN must be added to the /etc/exports file on the system where the NFS server is running.

After making changes to the /etc/exports file, run the command below to re-export the shares:

exportfs -arv
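
To confirm that the shares are exported as expected, query them from any cluster node (assuming the showmount client from the NFS utilities is installed):

# List the exports published by the NFS server
showmount -e <ip or hostname of NFS server>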

Default Service and Agent Mount Paths

#Certificate-Management-Service
Config: <NFS-vol-base-path>/isecl/cms/config
Logs: <NFS-vol-base-path>/isecl/cms/logs

#Authentication Authorization Service
Config: <NFS-vol-base-path>/isecl/aas/config
Logs: <NFS-vol-base-path>/isecl/aas/logs
Pg-data: <NFS-vol-base-path>/isecl/aas/db

#SGX Caching Service
Config: <NFS-vol-base-path>/isecl/scs/config
Logs: <NFS-vol-base-path>/isecl/scs/logs
Pg-data: <NFS-vol-base-path>/isecl/scs/db

#SGX Host Verification Service
Config: <NFS-vol-base-path>/isecl/shvs/config
Logs: <NFS-vol-base-path>/isecl/shvs/logs
Pg-data: <NFS-vol-base-path>/isecl/shvs/db

#SGX Quote Verification Service
Config: <NFS-vol-base-path>/isecl/sqvs/config
Logs: <NFS-vol-base-path>/isecl/sqvs/logs

#Integration-Hub
Config: <NFS-vol-base-path>/isecl/ihub/config
Log: <NFS-vol-base-path>/isecl/ihub/logs

#Key-Broker-Service
Config: <NFS-vol-base-path>/isecl/kbs/config
Log: <NFS-vol-base-path>/isecl/kbs/logs
Opt: <NFS-vol-base-path>/isecl/kbs/kbs/opt

#SGX Agent:
Config: /etc/sgx_agent/
Logs:  /var/log/sgx_agent/
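
The Helm charts create the PersistentVolumes that map to the paths above automatically. For illustration only, an NFS-backed PersistentVolume for one of these paths might look like the following sketch; the name, capacity, and access mode here are assumptions rather than the charts' actual values.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: cms-config-pv              # hypothetical name
spec:
  capacity:
    storage: 100Mi                 # assumed size, for illustration
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: <ip or hostname of NFS server>
    path: /mnt/nfs_share/isecl/cms/config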

NOTE: Persistent storage is not deleted when the Helm deployment is uninstalled. If a "fresh" installation is needed, be sure to delete the created folders on the NFS server (/mnt/nfs_share/ by default) as well as the local storage used by the SGX Agent (/etc/sgx_agent and /var/log/sgx_agent on each worker node). Rerun the setup-nfs.sh script after deleting the shared folders to recreate the empty folder structure.

Retrieving Helm charts

The SGX Helm charts are hosted in a Helm repository on GitHub:

helm repo add isecl-helm https://intel-secl.github.io/helm-charts/
helm repo update

The list of available charts in the added repo can be shown using the "search repo" command:

helm search repo isecl-helm --versions

Default Service Ports

The following ports are used for both single-node and multi-node deployments. All services exposing APIs use the ports listed below; "None" indicates the service does not expose an external port.

CMS: 30445
AAS: 30444
IHUB: None
KBS: 30448
K8s-scheduler: 30888
K8s-controller: None
SGX-Agent: None
SCS: 30502
SHVS: 30500
SQVS: 30503
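
Once the services are deployed, the exposed NodePorts can be confirmed by listing the services in the namespace. A quick reachability sketch follows; the CMS endpoint path is an assumption, so consult the product API documentation for the actual routes.

# List services and their NodePorts in the isecl namespace
kubectl get svc -n isecl

# Probe an exposed port, e.g. CMS (endpoint path is an assumption)
curl --insecure https://<control plane IP>:30445/cms/v1/version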