Installing the Intel® SecL Kubernetes Extensions and Integration Hub
Intel® SecL uses Custom Resource Definitions (CRDs) to give Kubernetes the ability to base orchestration decisions on Intel® SecL security attributes. These CRDs allow Kubernetes administrators to configure pods to require specific security attributes so that the Kubernetes Control Plane Node schedules those pods only on Worker Nodes that match the specified attributes.
Two CRDs are required for integration with Intel® SecL: an extension for the Control Plane nodes and a scheduler extension. The extensions are deployed as a Kubernetes deployment in the isecl namespace.
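For illustration only, the sketch below (not part of the Intel® SecL distribution) shows how a pod might require a trust attribute through node affinity; the actual label key depends on the attributes pushed by the Integration Hub and the configured TAG_PREFIX (isecl.trusted is assumed here).
apiVersion: v1
kind: Pod
metadata:
  name: trusted-workload             # hypothetical pod name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: isecl.trusted       # assumed attribute label; depends on TAG_PREFIX and pushed attributes
            operator: In
            values:
            - "true"
  containers:
  - name: app
    image: nginx                     # placeholder image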
Deploy Intel® SecL Custom Controller
#Install skopeo to load the controller and scheduler container images from the archive
dnf install -y skopeo
- Copy isecl-k8s-extensions-*.tar.gz to the Kubernetes Control Plane machine and extract the contents
#Copy
scp /<build_path>/binaries/isecl-k8s-extensions-*.tar.gz <user>@<k8s_controller_machine>:/<path>/
#Extract
tar -xvzf /<path>/isecl-k8s-extensions-*.tar.gz
cd /<path>/isecl-k8s-extensions/
- Create the hostattributes.crd.isecl.intel.com CRD
#1.17<=k8s_version<=1.22
kubectl apply -f yamls/crd-1.17.yaml
- Check whether the CRD is created
kubectl get crds
- Load the isecl-controller container image
cd /<path>/isecl-k8s-extensions/
skopeo copy oci-archive:<isecl-k8s-controller-*.tar> docker://<docker_private_registry_server>:5000/<imageName>:<tagName>
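For example, with a hypothetical private registry at 10.1.1.5:5000 and a controller archive named isecl-k8s-controller-v4.1.0.tar (your file name, registry address, and tag may differ):
skopeo copy oci-archive:isecl-k8s-controller-v4.1.0.tar docker://10.1.1.5:5000/isecl-k8s-controller:v4.1.0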
- Update the image name as above in the controller yaml "/opt/isecl-k8s-extensions/yamls/isecl-controller.yaml"
containers:
  - name: isecl-controller
    image: <docker_private_registry_server>:5000/<imageName>:<tagName>
- Deploy isecl-controller
kubectl apply -f yamls/isecl-controller.yaml
- Check whether the isecl-controller is up and running
kubectl get deploy -n isecl
- Create clusterRoleBinding for ihub to get access to cluster nodes
kubectl create clusterrolebinding isecl-clusterrole --clusterrole=system:node --user=system:serviceaccount:isecl:isecl
- Fetch token required for ihub installation
kubectl get secrets -n isecl
#The below token will be used for ihub installation to update 'KUBERNETES_TOKEN' in ihub.env when configured with Kubernetes Tenant
kubectl describe secret default-token-<name> -n isecl
- Additional optional configurable fields for the isecl-controller configuration in isecl-controller.yaml

Field | Required | Type | Default | Description |
---|---|---|---|---|
LOG_LEVEL | Optional | string | INFO | Determines the log level |
LOG_MAX_LENGTH | Optional | int | 1500 | Determines the maximum length of characters in a line in the log file |
TAG_PREFIX | Optional | string | isecl. | A custom prefix applied to isecl attributes that are pushed from IHub. For example, if the tag prefix is isecl., the trusted attribute in the CRD becomes isecl.trusted. |
TAINT_UNTRUSTED_NODES | Optional | string | false | If set to true, a NoExecute taint is applied to nodes whose trust status is set to false. Applicable only for HVS-based attestation. |
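As a sketch only (assuming these fields are set as container environment variables in isecl-controller.yaml, which is how the defaults above would typically be overridden), the corresponding env section might look like:
env:
  - name: LOG_LEVEL
    value: "INFO"
  - name: LOG_MAX_LENGTH
    value: "1500"
  - name: TAG_PREFIX
    value: "isecl."
  - name: TAINT_UNTRUSTED_NODES
    value: "false"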
Installing the Intel® SecL Integration Hub
Note
The Integration Hub is only required to integrate Intel® SecL with third-party scheduler services, such as OpenStack Nova or Kubernetes. The Hub is not required for usage models that do not require Intel® SecL security attributes to be pushed to an integration endpoint.
Required For
The Hub is REQUIRED for the following use cases.
- Workload Confidentiality (both VMs and Containers)
The Hub is OPTIONAL for the following use cases (used only if orchestration or other integration support is needed):
- Platform Integrity with Data Sovereignty and Signed Flavors
- Application Integrity
Deployment Architecture Considerations for the Hub
A separate Hub instance is REQUIRED for each Cloud environment (also referred to as a Hub "tenant"). For example, if a single datacenter will have an OpenStack cluster and also two separate Kubernetes clusters, a total of three Hub instances must be installed, though additional instances of other Intel SecL services are not required (in the same example, only a single Verification Service is required). Each Hub will manage a single orchestrator environment. Each Hub instance should be installed on a separate VM or physical server.
Prerequisites
The Intel® Security Libraries Integration Hub can be run as a VM or as a bare-metal server. The Hub may be installed on the same server (physical or VM) as the Verification Service.
- The Verification Service must be installed and available
- The Authentication and Authorization Service must be installed and available
- The Certificate Management Service must be installed and available
- (REQUIRED for Kubernetes integration only) The Intel SecL Custom Resource Definitions must be installed and available (see the Integration section for details)
Package Dependencies
The Intel® SecL Integration Hub requires a number of packages and their dependencies. If these are not already installed, the Integration Hub installer attempts to install them automatically using the package manager. Automatic installation requires access to package repositories (the RHEL subscription repositories, the EPEL repository, or a suitable mirror), which may require an Internet connection. If the packages are to be installed from the package repository, be sure to update your repository package lists before installation.
Supported Operating Systems
The Intel Security Libraries Integration Hub supports Red Hat Enterprise Linux 8.4
Recommended Hardware
- 1 vCPU
- RAM: 2 GB
- 1 GB free space to install the Integration Hub services. Additional free space is needed for the Integration Hub database and logs (database and log space requirements depend on the number of managed servers).
- One network interface with network access to the Verification Service.
- One network interface with network access to any integration endpoints (for example, OpenStack Nova).
Installing the Integration Hub
To install the Integration Hub, follow these steps:
- Copy the API Server certificate of the Kubernetes Controller to the /root/ directory of the machine where the Integration Hub will be installed
Note
In most Kubernetes distributions the Kubernetes certificate and key are present under /etc/kubernetes/pki. However, this might differ for some specific Kubernetes distributions.
The KUBERNETES_TOKEN value in ihub.env can be retrieved from Kubernetes using the following command:
kubectl get secrets -n isecl -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}"|base64 --decode
For KUBERNETES_CERT_FILE=/<any_path>/apiserver.crt, any path (for example /root) can be specified; IHUB copies the certificate to the /etc/ihub directory during installation.
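As a minimal example of the certificate copy step (assuming a kubeadm-style layout, as noted above, and root access on the Integration Hub machine; <ihub_machine> is a placeholder):
#On the Kubernetes Controller machine
scp /etc/kubernetes/pki/apiserver.crt root@<ihub_machine>:/root/apiserver.crt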
- Create the ihub.env installation answer file. See the sample file below.
# Authentication URL and service account credentials
AAS_API_URL=https://isecl-aas:8444/aas/v1
IHUB_SERVICE_USERNAME=<Username for the Hub service user>
IHUB_SERVICE_PASSWORD=<Password for the Hub service user>
# CMS URL and CMS webservice TLS hash for server verification
CMS_BASE_URL=https://isecl-cms:8445/cms/v1
CMS_TLS_CERT_SHA384=<TLS hash>
# TLS Configuration
TLS_SAN_LIST=127.0.0.1,192.168.1.1,hub.server.com #comma-separated list of IP addresses and hostnames for the Hub to be used in the Subject Alternative Names list in the TLS Certificate
# Verification Service URL
HVS_BASE_URL=https://isecl-hvs:8443/hvs/v2
ATTESTATION_TYPE=HVS
#Integration tenant type. Currently supported values are "KUBERNETES" or "OPENSTACK"
TENANT=<KUBERNETES or OPENSTACK>
# OpenStack Integration Credentials - required for OpenStack integration only
OPENSTACK_AUTH_URL=<OpenStack Keystone URL; typically http://openstack-ip:5000/>
OPENSTACK_PLACEMENT_URL=<OpenStack Nova API URL; typically http://openstack-ip:8778/>
OPENSTACK_USERNAME=<OpenStack username>
OPENSTACK_PASSWORD=<OpenStack password>
# Kubernetes Integration Credentials - required for Kubernetes integration only
KUBERNETES_URL=https://kubernetes:6443/
KUBERNETES_CRD=custom-isecl
KUBERNETES_CERT_FILE=/root/apiserver.crt
KUBERNETES_TOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6Ik......
# Installation admin bearer token for CSR approval request to CMS - mandatory
BEARER_TOKEN=eyJhbGciOiJSUzM4NCIsImtpZCI6ImE…
For a Kubernetes tenant, deploy the Intel® SecL Custom Controller (see the section above) and set the other relevant tenant configuration options in ihub.env.
- Copy the Integration Hub installation binary to the /root directory and execute the installer binary.
./ihub-v4.1.0.bin
- Copy the /etc/ihub/ihub_public_key.pem to the /<path>/secrets/ directory on the Kubernetes Controller machine
#On K8s-Controller machine
mkdir -p /<path>/secrets
#On IHUB machine, copy
scp /etc/ihub/ihub_public_key.pem <user>@<k8s_controller_machine>:/<path>/secrets/hvs_ihub_public_key.pem
After installation, the Hub must be configured to integrate with a Cloud orchestration platform (for example, OpenStack or Kubernetes). See the Integration section for details.
Deploy Intel® SecL Extended Scheduler
- Install cfssl and cfssljson on the Kubernetes Control Plane
#Install wget
dnf install wget -y
#Download cfssl to /usr/local/bin/
wget -O /usr/local/bin/cfssl http://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x /usr/local/bin/cfssl
#Download cfssljson to /usr/local/bin
wget -O /usr/local/bin/cfssljson http://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x /usr/local/bin/cfssljson
- Create a TLS key-pair for the isecl-scheduler service, signed by the Kubernetes CA certificate and key
cd /<path>/isecl-k8s-extensions/
chmod +x create_k8s_extsched_cert.sh
#Set K8s_CONTROLLER_IP,HOSTNAME
export CONTROLLER_IP=<k8s_machine_ip>
export HOSTNAME=<k8s_machine_hostname>
#Create TLS key-pair
./create_k8s_extsched_cert.sh -n "K8S Extended Scheduler" -s "$CONTROLLER_IP","$HOSTNAME" -c <k8s_ca_authority_cert> -k <k8s_ca_authority_key>
Note
In most Kubernetes distributions the Kubernetes certificate and key are present under /etc/kubernetes/pki. However, this might differ for some specific Kubernetes distributions.
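For example, on a kubeadm-based cluster where the CA certificate and key are under /etc/kubernetes/pki (the IP and hostname values below are illustrative):
export CONTROLLER_IP=192.168.1.10
export HOSTNAME=k8s-controller.example.com
./create_k8s_extsched_cert.sh -n "K8S Extended Scheduler" -s "$CONTROLLER_IP","$HOSTNAME" -c /etc/kubernetes/pki/ca.crt -k /etc/kubernetes/pki/ca.key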
- Copy the generated TLS key-pair to the /<path>/secrets/ directory
cp /<path>/isecl-k8s-extensions/server.key /<path>/secrets/
cp /<path>/isecl-k8s-extensions/server.crt /<path>/secrets/
- Load the isecl-scheduler container image
cd /<path>/isecl-k8s-extensions/
skopeo copy oci-archive:<isecl-k8s-scheduler-*.tar> docker://<docker_private_registry_server>:5000/<imageName>:<tagName>
- Update the image name as above in the scheduler yaml "/opt/isecl-k8s-extensions/yamls/isecl-scheduler.yaml"
containers:
  - name: isecl-scheduler
    image: <docker_private_registry_server>:5000/<imageName>:<tagName>
- Create scheduler-secret for isecl-scheduler
cd /<path>/
kubectl create secret generic scheduler-certs --namespace isecl --from-file=secrets
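To confirm that the secret was created:
kubectl get secret scheduler-certs -n isecl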
- The isecl-scheduler.yaml file includes support for both SGX and Workload Security. For Workload Security-only scenarios, the following value needs to be set to empty in the yaml file. The scheduler and controller yaml files are located under /<path>/isecl-k8s-extensions/yamls
- name: SGX_IHUB_PUBLIC_KEY_PATH
value: ""
- Deploy isecl-scheduler
cd /<path>/isecl-k8s-extensions/
kubectl apply -f yamls/isecl-scheduler.yaml
- Check whether the isecl-scheduler is up and running
kubectl get deploy -n isecl
- Additional optional fields for the isecl-scheduler configuration in isecl-scheduler.yaml

Field | Required | Type | Default | Description |
---|---|---|---|---|
LOG_LEVEL | Optional | string | INFO | Determines the log level |
LOG_MAX_LENGTH | Optional | int | 1500 | Determines the maximum length of characters in a line in the log file |
TAG_PREFIX | Optional | string | isecl. | A custom prefix applied to isecl attributes that are pushed from IHub. For example, if the tag prefix is *isecl.*, the *trusted* attribute in the CRD becomes *isecl.trusted*. |
PORT | Optional | int | 8888 | Intel SecL scheduler service port |
HVS_IHUB_PUBLIC_KEY_PATH | Required | string | | Required for IHub with HVS Attestation |
SGX_IHUB_PUBLIC_KEY_PATH | Required | string | | Required for IHub with SGX Attestation |
TLS_CERT_PATH | Required | string | | Path of the TLS certificate signed by the Kubernetes CA |
TLS_KEY_PATH | Required | string | | Path of the TLS key |
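As a sketch only (assuming these fields are set as container environment variables in isecl-scheduler.yaml; the file paths below are assumptions about where the secrets and IHub public key are mounted, not the shipped defaults):
env:
  - name: PORT
    value: "8888"
  - name: HVS_IHUB_PUBLIC_KEY_PATH
    value: "/opt/isecl-k8s-extensions/hvs_ihub_public_key.pem"   # assumed mount path
  - name: TLS_CERT_PATH
    value: "/opt/isecl-k8s-extensions/server.crt"                # assumed mount path
  - name: TLS_KEY_PATH
    value: "/opt/isecl-k8s-extensions/server.key"                # assumed mount path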
Configuring kube-scheduler to establish communication with isecl-scheduler
Note
The below is a sample when using kubeadm as the Kubernetes distribution; the scheduler configuration files may differ for other Kubernetes distributions.
- Add a mount path to the /etc/kubernetes/manifests/kube-scheduler.yaml file for the Intel SecL scheduler extension:
- mountPath: /<path>/isecl-k8s-extensions/
  name: extendedsched
  readOnly: true
- Add a volume path to the /etc/kubernetes/manifests/kube-scheduler.yaml file for the Intel SecL scheduler extension:
- hostPath:
    path: /<path>/isecl-k8s-extensions/
    type: ""
  name: extendedsched
- Add the policy-config-file path in the /etc/kubernetes/manifests/kube-scheduler.yaml file under the command section:
- command:
- kube-scheduler
- --policy-config-file=/<path>/isecl-k8s-extensions/scheduler-policy.json
- --bind-address=127.0.0.1
- --kubeconfig=/etc/kubernetes/scheduler.conf
- --leader-elect=true
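The scheduler-policy.json referenced above is provided with the isecl-k8s-extensions package. As a rough sketch only (the values below are assumptions, not the shipped file), a kube-scheduler policy file that registers an HTTPS scheduler extender typically looks like:
{
  "kind": "Policy",
  "apiVersion": "v1",
  "extenders": [
    {
      "urlPrefix": "https://127.0.0.1:8888/",
      "filterVerb": "filter",
      "enableHttps": true,
      "weight": 5
    }
  ]
}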
- Restart kubelet
systemctl restart kubelet
Logs will be appended to older logs in /var/log/isecl-k8s-extensions
- Whenever the CRDs are deleted and re-created for updates, re-apply the CRDs using the yaml files present under /opt/isecl-k8s-extensions/yamls/. Kubernetes versions 1.17-1.22 use crd-1.17.yaml
kubectl delete crd hostattributes.crd.isecl.intel.com
kubectl apply -f /opt/isecl-k8s-extensions/yamls/crd-<version>.yaml
- (Optional) Verify that the Intel® SecL K8s extensions have been started:
To verify the Intel SecL CRDs have been deployed:
kubectl get -o json hostattributes.crd.isecl.intel.com
Admission Controller
When a new worker node joins the cluster or is rebooted, the steps below can be used to taint such nodes by initially labelling them as untrusted. Tainting prevents scheduling of any pods on that worker node.
IHUB pulls the data from HVS and pushes it to the ISecL Controller. Based on the node's report status, if the worker node is trusted, that node will be untainted.
Node Joining or Node Rebooted: When a worker node joins or is rebooted in the K8s cluster, the Untrusted:True NoExecute and NoSchedule taints are added to the worker node.
To taint worker nodes when they newly join the K8s cluster, set TAINT_REGISTERED_NODES to "true" in isecl-controller.yml.
To taint worker nodes when they are rebooted in the K8s cluster, set TAINT_REBOOTED_NODES to "true" in isecl-controller.yml.
Note
By default, TAINT_REBOOTED_NODES and TAINT_REGISTERED_NODES are set to false.
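To check whether a worker node currently carries the untrusted taint:
kubectl describe node <node_name> | grep -i taints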
Note
Add .svc and .svc.cluster.local to no_proxy, and then run kubeadm init.
Upload image to registry: The admission controller tar file present in the k8s image folder should be uploaded to the registry, and the image name should be updated in the admission_controller.yaml file.
To bring up the Admission Controller
./isecl-bootstrap.sh up admission-controller
To bring down the Admission Controller
./isecl-bootstrap.sh down admission-controller