diff --git a/README.md b/README.md index 32ebe80b..f64fe596 100644 --- a/README.md +++ b/README.md @@ -2,21 +2,23 @@ Validated pattern for deploying confidential containers on OpenShift using the [Validated Patterns](https://validatedpatterns.io/) framework. -Confidential containers use hardware-backed Trusted Execution Environments (TEEs) to isolate workloads from cluster and hypervisor administrators. This pattern deploys and configures the Red Hat CoCo stack — including the sandboxed containers operator, Trustee (Key Broker Service), and peer-pod infrastructure — on Azure. +Confidential containers use hardware-backed Trusted Execution Environments (TEEs) to isolate workloads from cluster and hypervisor administrators. This pattern deploys and configures the Red Hat CoCo stack — including the sandboxed containers operator, Trustee (Key Broker Service), and peer-pod infrastructure — on Azure and bare metal. ## Topologies -The pattern provides two deployment topologies: +The pattern provides three deployment topologies: -1. **Single cluster** (`simple` clusterGroup) — deploys all components (Trustee, Vault, ACM, sandboxed containers, workloads) in one cluster. This breaks the RACI separation expected in a remote attestation architecture but simplifies testing and demonstrations. +1. **Single cluster** (`simple` clusterGroup) — deploys all components (Trustee, Vault, ACM, sandboxed containers, workloads) in one cluster on Azure. This breaks the RACI separation expected in a remote attestation architecture but simplifies testing and demonstrations. 2. **Multi-cluster** (`trusted-hub` + `spoke` clusterGroups) — separates the trusted zone from the untrusted workload zone: - **Hub** (`trusted-hub`): Runs Trustee (KBS + attestation service), HashiCorp Vault, ACM, and cert-manager. This cluster is the trust anchor. - **Spoke** (`spoke`): Runs the sandboxed containers operator and confidential workloads. The spoke is imported into ACM and managed from the hub. +3. 
**Bare metal** (`baremetal` clusterGroup) — deploys all components on bare metal hardware with Intel TDX or AMD SEV-SNP support. NFD (Node Feature Discovery) auto-detects the CPU architecture and configures the appropriate runtime. Supports SNO (Single Node OpenShift) and multi-node clusters. + The topology is controlled by the `main.clusterGroupName` field in `values-global.yaml`. -Currently supports Azure via peer-pods. Peer-pods provision confidential VMs (`Standard_DCas_v5` family) directly on the Azure hypervisor rather than nesting VMs inside worker nodes. +Azure deployments use peer-pods, which provision confidential VMs (`Standard_DCas_v5` family) directly on the Azure hypervisor. Bare metal deployments use layered images and hardware TEE features directly. ## Current version (4.*) @@ -42,9 +44,21 @@ All previous versions used pre-GA (Technology Preview) releases of Trustee: ### Prerequisites +**Azure deployments:** + - OpenShift 4.17+ cluster on Azure (self-managed via `openshift-install` or ARO) - Azure `Standard_DCas_v5` VM quota in your target region (these are confidential computing VMs and are not available in all regions). See the note below for more details. 
- Azure DNS hosting the cluster's DNS zone + +**Bare metal deployments:** + +- OpenShift 4.17+ cluster on bare metal with Intel TDX or AMD SEV-SNP hardware +- BIOS/firmware configured to enable TDX or SEV-SNP +- Available block devices for LVMS storage (auto-discovered) +- For Intel TDX: an Intel PCS API key from [api.portal.trustedservices.intel.com](https://api.portal.trustedservices.intel.com/) + +**Common:** + - Tools on your workstation: `podman`, `yq`, `jq`, `skopeo` - OpenShift pull secret saved at `~/pull-secret.json` (download from [console.redhat.com](https://console.redhat.com/openshift/downloads)) - Fork the repository — ArgoCD reconciles cluster state against your fork, so changes must be pushed to your remote @@ -53,20 +67,20 @@ All previous versions used pre-GA (Technology Preview) releases of Trustee: These scripts generate the cryptographic material and attestation measurements needed by Trustee and the peer-pod VMs. Run them once before your first deployment. -1. `bash scripts/gen-secrets.sh` — generates KBS key pairs, attestation policy seeds, and copies `values-secret.yaml.template` to `~/values-secret-coco-pattern.yaml` -2. `bash scripts/get-pcr.sh` — retrieves PCR measurements from the peer-pod VM image and stores them at `~/.coco-pattern/measurements.json` (requires `podman`, `skopeo`, and `~/pull-secret.json`) -3. Review and customise `~/values-secret-coco-pattern.yaml` — this file is loaded into Vault and provides secrets to the pattern +1. `bash scripts/gen-secrets.sh` — generates KBS key pairs, PCCS certificates/tokens (for bare metal), and copies `values-secret.yaml.template` to `~/values-secret-coco-pattern.yaml` +2. `bash scripts/get-pcr.sh` — retrieves PCR measurements from the peer-pod VM image and stores them at `~/.coco-pattern/measurements.json` (requires `podman`, `skopeo`, and `~/pull-secret.json`). **Not required for bare metal deployments.** +3. 
Review and customise `~/values-secret-coco-pattern.yaml` — this file is loaded into Vault and provides secrets to the pattern. For bare metal, uncomment the PCCS secrets section and provide your Intel PCS API key. > **Note:** `gen-secrets.sh` will not overwrite existing secrets. Delete `~/.coco-pattern/` if you need to regenerate. -### Single cluster deployment +### Single cluster deployment (Azure) 1. Set `main.clusterGroupName: simple` in `values-global.yaml` 2. Ensure your Azure configuration is populated in `values-global.yaml` (see `global.azure.*` fields) 3. `./pattern.sh make install` 4. Wait for the cluster to reboot all nodes (the sandboxed containers operator triggers a MachineConfig update). Monitor progress in the ArgoCD UI. -### Multi-cluster deployment +### Multi-cluster deployment (Azure) 1. Set `main.clusterGroupName: trusted-hub` in `values-global.yaml` 2. Deploy the hub cluster: `./pattern.sh make install` @@ -76,6 +90,25 @@ These scripts generate the cryptographic material and attestation measurements n (see [importing a cluster](https://validatedpatterns.io/learn/importing-a-cluster/)) 6. ACM will automatically deploy the `spoke` clusterGroup applications (sandboxed containers, workloads) to the imported cluster +### Bare metal deployment + +1. Set `main.clusterGroupName: baremetal` in `values-global.yaml` +2. Run `bash scripts/gen-secrets.sh` to generate KBS keys and PCCS secrets +3. For Intel TDX: uncomment the PCCS secrets in `~/values-secret-coco-pattern.yaml` and provide your Intel PCS API key +4. `./pattern.sh make install` +5. 
Wait for the cluster to reboot nodes (MachineConfig updates for TDX kernel parameters and vsock) + +The system auto-detects your hardware: + +- **NFD** discovers Intel TDX or AMD SEV-SNP capabilities and labels nodes +- **LVMS** auto-discovers available block devices for storage +- **RuntimeClass** `kata-cc` is created automatically pointing to the correct handler (`kata-tdx` or `kata-snp`) +- Both `kata-tdx` and `kata-snp` RuntimeClasses are deployed; only the one matching your hardware has schedulable nodes +- MachineConfigs are deployed for both `master` and `worker` roles (safe on SNO where only master exists) +- PCCS and QGS services deploy unconditionally; DaemonSets only schedule on Intel nodes via NFD labels + +Optional: pin PCCS to a specific node with `bash scripts/get-pccs-node.sh` and set `baremetal.pccs.nodeSelector` in the baremetal chart values. + ## Sample applications Two sample applications are deployed on the cluster running confidential workloads (the single cluster in `simple` mode, or the spoke in multi-cluster mode): diff --git a/ansible/detect-runtime-class.yaml b/ansible/detect-runtime-class.yaml new file mode 100644 index 00000000..a3618ecb --- /dev/null +++ b/ansible/detect-runtime-class.yaml @@ -0,0 +1,56 @@ +- name: Detect and configure runtime class + hosts: localhost + connection: local + gather_facts: false + tasks: + - name: Check for Intel TDX nodes + kubernetes.core.k8s_info: + api_version: v1 + kind: Node + label_selectors: + - intel.feature.node.kubernetes.io/tdx=true + register: tdx_nodes + + - name: Check for AMD SEV-SNP nodes + kubernetes.core.k8s_info: + api_version: v1 + kind: Node + label_selectors: + - amd.feature.node.kubernetes.io/snp=true + register: snp_nodes + + - name: Set runtime handler for Intel TDX + ansible.builtin.set_fact: + kata_handler: "kata-tdx" + kata_overhead: + memory: "350Mi" + cpu: "250m" + tdx.intel.com/keys: "1" + kata_node_selector: + intel.feature.node.kubernetes.io/tdx: "true" + when: 
tdx_nodes.resources | length > 0 + + - name: Set runtime handler for AMD SEV-SNP + ansible.builtin.set_fact: + kata_handler: "kata-snp" + kata_overhead: + memory: "350Mi" + cpu: "250m" + kata_node_selector: + amd.feature.node.kubernetes.io/snp: "true" + when: snp_nodes.resources | length > 0 + + - name: Create kata-cc RuntimeClass + kubernetes.core.k8s: + state: present + definition: + apiVersion: node.k8s.io/v1 + kind: RuntimeClass + metadata: + name: kata-cc + handler: "{{ kata_handler }}" + overhead: + podFixed: "{{ kata_overhead }}" + scheduling: + nodeSelector: "{{ kata_node_selector }}" + when: kata_handler is defined diff --git a/charts/all/baremetal/Chart.yaml b/charts/all/baremetal/Chart.yaml new file mode 100644 index 00000000..33940799 --- /dev/null +++ b/charts/all/baremetal/Chart.yaml @@ -0,0 +1,9 @@ +apiVersion: v2 +description: Bare metal platform configuration (NFD rules, MachineConfigs, RuntimeClasses, Intel device plugin). +keywords: +- pattern +- upstream +- sandbox +- baremetal +name: baremetal +version: 0.0.1 diff --git a/charts/all/baremetal/bm-kernel-params.yaml b/charts/all/baremetal/bm-kernel-params.yaml new file mode 100644 index 00000000..86f01791 --- /dev/null +++ b/charts/all/baremetal/bm-kernel-params.yaml @@ -0,0 +1,2 @@ +[hypervisor.qemu] +kernel_params="agent.aa_kbc_params=cc_kbc::http://kbs-trustee-operator-system.{{ .Values.global.hubClusterDomain }}" diff --git a/charts/all/baremetal/templates/kata-nfd.yaml b/charts/all/baremetal/templates/kata-nfd.yaml new file mode 100644 index 00000000..6196e02c --- /dev/null +++ b/charts/all/baremetal/templates/kata-nfd.yaml @@ -0,0 +1,80 @@ +apiVersion: nfd.openshift.io/v1alpha1 +kind: NodeFeatureRule +metadata: + name: consolidated-hardware-features + namespace: openshift-nfd +spec: + rules: + - name: "runtime.kata" + labels: + feature.node.kubernetes.io/runtime.kata: "true" + matchAny: + - matchFeatures: + - feature: cpu.cpuid + matchExpressions: + SSE42: { op: Exists } + VMX: { op: Exists 
} + - feature: kernel.loadedmodule + matchExpressions: + kvm: { op: Exists } + kvm_intel: { op: Exists } + - matchFeatures: + - feature: cpu.cpuid + matchExpressions: + SSE42: { op: Exists } + SVM: { op: Exists } + - feature: kernel.loadedmodule + matchExpressions: + kvm: { op: Exists } + kvm_amd: { op: Exists } + + - name: "amd.sev-snp" + labels: + amd.feature.node.kubernetes.io/snp: "true" + extendedResources: + sev-snp.amd.com/esids: "@cpu.security.sev.encrypted_state_ids" + matchFeatures: + - feature: cpu.cpuid + matchExpressions: + SVM: { op: Exists } + - feature: cpu.security + matchExpressions: + sev.snp.enabled: { op: Exists } + + - name: "intel.sgx" + labels: + intel.feature.node.kubernetes.io/sgx: "true" + extendedResources: + sgx.intel.com/epc: "@cpu.security.sgx.epc" + matchFeatures: + - feature: cpu.cpuid + matchExpressions: + SGX: { op: Exists } + SGXLC: { op: Exists } + - feature: cpu.security + matchExpressions: + sgx.enabled: { op: IsTrue } + - feature: kernel.config + matchExpressions: + X86_SGX: { op: Exists } + + - name: "intel.tdx" + labels: + intel.feature.node.kubernetes.io/tdx: "true" + extendedResources: + tdx.intel.com/keys: "@cpu.security.tdx.total_keys" + matchFeatures: + - feature: cpu.cpuid + matchExpressions: + VMX: { op: Exists } + - feature: cpu.security + matchExpressions: + tdx.enabled: { op: Exists } + + - name: "ibm.se.enabled" + labels: + ibm.feature.node.kubernetes.io/se: "true" + matchFeatures: + - feature: cpu.security + matchExpressions: + se.enabled: { op: IsTrue } diff --git a/charts/all/baremetal/templates/kernel-params-mco.yaml b/charts/all/baremetal/templates/kernel-params-mco.yaml new file mode 100644 index 00000000..1c372551 --- /dev/null +++ b/charts/all/baremetal/templates/kernel-params-mco.yaml @@ -0,0 +1,21 @@ +{{- range list "master" "worker" }} +--- +apiVersion: machineconfiguration.openshift.io/v1 +kind: MachineConfig +metadata: + labels: + machineconfiguration.openshift.io/role: {{ . 
}} + name: 96-kata-kernel-config-{{ . }} + namespace: openshift-machine-config-operator +spec: + config: + ignition: + version: 3.2.0 + storage: + files: + - contents: + source: 'data:text/plain;charset=utf-8;base64,{{ tpl ($.Files.Get "bm-kernel-params.yaml") $ | b64enc }}' + mode: 420 + overwrite: true + path: /etc/kata-containers/snp/config.d/96-kata-kernel-config +{{- end }} diff --git a/charts/all/baremetal/templates/nfd-instance.yaml b/charts/all/baremetal/templates/nfd-instance.yaml new file mode 100644 index 00000000..97ce9ee1 --- /dev/null +++ b/charts/all/baremetal/templates/nfd-instance.yaml @@ -0,0 +1,12 @@ +apiVersion: nfd.openshift.io/v1 +kind: NodeFeatureDiscovery +metadata: + name: nfd-instance + namespace: openshift-nfd +spec: + operand: + image: registry.redhat.io/openshift4/ose-node-feature-discovery-rhel9:v4.20 + imagePullPolicy: Always + servicePort: 12000 + workerConfig: + configData: | diff --git a/charts/all/baremetal/templates/runtimeclass-amd-snp.yaml b/charts/all/baremetal/templates/runtimeclass-amd-snp.yaml new file mode 100644 index 00000000..c59be865 --- /dev/null +++ b/charts/all/baremetal/templates/runtimeclass-amd-snp.yaml @@ -0,0 +1,12 @@ +# apiVersion: node.k8s.io/v1 +# kind: RuntimeClass +# metadata: +# name: kata-snp +# handler: kata-snp +# overhead: +# podFixed: +# memory: "350Mi" +# cpu: "250m" +# scheduling: +# nodeSelector: +# amd.feature.node.kubernetes.io/snp: "true" diff --git a/charts/all/baremetal/templates/runtimeclass-intel-tdx.yaml b/charts/all/baremetal/templates/runtimeclass-intel-tdx.yaml new file mode 100644 index 00000000..f328dc0e --- /dev/null +++ b/charts/all/baremetal/templates/runtimeclass-intel-tdx.yaml @@ -0,0 +1,13 @@ +# apiVersion: node.k8s.io/v1 +# kind: RuntimeClass +# metadata: +# name: kata-tdx +# handler: kata-tdx +# overhead: +# podFixed: +# memory: "350Mi" +# cpu: "250m" +# tdx.intel.com/keys: 1 +# scheduling: +# nodeSelector: +# intel.feature.node.kubernetes.io/tdx: "true" diff --git 
a/charts/all/baremetal/templates/vsock-mco.yaml b/charts/all/baremetal/templates/vsock-mco.yaml new file mode 100644 index 00000000..f8938f62 --- /dev/null +++ b/charts/all/baremetal/templates/vsock-mco.yaml @@ -0,0 +1,24 @@ +{{- range list "master" "worker" }} +--- +apiVersion: machineconfiguration.openshift.io/v1 +kind: MachineConfig +metadata: + labels: + machineconfiguration.openshift.io/role: {{ . }} + name: 99-enable-coco-{{ . }} +spec: + kernelArguments: + - nohibernate +{{- if $.Values.tdx.enabled }} + - kvm_intel.tdx=1 +{{- end }} + config: + ignition: + version: 3.2.0 + storage: + files: + - path: /etc/modules-load.d/vsock.conf + mode: 0644 + contents: + source: data:text/plain;charset=utf-8;base64,dnNvY2stbG9vcGJhY2sK +{{- end }} diff --git a/charts/all/baremetal/values.yaml b/charts/all/baremetal/values.yaml new file mode 100644 index 00000000..3942cb97 --- /dev/null +++ b/charts/all/baremetal/values.yaml @@ -0,0 +1,2 @@ +tdx: + enabled: true diff --git a/charts/all/intel-dcap/Chart.yaml b/charts/all/intel-dcap/Chart.yaml new file mode 100644 index 00000000..06095d7f --- /dev/null +++ b/charts/all/intel-dcap/Chart.yaml @@ -0,0 +1,10 @@ +apiVersion: v2 +description: Intel DCAP services (PCCS and QGS) for TDX remote attestation. 
+keywords: +- pattern +- intel +- tdx +- pccs +- qgs +name: intel-dcap +version: 0.0.1 diff --git a/charts/all/intel-dcap/templates/intel-dpo-sgx.yaml b/charts/all/intel-dcap/templates/intel-dpo-sgx.yaml new file mode 100644 index 00000000..2a7a8fca --- /dev/null +++ b/charts/all/intel-dcap/templates/intel-dpo-sgx.yaml @@ -0,0 +1,11 @@ +apiVersion: deviceplugin.intel.com/v1 +kind: SgxDevicePlugin +metadata: + name: sgxdeviceplugin-sample +spec: + image: registry.connect.redhat.com/intel/intel-sgx-plugin@sha256:f2c77521c6dae6b4db1896a5784ba8b06a5ebb2a01684184fc90143cfcca7bf4 + enclaveLimit: 110 + provisionLimit: 110 + logLevel: 4 + nodeSelector: + intel.feature.node.kubernetes.io/sgx: "true" diff --git a/charts/all/intel-dcap/templates/pccs-deployment.yaml b/charts/all/intel-dcap/templates/pccs-deployment.yaml new file mode 100644 index 00000000..9d5435e2 --- /dev/null +++ b/charts/all/intel-dcap/templates/pccs-deployment.yaml @@ -0,0 +1,69 @@ +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: pccs + namespace: intel-dcap +spec: + replicas: 1 + selector: + matchLabels: + app: pccs + template: + metadata: + labels: + app: pccs + trustedservices.intel.com/cache: pccs + spec: + tolerations: + - effect: NoSchedule + key: node-role.kubernetes.io/master + operator: Exists + serviceAccountName: pccs-service-account + {{- if .Values.baremetal.pccs.nodeSelector }} + nodeSelector: + kubernetes.io/hostname: {{ .Values.baremetal.pccs.nodeSelector }} + {{- end }} + initContainers: + - name: init-seclabel + image: registry.access.redhat.com/ubi9/ubi:9.7-1764578509 + command: [ "sh", "-c", "chcon -Rt container_file_t /var/cache/pccs" ] + volumeMounts: + - name: host-database + mountPath: /var/cache/pccs + securityContext: + runAsUser: 0 + runAsGroup: 0 + privileged: true # Required for chcon to work on host files + containers: + - name: pccs + image: 
registry.redhat.io/openshift-sandboxed-containers/osc-pccs@sha256:de64fc7b13aaa7e466e825d62207f77e7c63a4f9da98663c3ab06abc45f2334d + envFrom: + - secretRef: + name: pccs-secrets + env: + - name: "PCCS_LOG_LEVEL" + value: "info" + - name: "CLUSTER_HTTPS_PROXY" + value: "" + - name: "PCCS_FILL_MODE" + value: "LAZY" + ports: + - containerPort: 8042 + name: pccs-port + volumeMounts: + - name: pccs-tls + mountPath: /opt/intel/pccs/ssl_key + readOnly: true + - name: host-database + mountPath: /var/cache/pccs/ + securityContext: + runAsUser: 0 + volumes: + - name: pccs-tls + secret: + secretName: pccs-tls + - name: host-database + hostPath: + path: /var/cache/pccs/ + type: DirectoryOrCreate diff --git a/charts/all/intel-dcap/templates/pccs-rbac.yaml b/charts/all/intel-dcap/templates/pccs-rbac.yaml new file mode 100644 index 00000000..2122caf0 --- /dev/null +++ b/charts/all/intel-dcap/templates/pccs-rbac.yaml @@ -0,0 +1,49 @@ +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: pccs-service-account + namespace: intel-dcap +--- +apiVersion: security.openshift.io/v1 +kind: SecurityContextConstraints +metadata: + name: pccs-scc + annotations: + kubernetes.io/description: "SCC for Intel DCAP PCCS service requiring privileged access and hostPath volumes" +allowHostDirVolumePlugin: true +allowHostIPC: false +allowHostNetwork: false +allowHostPID: false +allowHostPorts: false +allowPrivilegedContainer: true +allowedCapabilities: +- DAC_OVERRIDE +- SETGID +- SETUID +defaultAddCapabilities: null +fsGroup: + type: RunAsAny +priority: null +readOnlyRootFilesystem: false +requiredDropCapabilities: +- KILL +- MKNOD +- SETPCAP +- SYS_CHROOT +runAsUser: + type: RunAsAny +seLinuxContext: + type: MustRunAs +supplementalGroups: + type: RunAsAny +users: +- system:serviceaccount:intel-dcap:pccs-service-account +volumes: +- configMap +- downwardAPI +- emptyDir +- hostPath +- persistentVolumeClaim +- projected +- secret diff --git a/charts/all/intel-dcap/templates/pccs-secrets-eso.yaml 
b/charts/all/intel-dcap/templates/pccs-secrets-eso.yaml new file mode 100644 index 00000000..5ee91ab9 --- /dev/null +++ b/charts/all/intel-dcap/templates/pccs-secrets-eso.yaml @@ -0,0 +1,37 @@ +--- +apiVersion: "external-secrets.io/v1beta1" +kind: ExternalSecret +metadata: + name: pccs-secrets-eso + namespace: intel-dcap +spec: + refreshInterval: 15s + secretStoreRef: + name: {{ .Values.secretStore.name }} + kind: {{ .Values.secretStore.kind }} + target: + name: pccs-secrets + template: + type: Opaque + data: + PCCS_API_KEY: "{{ "{{ .api_key }}" }}" + PCCS_USER_TOKEN_HASH: "{{ "{{ .user_token_hash }}" }}" + USER_TOKEN: "{{ "{{ .user_token }}" }}" + PCCS_ADMIN_TOKEN_HASH: "{{ "{{ .admin_token_hash }}" }}" + data: + - secretKey: api_key + remoteRef: + key: 'secret/data/hub/pccs' + property: api_key + - secretKey: user_token_hash + remoteRef: + key: 'secret/data/hub/pccs' + property: user_token_hash + - secretKey: user_token + remoteRef: + key: 'secret/data/hub/pccs' + property: user_token + - secretKey: admin_token_hash + remoteRef: + key: 'secret/data/hub/pccs' + property: admin_token_hash diff --git a/charts/all/intel-dcap/templates/pccs-service.yaml b/charts/all/intel-dcap/templates/pccs-service.yaml new file mode 100644 index 00000000..bc83684c --- /dev/null +++ b/charts/all/intel-dcap/templates/pccs-service.yaml @@ -0,0 +1,13 @@ +apiVersion: v1 +kind: Service +metadata: + name: pccs-service + namespace: intel-dcap +spec: + selector: + trustedservices.intel.com/cache: pccs + ports: + - name: pccs + protocol: TCP + port: 8042 + targetPort: pccs-port diff --git a/charts/all/intel-dcap/templates/pccs-tls-eso.yaml b/charts/all/intel-dcap/templates/pccs-tls-eso.yaml new file mode 100644 index 00000000..a7212ae1 --- /dev/null +++ b/charts/all/intel-dcap/templates/pccs-tls-eso.yaml @@ -0,0 +1,24 @@ +--- +apiVersion: "external-secrets.io/v1beta1" +kind: ExternalSecret +metadata: + name: pccs-tls-eso + namespace: intel-dcap +spec: + refreshInterval: 15s + secretStoreRef: 
+ name: {{ .Values.secretStore.name }} + kind: {{ .Values.secretStore.kind }} + target: + name: pccs-tls + template: + type: Opaque + data: + - secretKey: private.pem + remoteRef: + key: 'secret/data/hub/pccs-tls' + property: private_key + - secretKey: file.crt + remoteRef: + key: 'secret/data/hub/pccs-tls' + property: certificate diff --git a/charts/all/intel-dcap/templates/qgs-config-cm.yaml b/charts/all/intel-dcap/templates/qgs-config-cm.yaml new file mode 100644 index 00000000..5745adeb --- /dev/null +++ b/charts/all/intel-dcap/templates/qgs-config-cm.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: qgs-config + namespace: intel-dcap +data: + qgs.conf: | + port = 4050 + number_threads = 4 diff --git a/charts/all/intel-dcap/templates/qgs-ds.yaml b/charts/all/intel-dcap/templates/qgs-ds.yaml new file mode 100644 index 00000000..788e769c --- /dev/null +++ b/charts/all/intel-dcap/templates/qgs-ds.yaml @@ -0,0 +1,88 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: tdx-qgs + namespace: intel-dcap +spec: + selector: + matchLabels: + app: tdx-qgs + template: + metadata: + labels: + app: tdx-qgs + annotations: + sgx.intel.com/quote-provider: tdx-qgs + qcnl-conf: '{"pccs_url": "https://pccs-service:8042/sgx/certification/v4/", "use_secure_cert": false, "pck_cache_expire_hours": 168}' + spec: + serviceAccountName: tdx-qgs-service-account + nodeSelector: + intel.feature.node.kubernetes.io/tdx: 'true' + hostNetwork: true + dnsPolicy: ClusterFirstWithHostNet + initContainers: + - name: platform-registration + image: registry.redhat.io/openshift-sandboxed-containers/osc-tdx-qgs@sha256:86b23461c4eea073f4535a777374a54e934c37ac8c96c6180030f92ebf970524 + restartPolicy: Always + command: [ '/usr/bin/dcap-registration-flow' ] + env: + - name: PCCS_URL + value: "https://pccs-service:8042" + - name: SECURE_CERT + value: 'false' + envFrom: + - secretRef: + name: pccs-secrets + securityContext: + readOnlyRootFilesystem: true + 
allowPrivilegeEscalation: true + privileged: true + capabilities: + drop: + - ALL + add: + - LINUX_IMMUTABLE + volumeMounts: + - name: efivars + mountPath: /sys/firmware/efi/efivars + containers: + - name: tdx-qgs + image: registry.redhat.io/openshift-sandboxed-containers/osc-tdx-qgs@sha256:86b23461c4eea073f4535a777374a54e934c37ac8c96c6180030f92ebf970524 + args: + - -p=4050 + - -n=4 + securityContext: + readOnlyRootFilesystem: true + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + resources: + limits: + sgx.intel.com/epc: "512Ki" + sgx.intel.com/enclave: 1 + sgx.intel.com/provision: 1 + env: + - name: QCNL_CONF_PATH + value: "/run/dcap/qcnl_conf" + - name: XDG_CACHE_HOME + value: "/run/dcap/cache" + volumeMounts: + - name: dcap-qcnl-cache + mountPath: /run/dcap/cache + - name: qcnl-config + mountPath: /run/dcap/ + readOnly: true + volumes: + - name: dcap-qcnl-cache + emptyDir: + sizeLimit: 50Mi + - name: qcnl-config + downwardAPI: + items: + - path: "qcnl_conf" + fieldRef: + fieldPath: metadata.annotations['qcnl-conf'] + - name: efivars + hostPath: + path: /sys/firmware/efi/efivars/ diff --git a/charts/all/intel-dcap/templates/qgs-rbac.yaml b/charts/all/intel-dcap/templates/qgs-rbac.yaml new file mode 100644 index 00000000..2dcb591e --- /dev/null +++ b/charts/all/intel-dcap/templates/qgs-rbac.yaml @@ -0,0 +1,47 @@ +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: tdx-qgs-service-account + namespace: intel-dcap +--- +apiVersion: security.openshift.io/v1 +kind: SecurityContextConstraints +metadata: + name: tdx-qgs-scc + annotations: + kubernetes.io/description: "SCC for Intel TDX Quote Generation Service requiring host network access and SGX devices" +allowHostDirVolumePlugin: true +allowHostIPC: false +allowHostNetwork: true +allowHostPID: false +allowHostPorts: false +allowPrivilegedContainer: true +allowedCapabilities: +- LINUX_IMMUTABLE +defaultAddCapabilities: null +fsGroup: + type: RunAsAny +priority: null +readOnlyRootFilesystem: 
false +requiredDropCapabilities: +- KILL +- MKNOD +- SETPCAP +- SYS_CHROOT +runAsUser: + type: RunAsAny +seLinuxContext: + type: MustRunAs +supplementalGroups: + type: RunAsAny +users: +- system:serviceaccount:intel-dcap:tdx-qgs-service-account +volumes: +- configMap +- downwardAPI +- emptyDir +- hostPath +- persistentVolumeClaim +- projected +- secret diff --git a/charts/all/intel-dcap/templates/qgs-sgx-cm.yaml b/charts/all/intel-dcap/templates/qgs-sgx-cm.yaml new file mode 100644 index 00000000..b715a010 --- /dev/null +++ b/charts/all/intel-dcap/templates/qgs-sgx-cm.yaml @@ -0,0 +1,16 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: sgx-default-qcnl-conf + namespace: intel-dcap +data: + sgx_default_qcnl.conf: | + { + "pccs_url": "https://pccs-service:8042/sgx/certification/v4/", + "use_secure_cert": false, + "retry_times": 6, + "retry_delay": 10, + "pck_cache_expire_hours": 168, + "verify_collateral_cache_expire_hours": 168, + "local_cache_only": false + } diff --git a/charts/all/intel-dcap/values.yaml b/charts/all/intel-dcap/values.yaml new file mode 100644 index 00000000..5910caed --- /dev/null +++ b/charts/all/intel-dcap/values.yaml @@ -0,0 +1,7 @@ +baremetal: + pccs: + nodeSelector: "" # optional: hostname for PCCS pinning. Empty = schedule anywhere with tolerations. + +secretStore: + name: "" + kind: "" diff --git a/charts/coco-supported/hello-openshift/templates/_helpers.tpl b/charts/coco-supported/hello-openshift/templates/_helpers.tpl index 6082b80a..ab49aab2 100644 --- a/charts/coco-supported/hello-openshift/templates/_helpers.tpl +++ b/charts/coco-supported/hello-openshift/templates/_helpers.tpl @@ -51,11 +51,14 @@ app.kubernetes.io/instance: {{ .Release.Name }} {{- end }} {{/* -Determine runtime class name based on cluster platform -Returns "kata-remote" for Azure/AWS, "kata-cc" for other platforms +Determine runtime class name. +If runtimeClassName is explicitly set, use it. 
+Otherwise, detect from cluster platform: "kata-remote" for Azure/AWS, "kata-cc" for other platforms. */}} {{- define "hello-openshift.runtimeClassName" -}} -{{- if or (eq .Values.global.clusterPlatform "Azure") (eq .Values.global.clusterPlatform "AWS") -}} +{{- if .Values.runtimeClassName -}} +{{- .Values.runtimeClassName -}} +{{- else if or (eq .Values.global.clusterPlatform "Azure") (eq .Values.global.clusterPlatform "AWS") -}} kata-remote {{- else -}} kata-cc diff --git a/charts/coco-supported/hello-openshift/values.yaml b/charts/coco-supported/hello-openshift/values.yaml index c9dbbe0a..39f2b9ff 100644 --- a/charts/coco-supported/hello-openshift/values.yaml +++ b/charts/coco-supported/hello-openshift/values.yaml @@ -1,6 +1,11 @@ # Chart-specific values # Common values are inherited from values-global.yaml +# Runtime class for confidential containers. +# When empty, auto-detected from global.clusterPlatform (kata-remote for Azure/AWS, kata-cc otherwise). +# Bare metal: set to "kata-cc" via values-baremetal.yaml overrides. 
+runtimeClassName: "" + # Global values used by this chart (overridden by values-global.yaml) global: clusterPlatform: "" # Cluster platform: "Azure" or "AWS" - determines runtime class diff --git a/charts/coco-supported/kbs-access/templates/secure-pod.yaml b/charts/coco-supported/kbs-access/templates/secure-pod.yaml index 663408bd..bf5d33f8 100644 --- a/charts/coco-supported/kbs-access/templates/secure-pod.yaml +++ b/charts/coco-supported/kbs-access/templates/secure-pod.yaml @@ -7,7 +7,7 @@ metadata: annotations: peerpods: "true" spec: - runtimeClassName: kata-remote + runtimeClassName: {{ .Values.runtimeClassName }} containers: - name: python-access image: ghcr.io/butler54/kbs-access-app:latest diff --git a/charts/coco-supported/kbs-access/values.yaml b/charts/coco-supported/kbs-access/values.yaml index fdaa4d74..975ee21a 100644 --- a/charts/coco-supported/kbs-access/values.yaml +++ b/charts/coco-supported/kbs-access/values.yaml @@ -1,6 +1,11 @@ # Chart-specific values # Common values are inherited from values-global.yaml +# Runtime class for confidential containers. 
+# Azure/AWS peer-pods: kata-remote (default) +# Bare metal: kata-cc (set via values-baremetal.yaml overrides) +runtimeClassName: "kata-remote" + # Global values used by this chart (overridden by values-global.yaml) global: coco: diff --git a/charts/hub/storage/Chart.yaml b/charts/hub/storage/Chart.yaml new file mode 100644 index 00000000..55383ab4 --- /dev/null +++ b/charts/hub/storage/Chart.yaml @@ -0,0 +1,9 @@ +apiVersion: v2 +description: Deploy and configure storage providers (HPP/LVM) for baremetal clusters +keywords: +- pattern +- storage +- hpp +- lvm +name: storage +version: 0.0.2 diff --git a/charts/hub/storage/templates/hostpathprovisioner.yaml b/charts/hub/storage/templates/hostpathprovisioner.yaml new file mode 100644 index 00000000..909e24e5 --- /dev/null +++ b/charts/hub/storage/templates/hostpathprovisioner.yaml @@ -0,0 +1,11 @@ +{{- if eq .Values.global.storageProvider "hpp" }} +apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 +kind: HostPathProvisioner +metadata: + name: hostpath-provisioner +spec: + imagePullPolicy: IfNotPresent + storagePools: + - name: local + path: {{ .Values.hpp.storagePools.path | default "/var/hpvolumes" }} +{{- end }} diff --git a/charts/hub/storage/templates/hpp-storageclass.yaml b/charts/hub/storage/templates/hpp-storageclass.yaml new file mode 100644 index 00000000..7c86c889 --- /dev/null +++ b/charts/hub/storage/templates/hpp-storageclass.yaml @@ -0,0 +1,13 @@ +{{- if eq .Values.global.storageProvider "hpp" }} +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: {{ .Values.hpp.storageClass.name | default "hostpath-csi" }} + annotations: + storageclass.kubernetes.io/is-default-class: "true" +provisioner: kubevirt.io.hostpath-provisioner +reclaimPolicy: {{ .Values.hpp.storageClass.reclaimPolicy | default "Delete" }} +volumeBindingMode: {{ .Values.hpp.storageClass.volumeBindingMode | default "WaitForFirstConsumer" }} +parameters: + storagePool: local +{{- end }} diff --git 
a/charts/hub/storage/templates/lvmcluster.yaml b/charts/hub/storage/templates/lvmcluster.yaml new file mode 100644 index 00000000..3a7724bb --- /dev/null +++ b/charts/hub/storage/templates/lvmcluster.yaml @@ -0,0 +1,15 @@ +{{- if eq .Values.global.storageProvider "lvm" }} +apiVersion: lvm.topolvm.io/v1alpha1 +kind: LVMCluster +metadata: + name: {{ .Values.lvmCluster.name | default "lvmcluster" }} + namespace: openshift-storage +spec: + storage: + deviceClasses: + - name: {{ .Values.lvmCluster.deviceClass | default "vg1" }} + thinPoolConfig: + name: thin-pool-1 + sizePercent: 90 + overprovisionRatio: 10 +{{- end }} diff --git a/charts/hub/storage/values.yaml b/charts/hub/storage/values.yaml new file mode 100644 index 00000000..705f5233 --- /dev/null +++ b/charts/hub/storage/values.yaml @@ -0,0 +1,14 @@ +global: + storageProvider: hpp + +lvmCluster: + name: "lvmcluster" + deviceClass: "vg1" + +hpp: + storagePools: + path: /var/hpvolumes + storageClass: + name: hostpath-csi + reclaimPolicy: Delete + volumeBindingMode: WaitForFirstConsumer diff --git a/docs/dell-tdx-configuration.md b/docs/dell-tdx-configuration.md new file mode 100644 index 00000000..ecf078fd --- /dev/null +++ b/docs/dell-tdx-configuration.md @@ -0,0 +1,149 @@ +# Enable Intel TDX on Dell PowerEdge via iDRAC + +This guide provides step-by-step instructions for enabling Intel Trust Domain Extensions (TDX) on Dell PowerEdge servers using the iDRAC console. 
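For fleet or automated deployments, the same BIOS toggles can also be applied through iDRAC's Redfish API instead of the web console. Below is a minimal sketch of a settings payload — the endpoint (`PATCH /redfish/v1/Systems/System.Embedded.1/Bios/Settings`) is standard on iDRAC9, but the attribute names here are illustrative assumptions that vary by BIOS version, so confirm each one against your server's BIOS attribute registry before use:

```json
{
  "Attributes": {
    "NodeInterleave": "Disabled",
    "ProcX2Apic": "Enabled",
    "MemoryEncryption": "MultipleKeys",
    "IntelTdx": "Enabled",
    "IntelSgx": "On"
  }
}
```

Note that Redfish queues BIOS changes until the next reset, and the dependent settings described below still require a reboot between stages — so split the attributes into separate PATCH requests following the same order rather than sending them all at once.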
+ +## Prerequisites + +- Dell 16th Generation PowerEdge server: + - PowerEdge R660, R660xs + - PowerEdge R760, R760xs, R760xd2, R760XA + - PowerEdge R860, R960 + - PowerEdge XE8640, XE9640, XE9680 + - PowerEdge C6620, MX760c + - PowerEdge XR5610, XR7620, XR8610t, XR8620t + - PowerEdge T360, T560 +- 5th Gen Intel Xeon Scalable processor with TDX support +- **8 or 16 DIMMs per socket** (required memory configuration) +- Latest BIOS firmware installed + +## Step-by-Step Instructions (Order Matters) + +> **IMPORTANT:** Settings must be configured in this exact order. Some options (like "Multiple Keys") will be greyed out until prerequisite settings are applied. You may need to **save and reboot between steps** for dependent options to become available. + +### 1. Access BIOS Setup via iDRAC + +1. Log into the iDRAC web console +2. Navigate to **Configuration → BIOS Settings** +3. Alternatively, launch **Virtual Console** and press **F2** during POST to enter System Setup + +### 2. Configure Memory Settings (FIRST) + +Navigate to: **System BIOS → Memory Settings** + +| Setting | Value | +| ---------------------- | ----------- | +| **Node Interleaving** | Disabled | + +**Save and reboot** before proceeding. + +### 3. Configure Processor Prerequisites (SECOND) + +Navigate to: **System BIOS → Processor Settings** + +| Setting | Value | +| -------------------------------- | -------- | +| **Logical Processor (x2APIC)** | Enabled | +| **CPU Physical Address Limit** | Disabled | + +**Save and reboot** before proceeding. + +### 4. Enable Memory Encryption - Multiple Keys (THIRD) + +Navigate to: **System BIOS → System Security** + +| Setting | Value | +| --------------------- | -------------- | +| **Memory Encryption** | Multiple Keys | + +> If "Multiple Keys" is still greyed out, verify steps 2 and 3 were applied and the system was rebooted. + +**Save and reboot** before proceeding. + +### 5. 
Configure TDX Settings (FOURTH) + +Navigate to: **System BIOS → System Security** (or **Processor Settings** depending on BIOS version) + +| Setting | Value | +| ------------------------------------------------- | ------- | +| **Global Memory Integrity** | Disabled | +| **Intel TDX (Trust Domain Extension)** | Enabled | +| **TME-MT/TDX Key Split** | 1 | +| **TDX Secure Arbitration Mode Loader (SEAM)** | Enabled | + +### 6. Configure SGX Settings (FIFTH) + +Navigate to: **System BIOS → Processor Settings → Software Guard Extensions (SGX)** + +| Setting | Value | +| -------------------- | ------------------------- | +| **Intel SGX** | Enabled | +| **SGX Factory Reset** | Off | +| **SGX PRMRR Size** | As needed (e.g., 64GB) | + +### 7. Final Save and Reboot + +1. Press **Escape** to exit menus +2. Select **Save Changes and Exit** +3. System will reboot with TDX enabled + +## Configuration Summary (Order of Operations) + +```text +1. Disable Node Interleaving → Save & Reboot +2. Enable x2APIC Mode → Save & Reboot +3. Disable CPU Physical Address Limit → Save & Reboot +4. Set Memory Encryption = Multiple Keys → Save & Reboot +5. Disable Global Memory Integrity +6. Enable Intel TDX +7. Set TME-MT/TDX Key Split = 1 +8. Enable SEAM Loader +9. Enable Intel SGX → Final Save & Reboot +``` + +## Verification + +After the OS boots, verify TDX is enabled: + +```bash +# Check kernel messages for TDX +dmesg | grep -i tdx +# Should show: "virt/tdx: BIOS enabled: private KeyID range: [X, Y)" + +# Check for TDX module +ls /sys/firmware/tdx_seam/ +``` + +## Troubleshooting + +### "Multiple Keys" Option is Greyed Out + +This is typically caused by: + +1. **Node Interleaving is Enabled** - Must be disabled first +2. **x2APIC Mode is Disabled** - Must be enabled first +3. **CPU Physical Address Limit is Enabled** - Must be disabled first +4. **System not rebooted** - Some changes require reboot before dependent options appear +5. 
**Insufficient DIMMs** - Requires 8 or 16 DIMMs per socket + +### Settings Not Available + +If TDX-related settings are not visible: + +1. Ensure BIOS firmware is updated to the latest version +2. Verify your processor supports TDX (5th Gen Xeon Scalable required) +3. Contact Dell support for BIOS with TDX support + +### TDX Not Detected by OS + +If the OS doesn't detect TDX after configuration: + +1. Verify all settings are correctly applied in the order specified +2. Ensure the OS/kernel supports TDX (Linux 6.2+ recommended) +3. Check that Memory Encryption is set to "Multiple Keys" (not "Single Key") + +## References + +- [Dell: Enable Intel TDX on Dell 16G Intel Servers](https://www.dell.com/support/kbdoc/en-us/000226452/enableinteltdxondell16g) +- [Intel TDX Enabling Guide - Hardware Setup](https://cc-enabling.trustedservices.intel.com/intel-tdx-enabling-guide/04/hardware_setup/) +- [Dell Info Hub: Enable Intel TDX in BIOS](https://infohub.delltechnologies.com/en-us/l/securing-ai-workloads-on-dell-poweredge-with-intel-xeon-processors-using-intel-trust-domain-extensions/appendix-b-enable-intel-r-tdx-in-bios/) +- [Linux Kernel TDX Documentation](https://docs.kernel.org/arch/x86/tdx.html) diff --git a/overrides/values-BareMetal.yaml b/overrides/values-BareMetal.yaml new file mode 100644 index 00000000..59cb0ab8 --- /dev/null +++ b/overrides/values-BareMetal.yaml @@ -0,0 +1,2 @@ +# Bare metal platform overrides. +# Storage-specific values moved to overrides/values-storage-*.yaml. diff --git a/overrides/values-None.yaml b/overrides/values-None.yaml new file mode 100644 index 00000000..172b55e3 --- /dev/null +++ b/overrides/values-None.yaml @@ -0,0 +1,2 @@ +# None platform overrides. +# Storage-specific values moved to overrides/values-storage-*.yaml. 
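The storage overrides above pin Vault and object storage to a class that matches `global.storageProvider`. A minimal sketch of that mapping (the `hostpath-csi` and `lvms-vg1` names come from the HPP chart defaults and the LVM override above; `external` defers to the cluster default StorageClass):

```shell
# Map global.storageProvider to the StorageClass the overrides select.
storage_class_for() {
  case "$1" in
    hpp) echo "hostpath-csi" ;;   # charts/hub/storage HPP default
    lvm) echo "lvms-vg1" ;;       # LVMS exposes lvms-<deviceClass>
    external) echo "" ;;          # empty: use the cluster default StorageClass
    *) echo "unknown storageProvider: $1" >&2; return 1 ;;
  esac
}
storage_class_for hpp   # prints "hostpath-csi"
```

This is illustrative only; the real selection happens via `sharedValueFiles` templating, not a script.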
diff --git a/overrides/values-storage-external.yaml b/overrides/values-storage-external.yaml new file mode 100644 index 00000000..72b69f73 --- /dev/null +++ b/overrides/values-storage-external.yaml @@ -0,0 +1,2 @@ +# External storage: uses cluster default StorageClass. +# No vault storageClass override — vault uses whatever default exists. diff --git a/overrides/values-storage-hpp.yaml b/overrides/values-storage-hpp.yaml new file mode 100644 index 00000000..1bd94b8b --- /dev/null +++ b/overrides/values-storage-hpp.yaml @@ -0,0 +1,8 @@ +vault: + server: + dataStorage: + storageClass: hostpath-csi + +global: + objectStorage: + backingStorageClass: "hostpath-csi" diff --git a/overrides/values-storage-lvm.yaml b/overrides/values-storage-lvm.yaml new file mode 100644 index 00000000..3c49baed --- /dev/null +++ b/overrides/values-storage-lvm.yaml @@ -0,0 +1,8 @@ +vault: + server: + dataStorage: + storageClass: lvms-vg1 + +global: + objectStorage: + backingStorageClass: "lvms-vg1" diff --git a/scripts/gen-secrets.sh b/scripts/gen-secrets.sh index c487bcac..902d52a2 100755 --- a/scripts/gen-secrets.sh +++ b/scripts/gen-secrets.sh @@ -28,6 +28,32 @@ if [ ! -f "${KBS_PRIVATE_KEY}" ]; then openssl pkey -in "${KBS_PRIVATE_KEY}" -pubout -out "${KBS_PUBLIC_KEY}" fi +## PCCS secrets for bare metal Intel TDX deployments +PCCS_PRIVATE_KEY="${COCO_SECRETS_DIR}/pccs_private.pem" +PCCS_CERTIFICATE="${COCO_SECRETS_DIR}/pccs_certificate.pem" +PCCS_USER_TOKEN_FILE="${COCO_SECRETS_DIR}/pccs_user_token" +PCCS_ADMIN_TOKEN_FILE="${COCO_SECRETS_DIR}/pccs_admin_token" + +if [ ! -f "${PCCS_PRIVATE_KEY}" ]; then + echo "Creating PCCS TLS certificate" + openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \ + -keyout "${PCCS_PRIVATE_KEY}" \ + -out "${PCCS_CERTIFICATE}" \ + -subj "/C=US/ST=Denial/L=Springfield/O=Dis/CN=pccs-service.intel-dcap.svc.cluster.local" +fi + +if [ ! 
-f "${PCCS_USER_TOKEN_FILE}" ]; then + echo "Creating PCCS user token" + echo "usertoken" > "${PCCS_USER_TOKEN_FILE}" +fi +tr -d '\n' < "${PCCS_USER_TOKEN_FILE}" | sha512sum | tr -d '[:space:]-' > "${COCO_SECRETS_DIR}/pccs_user_token_hash" + +if [ ! -f "${PCCS_ADMIN_TOKEN_FILE}" ]; then + echo "Creating PCCS admin token" + echo "admintoken" > "${PCCS_ADMIN_TOKEN_FILE}" +fi +tr -d '\n' < "${PCCS_ADMIN_TOKEN_FILE}" | sha512sum | tr -d '[:space:]-' > "${COCO_SECRETS_DIR}/pccs_admin_token_hash" + ## Copy a sample values file if this stuff doesn't exist if [ ! -f "${VALUES_FILE}" ]; then diff --git a/scripts/get-pccs-node.sh b/scripts/get-pccs-node.sh new file mode 100755 index 00000000..32b4313b --- /dev/null +++ b/scripts/get-pccs-node.sh @@ -0,0 +1,10 @@ +#!/usr/bin/env bash +# Detects a node with Intel TDX support for PCCS deployment. +# Usage: bash scripts/get-pccs-node.sh +NODE=$(oc get nodes -l intel.feature.node.kubernetes.io/tdx=true \ + -o jsonpath='{.items[0].metadata.name}' 2>/dev/null) +if [ -z "$NODE" ]; then + echo "ERROR: No TDX-capable nodes found" >&2 + exit 1 +fi +echo "$NODE" diff --git a/values-baremetal.yaml b/values-baremetal.yaml new file mode 100644 index 00000000..2a2980e8 --- /dev/null +++ b/values-baremetal.yaml @@ -0,0 +1,209 @@ +# Bare metal deployment for confidential containers. +# Supports Intel TDX and AMD SEV-SNP via auto-detection (NFD). +# Set main.clusterGroupName: baremetal in values-global.yaml to use.
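`scripts/get-pccs-node.sh` above relies on the NFD label `intel.feature.node.kubernetes.io/tdx=true`. A sketch of the same selection with a stubbed `oc`, so the logic can be exercised without a cluster; the node name `worker-0` is made up:

```shell
# Stub `oc` so the selection logic runs without a cluster.
oc() { echo "worker-0"; }  # stands in for: oc get nodes -l intel.feature.node.kubernetes.io/tdx=true ...
NODE=$(oc get nodes -l intel.feature.node.kubernetes.io/tdx=true \
  -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$NODE" ]; then
  echo "ERROR: No TDX-capable nodes found" >&2
  exit 1
fi
echo "$NODE"  # prints "worker-0" with the stub above
```

On a real cluster the label is only present once NFD has run, so an empty result can also mean NFD has not finished discovery yet.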
+ +clusterGroup: + name: baremetal + isHubCluster: true + namespaces: + - open-cluster-management + - vault + - golang-external-secrets + - openshift-sandboxed-containers-operator + - trustee-operator-system + - cert-manager-operator + - cert-manager + - hello-openshift + - kbs-access + - openshift-cnv + - openshift-storage + - openshift-nfd + - baremetal + - intel-dcap + + subscriptions: + acm: + name: advanced-cluster-management + namespace: open-cluster-management + sandbox: + name: sandboxed-containers-operator + namespace: openshift-sandboxed-containers-operator + source: redhat-operators + channel: stable + installPlanApproval: Manual + csv: sandboxed-containers-operator.v1.11.0 + trustee: + name: trustee-operator + namespace: trustee-operator-system + source: redhat-operators + channel: stable + installPlanApproval: Manual + csv: trustee-operator.v1.0.0 + cert-manager: + name: openshift-cert-manager-operator + namespace: cert-manager-operator + channel: stable-v1 + lvm-operator: + name: lvms-operator + namespace: openshift-storage + source: redhat-operators + channel: stable-4.20 + installPlanApproval: Automatic + cnv: + name: kubevirt-hyperconverged + namespace: openshift-cnv + source: redhat-operators + channel: stable + installPlanApproval: Automatic + nfd: + name: nfd + namespace: openshift-nfd + channel: stable + projects: + - hub + - vault + - trustee + - golang-external-secrets + - sandbox + - workloads + - default + + # Explicitly mention the cluster-state based overrides we plan to use for this pattern. 
+ # We can use self-referential variables because the chart calls the tpl function with these variables defined + sharedValueFiles: + - '/overrides/values-{{ $.Values.global.clusterPlatform }}.yaml' + - '/overrides/values-storage-{{ $.Values.global.storageProvider }}.yaml' + + applications: + acm: + name: acm + namespace: open-cluster-management + project: hub + chart: acm + chartVersion: 0.1.* + + vault: + name: vault + namespace: vault + project: vault + chart: hashicorp-vault + chartVersion: 0.1.* + + secrets-operator: + name: golang-external-secrets + namespace: golang-external-secrets + project: golang-external-secrets + chart: golang-external-secrets + chartVersion: 0.1.* + + trustee: + name: trustee + namespace: trustee-operator-system + project: trustee + chart: trustee + chartVersion: 0.2.* + overrides: + - name: global.coco.secured + value: "true" + - name: global.coco.bypassAttestation + value: "true" + - name: kbs.https.enabled + value: "false" + - name: kbs.secretResources[0].name + value: kbsres1 + - name: kbs.secretResources[0].key + value: secret/data/hub/kbsres1 + - name: kbs.secretResources[1].name + value: passphrase + - name: kbs.secretResources[1].key + value: secret/data/hub/passphrase + + storage: + name: storage + namespace: openshift-storage + project: hub + path: charts/hub/storage + + baremetal: + name: baremetal + namespace: baremetal + project: hub + path: charts/all/baremetal + + sandbox: + name: sandbox + namespace: openshift-sandboxed-containers-operator + project: sandbox + chart: sandboxed-containers + chartVersion: 0.2.* + overrides: + - name: global.secretStore.backend + value: vault + - name: secretStore.name + value: vault-backend + - name: secretStore.kind + value: ClusterSecretStore + - name: enablePeerPods + value: "false" + + + intel-dcap: + name: intel-dcap + namespace: intel-dcap + project: hub + path: charts/all/intel-dcap + overrides: + - name: secretStore.name + value: vault-backend + - name: secretStore.kind + value: 
ClusterSecretStore + + sandbox-policies: + name: sandbox-policies + namespace: openshift-sandboxed-containers-operator + chart: sandboxed-policies + chartVersion: 0.1.* + + kbs-access: + name: kbs-access + namespace: kbs-access + project: workloads + path: charts/coco-supported/kbs-access + overrides: + - name: runtimeClassName + value: "kata-cc" + + hello-openshift: + name: hello-openshift + namespace: hello-openshift + project: workloads + path: charts/coco-supported/hello-openshift + overrides: + - name: runtimeClassName + value: "kata-cc" + + imperative: + # NOTE: We *must* use lists and not hashes. As hashes lose ordering once parsed by helm + # The default schedule is every 10 minutes: imperative.schedule + # Total timeout of all jobs is 1h: imperative.activeDeadlineSeconds + # imagePullPolicy is set to always: imperative.imagePullPolicy + # For additional overrides that apply to the jobs, please refer to + # https://validatedpatterns.io/imperative-actions/#additional-job-customizations + image: ghcr.io/butler54/imperative-container:latest + serviceAccountCreate: true + adminServiceAccountCreate: true + serviceAccountName: imperative-admin-sa + jobs: + - name: install-deps + playbook: ansible/install-deps.yaml + verbosity: -vvv + timeout: 3600 + - name: init-data-gzipper + playbook: ansible/init-data-gzipper.yaml + verbosity: -vvv + timeout: 3600 + # Required for tech preview only. + # - name: detect-runtime-class + # playbook: ansible/detect-runtime-class.yaml + # verbosity: -vvv + # timeout: 600 diff --git a/values-global.yaml b/values-global.yaml index d7c202a3..9e83b312 100644 --- a/values-global.yaml +++ b/values-global.yaml @@ -1,5 +1,6 @@ global: pattern: coco-pattern + storageProvider: hpp # Options: hpp, lvm, external secretStore: # Warning: This must be present even if it is set to none. backend: vault # none, vault, kubernetes @@ -13,6 +14,7 @@ global: coco: securityPolicyFlavour: "insecure" # insecure, signed or reject is expected. 
secured: true # true or false. If true, the cluster will be secured. If false, the cluster will be insecure. + bypassAttestation: false # Set true for bare metal (skips PCR/initdata/RVPS) # Enable SSH key injection into podvm for debugging. Do not enable in production. # Also requires: COCO_ENABLE_SSH_DEBUG=true ./scripts/gen-secrets.sh # and uncommenting the sshKey block in values-secret.yaml.template. @@ -24,7 +26,7 @@ main: # WARNING -# This default configuration uses a single cluster on azure. +# This default configuration uses a single cluster. # It fundamentally violates the separation of duties. - clusterGroupName: simple + clusterGroupName: baremetal multiSourceConfig: enabled: true clusterGroupChartVersion: 0.9.* diff --git a/values-secret.yaml.template b/values-secret.yaml.template index 4ed9d158..c4874d6b 100644 --- a/values-secret.yaml.template +++ b/values-secret.yaml.template @@ -126,3 +126,39 @@ secrets: onMissingValue: generate vaultPolicy: validatedPatternDefaultPolicy + # PCCS secrets for bare metal Intel TDX deployments. + # Uncomment these sections for bare metal deployments. + # Run ./scripts/gen-secrets.sh first to generate tokens and certificates. + # You must provide your Intel PCS API key in the api_key field.
+ # Get an API key from: https://api.portal.trustedservices.intel.com/ + #- name: pccs + # vaultPrefixes: + # - hub + # fields: + # - name: api_key + # value: '' + # - name: user_token_hash + # path: ~/.coco-pattern/pccs_user_token_hash + # - name: user_token + # path: ~/.coco-pattern/pccs_user_token + # - name: admin_token_hash + # path: ~/.coco-pattern/pccs_admin_token_hash + # - name: admin_token + # path: ~/.coco-pattern/pccs_admin_token + # - name: db_username + # value: '' + # onMissingValue: generate + # vaultPolicy: validatedPatternDefaultPolicy + # - name: db_password + # value: '' + # onMissingValue: generate + # vaultPolicy: validatedPatternDefaultPolicy + #- name: pccs-tls + # vaultPrefixes: + # - hub + # fields: + # - name: private_key + # path: ~/.coco-pattern/pccs_private.pem + # - name: certificate + # path: ~/.coco-pattern/pccs_certificate.pem +
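Before wiring the template above into Vault, it can be worth checking that a hand-edited token still matches its stored hash. A hedged sketch of that check, reproducing the digest convention used by `gen-secrets.sh` (SHA-512 hex of the token with no trailing newline), with throwaway files standing in for `~/.coco-pattern`:

```shell
# Illustrative sanity check: the stored hash should match the token file.
# The token file carries a trailing newline, but the digest excludes it.
dir=$(mktemp -d)
echo "usertoken" > "$dir/pccs_user_token"
printf '%s' "usertoken" | sha512sum | tr -d '[:space:]-' > "$dir/pccs_user_token_hash"
recomputed=$(tr -d '\n' < "$dir/pccs_user_token" | sha512sum | tr -d '[:space:]-')
[ "$recomputed" = "$(cat "$dir/pccs_user_token_hash")" ] && echo "pccs user token hash OK"
rm -rf "$dir"
```

If the two disagree, PCCS will reject the token even though Vault holds both values, so this check catches a stale hash before deployment.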