
feat: ec pipeline supports large snapshot specs #3124

Merged
simonbaird merged 1 commit into conforma:main from kasemAlem:STONEINTG-1470 on Mar 5, 2026


Conversation

@kasemAlem
Contributor

Modify script to enable snapshots with large specs

STONEINTG-1470

@qodo-code-review
Contributor

Review Summary by Qodo

EC pipeline supports large snapshot specs via custom resources

✨ Enhancement


Walkthrough

Description
• Support custom resource names as snapshot input in EC pipeline
• Fetch snapshot from Kubernetes cluster when valid CR name provided
• Validate CR names against Kubernetes DNS label rules (≤63 chars, lowercase)
• Maintain backward compatibility with file path and literal content handling
Diagram
flowchart LR
  A["SNAPSHOT input"] --> B{"Is file path?"}
  B -->|Yes| C["Copy file to working snapshot"]
  B -->|No| D{"Valid CR name?"}
  D -->|Yes| E["Fetch CR from cluster with kubectl"]
  D -->|No| F["Treat as literal content"]
  E --> G["Store JSON in working snapshot"]
  C --> H["Process with jq"]
  F --> H
  G --> H
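The branch logic in the diagram can be sketched as a small dispatch helper (a sketch only; `classify_snapshot_input` is a hypothetical name, and the pattern follows the DNS-label rule described above):

```shell
#!/usr/bin/env bash
# Hypothetical helper illustrating the dispatch described in the diagram.
# DNS label: lowercase alphanumerics and hyphens, starting/ending alphanumeric.
VALID_CR_NAME_PATTERN='^[a-z0-9]([-a-z0-9]*[a-z0-9])?$'

classify_snapshot_input() {
  local snapshot="$1"
  if [[ -f "$snapshot" ]]; then
    echo "file"       # copy the file to the working snapshot
  elif [[ ${#snapshot} -le 63 && "$snapshot" =~ $VALID_CR_NAME_PATTERN ]]; then
    echo "cr-name"    # fetch the CR from the cluster with kubectl
  else
    echo "literal"    # treat SNAPSHOT as literal JSON content
  fi
}

classify_snapshot_input "my-snapshot-abc"    # -> cr-name
classify_snapshot_input '{"components":[]}'  # -> literal
```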


File Changes

1. hack/reduce-snapshot.sh ✨ Enhancement +16/-5

Add custom resource name support for snapshot input

• Enhanced SNAPSHOT parameter to accept Kubernetes custom resource names in addition to file paths
• Added validation logic to check if SNAPSHOT matches Kubernetes DNS label rules (lowercase, digits, hyphens, max 63 chars)
• Implemented kubectl get command to fetch custom resource JSON when valid CR name is detected
• Updated documentation comments to clarify SNAPSHOT parameter behavior and new CR name support
• Maintained backward compatibility with existing file path and literal content handling

hack/reduce-snapshot.sh




@qodo-code-review
Contributor

qodo-code-review bot commented Feb 25, 2026

Code Review by Qodo

🐞 Bugs (3) 📘 Rule violations (0) 📎 Requirement gaps (0)



Action required

1. kubectl get args mismatch 🐞 Bug ✓ Correctness
Description
The new cluster-fetch path assumes CUSTOM_RESOURCE is a resource kind and SNAPSHOT is the name, but
existing task docs/usage pass CUSTOM_RESOURCE as kind/name (e.g. pr/somepipeline). When SNAPSHOT
looks like a CR name, this will call kubectl with an extra positional argument and fail.
Code

hack/reduce-snapshot.sh[R47-51]

+elif [[ ${#SNAPSHOT} -le 63 && "$SNAPSHOT" =~ $VALID_CR_NAME_PATTERN ]]; then
+  CR_NAMESPACE_ARG=()
+  [[ -n "${CUSTOM_RESOURCE_NAMESPACE:-}" ]] && CR_NAMESPACE_ARG=(-n "$CUSTOM_RESOURCE_NAMESPACE")
+  kubectl get "$CUSTOM_RESOURCE" "$SNAPSHOT" "${CR_NAMESPACE_ARG[@]}" -o json > "$WORKING_SNAPSHOT" || \
+    { echo "Failed to get custom resource: $SNAPSHOT"; exit 1; }
Evidence
The new code calls kubectl get "$CUSTOM_RESOURCE" "$SNAPSHOT" ..., which only works if
CUSTOM_RESOURCE is a kind and SNAPSHOT is the name. However, the Tekton task parameter and README
explicitly define CUSTOM_RESOURCE as “Name, including kind” (kind/name), and the script’s existing
single-component logic also calls kubectl get "$CUSTOM_RESOURCE" without supplying a separate
name—confirming it expects kind/name in CUSTOM_RESOURCE.

hack/reduce-snapshot.sh[47-51]
tasks/verify-enterprise-contract/0.1/README.md[44-46]
tasks/verify-enterprise-contract/0.1/verify-enterprise-contract.yaml[138-143]
tasks/verify-enterprise-contract/0.1/verify-enterprise-contract.yaml[221-229]
hack/reduce-snapshot.sh[66-69]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`reduce-snapshot.sh` now tries to interpret `SNAPSHOT` as a Kubernetes CR name and fetch it using `kubectl get "$CUSTOM_RESOURCE" "$SNAPSHOT" ...`. This conflicts with existing usage where `CUSTOM_RESOURCE` is passed as `kind/name` (e.g. `pr/somepipeline`), causing `kubectl get` to be invoked with invalid positional arguments.

### Issue Context
- Tekton tasks in this repo document and pass `CUSTOM_RESOURCE` as `kind/name`.
- The script’s single-component mode also expects `CUSTOM_RESOURCE` to already include the name.

### Fix Focus Areas
- hack/reduce-snapshot.sh[42-54]
- tasks/verify-enterprise-contract/0.1/verify-enterprise-contract.yaml[214-234]
- tasks/verify-enterprise-contract/0.1/README.md[44-46]

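One hedged way to resolve the mismatch, sketched below, is to normalize `CUSTOM_RESOURCE` before calling kubectl so both the documented `kind/name` form and a bare kind produce a single consistent positional argument (`kubectl_get_target` is a hypothetical helper, not code from the PR):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: resolve CUSTOM_RESOURCE whether it arrives as a bare
# kind ("snapshot") or as the repo's documented kind/name form
# (e.g. "pr/somepipeline"), so kubectl never receives an extra argument.
kubectl_get_target() {
  local custom_resource="$1" snapshot_name="$2"
  if [[ "$custom_resource" == */* ]]; then
    # Already kind/name: SNAPSHOT must NOT be appended as a second positional arg.
    echo "$custom_resource"
  else
    # Bare kind: combine it with the snapshot name.
    echo "${custom_resource}/${snapshot_name}"
  fi
}

kubectl_get_target "pr/somepipeline" "ignored"  # -> pr/somepipeline
kubectl_get_target "snapshot" "my-snap"         # -> snapshot/my-snap
```

The fetch would then become a single-argument call, e.g. `kubectl get "$(kubectl_get_target "$CUSTOM_RESOURCE" "$SNAPSHOT")" -o json`.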


2. Reduction fails on CR JSON 🐞 Bug ✓ Correctness
Description
When the new path fetches a Snapshot CR via kubectl, the JSON will commonly contain the snapshot
spec under .spec, but the reduction logic assumes .components exists at the root. In
SINGLE_COMPONENT mode this can error (cannot iterate over null) or fail to reduce.
Code

hack/reduce-snapshot.sh[R50-51]

+  kubectl get "$CUSTOM_RESOURCE" "$SNAPSHOT" "${CR_NAMESPACE_ARG[@]}" -o json > "$WORKING_SNAPSHOT" || \
+    { echo "Failed to get custom resource: $SNAPSHOT"; exit 1; }
Evidence
The new code writes the full kubectl JSON output to WORKING_SNAPSHOT. In this repo’s Kubernetes
snapshot CRs, the snapshot spec is wrapped under .spec (not at the root). The reduction logic
later deletes from .components[] and counts .components[], which will not work when components
live under .spec.components.

hack/reduce-snapshot.sh[50-51]
acceptance/kubernetes/stub/stub.go[112-125]
internal/applicationsnapshot/input.go[185-197]
hack/reduce-snapshot.sh[76-81]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
When `SNAPSHOT` is fetched from the cluster, the resulting JSON is typically a Kubernetes CR with the snapshot spec nested at `.spec`. The script’s SINGLE_COMPONENT reduction currently assumes `.components` is at the root, which breaks for CR-shaped input.

### Issue Context
`internal/applicationsnapshot.readSnapshotSource` supports both bare SnapshotSpec and CR-shaped `{spec: ...}` inputs, and tests/stubs in this repo create Snapshot CRs with a `spec` wrapper.

### Fix Focus Areas
- hack/reduce-snapshot.sh[47-54]
- hack/reduce-snapshot.sh[76-88]

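A possible normalization, assuming `jq` is available, is to strip a CR wrapper down to its `.spec` whenever one is present, so the later `.components[]` logic sees the same shape for both bare-spec and CR-shaped input (`normalize_snapshot` is a hypothetical helper):

```shell
#!/usr/bin/env bash
# Hypothetical normalization: reduce either a bare SnapshotSpec or a full CR
# ({metadata, spec, ...}) to the bare spec, so downstream jq logic can rely
# on .components existing at the document root.
normalize_snapshot() {
  jq -c 'if has("spec") then .spec else . end'
}

printf '%s' '{"spec":{"components":[{"name":"a"}]}}' | normalize_snapshot
# -> {"components":[{"name":"a"}]}
printf '%s' '{"components":[]}' | normalize_snapshot
# -> {"components":[]}
```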



Remediation recommended

3. CR-name heuristic too strict 🐞 Bug ⛯ Reliability
Description
The script only treats SNAPSHOT as a CR name when it is a DNS-label (≤63 chars) with no namespace
component, but repo code/tests support snapshot refs like namespace/name. This can cause
namespace/name inputs to be treated as literal JSON and fail validation with a confusing “JSON is
invalid” error.
Code

hack/reduce-snapshot.sh[R43-48]

+# Kubernetes resource names: DNS label, max 63 chars, [a-z0-9]([-a-z0-9]*[a-z0-9])?
+VALID_CR_NAME_PATTERN='^[a-z0-9]([-a-z0-9]*[a-z0-9])?$'
if [[ -f "$SNAPSHOT" ]]; then
  cp "$SNAPSHOT" "$WORKING_SNAPSHOT"
+elif [[ ${#SNAPSHOT} -le 63 && "$SNAPSHOT" =~ $VALID_CR_NAME_PATTERN ]]; then
+  CR_NAMESPACE_ARG=()
Evidence
The new heuristic explicitly matches only DNS-label names. Meanwhile the CLI’s snapshot reference
format supports an optional namespace prefix ([namespace/]name) and unit tests exercise
namespace/name as valid; these refs won’t match the heuristic and will fall through to the
‘literal content’ path.

hack/reduce-snapshot.sh[43-48]
internal/applicationsnapshot/input_test.go[79-86]
internal/kubernetes/client.go[132-136]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`reduce-snapshot.sh` decides whether to fetch a resource from cluster based on a strict DNS-label regex and length, which excludes common snapshot ref formats like `namespace/name` supported elsewhere in the repo.

### Issue Context
The Go CLI supports snapshot references in the form `[namespace/]name`.

### Fix Focus Areas
- hack/reduce-snapshot.sh[43-54]
- internal/kubernetes/client.go[132-136]
- internal/applicationsnapshot/input_test.go[79-86]

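A sketch of a widened heuristic that also accepts the CLI's `[namespace/]name` form (hypothetical names; per-label length checks are elided for brevity):

```shell
#!/usr/bin/env bash
# Hypothetical widening of the heuristic to match the CLI's [namespace/]name
# reference format. Each path segment is still a DNS label (lowercase
# alphanumerics and hyphens, starting/ending alphanumeric).
DNS_LABEL='[a-z0-9]([-a-z0-9]*[a-z0-9])?'
SNAPSHOT_REF_PATTERN="^(${DNS_LABEL}/)?${DNS_LABEL}\$"

is_snapshot_ref() {
  [[ "$1" =~ $SNAPSHOT_REF_PATTERN ]]
}

is_snapshot_ref "my-snap"           && echo "match: name only"
is_snapshot_ref "team-ns/my-snap"   && echo "match: namespace/name"
is_snapshot_ref '{"components":[]}' || echo "no match: literal JSON"
```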



ⓘ The new review experience is currently in Beta.



@kasemAlem
Contributor Author

/retest

@kasemAlem kasemAlem requested a review from dirgim February 25, 2026 19:41
@kasemAlem
Contributor Author

/review

@qodo-code-review
Contributor

qodo-code-review bot commented Feb 25, 2026

PR Reviewer Guide 🔍

(Review updated until commit cdb507a)

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
🧪 No relevant tests
🔒 Security concerns

Context-based cluster access:
The script can fetch a Snapshot from whatever cluster/namespace is configured in the current kubectl context when SNAPSHOT is provided as snapshot/<name>. In CI this is usually controlled, but locally (or in misconfigured jobs) this increases the chance of reading from an unintended cluster/namespace. Consider documenting this prominently and/or adding an explicit namespace override/requirement for the fetch path (or a safety check that errors if the context namespace is empty/unexpected).

Let @robnester-rh know what you think of this process.

⚡ Recommended focus areas for review

Name Validation

The snapshot name validation enforces a DNS label (no dots, max 63 chars). Kubernetes object names are typically DNS subdomains (can include dots, up to 253 chars). Confirm Snapshot CR names in your environments won’t be rejected by this stricter check, or adjust validation to match actual API constraints for Snapshot names.

# misclassifying short literal content as a CR name). Name must be DNS label, ≤63 chars.
VALID_CR_NAME_PATTERN='^[a-z0-9]([-a-z0-9]*[a-z0-9])?$'
if [[ "$SNAPSHOT" =~ ^[sS]napshot/(.+)$ ]]; then
  SNAPSHOT_NAME="${BASH_REMATCH[1]}"
  if [[ ${#SNAPSHOT_NAME} -le 63 && "$SNAPSHOT_NAME" =~ $VALID_CR_NAME_PATTERN ]]; then
    # Guard: warn if CUSTOM_RESOURCE_NAMESPACE is set and differs from context
    # (Snapshot is always fetched from context namespace; mismatch can cause wrong/missing CR).
    if [[ -n "${CUSTOM_RESOURCE_NAMESPACE:-}" ]]; then
      CONTEXT_NS="$(kubectl config view --minify -o jsonpath='{.contexts[0].context.namespace}' 2>/dev/null || true)"
      if [[ -n "$CONTEXT_NS" && "$CUSTOM_RESOURCE_NAMESPACE" != "$CONTEXT_NS" ]]; then
        echo "Warning: Fetching Snapshot from context namespace \"$CONTEXT_NS\"; CUSTOM_RESOURCE_NAMESPACE is \"$CUSTOM_RESOURCE_NAMESPACE\" (not used for fetch). If the Snapshot lives in the latter, fetch may fail or be wrong."
      fi
    fi
    kubectl get snapshot/"${SNAPSHOT_NAME}" -o json | jq .spec > "$WORKING_SNAPSHOT" || \
      { echo "Failed to get Snapshot: $SNAPSHOT_NAME"; exit 1; }
  else
    echo "Invalid snapshot name after snapshot/ prefix: $SNAPSHOT_NAME (must be DNS label, ≤63 chars)"
    exit 1
  fi
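If Snapshot names can in fact be DNS subdomains, the check could be relaxed along these lines (a sketch; `is_valid_k8s_name` is a hypothetical helper matching RFC 1123 subdomain rules):

```shell
#!/usr/bin/env bash
# Hypothetical looser check matching RFC 1123 DNS subdomains, which most
# Kubernetes object names allow: dot-separated DNS labels, up to 253 chars
# total (per-label 63-char limits elided for brevity).
DNS_SUBDOMAIN_PATTERN='^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$'

is_valid_k8s_name() {
  [[ ${#1} -le 253 && "$1" =~ $DNS_SUBDOMAIN_PATTERN ]]
}

is_valid_k8s_name "snapshot-sample.v2" && echo "ok: dots allowed, unlike a DNS label"
```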
Input Normalization

Cluster fetch stores only .spec into the working file, but the file-path branch copies the file as-is. If callers sometimes pass a full Snapshot CR JSON (with .spec) and other times pass just the spec JSON, behavior may diverge. Consider normalizing both paths (e.g., detect/strip to .spec when needed) so downstream jq logic consistently receives the same shape.

    kubectl get snapshot/"${SNAPSHOT_NAME}" -o json | jq .spec > "$WORKING_SNAPSHOT" || \
      { echo "Failed to get Snapshot: $SNAPSHOT_NAME"; exit 1; }
  else
    echo "Invalid snapshot name after snapshot/ prefix: $SNAPSHOT_NAME (must be DNS label, ≤63 chars)"
    exit 1
  fi
elif [[ -f "$SNAPSHOT" ]]; then
  cp "$SNAPSHOT" "$WORKING_SNAPSHOT"
else
  printf "%s" "$SNAPSHOT" > "$WORKING_SNAPSHOT"
fi
Tooling Assumptions

The new cluster-fetch path depends on kubectl access and the current context namespace, and jq being available. Consider failing with a clearer message when kubectl/jq are missing or when the context namespace is unset, to make pipeline/debug failures more actionable.

if [[ "$SNAPSHOT" =~ ^[sS]napshot/(.+)$ ]]; then
  SNAPSHOT_NAME="${BASH_REMATCH[1]}"
  if [[ ${#SNAPSHOT_NAME} -le 63 && "$SNAPSHOT_NAME" =~ $VALID_CR_NAME_PATTERN ]]; then
    # Guard: warn if CUSTOM_RESOURCE_NAMESPACE is set and differs from context
    # (Snapshot is always fetched from context namespace; mismatch can cause wrong/missing CR).
    if [[ -n "${CUSTOM_RESOURCE_NAMESPACE:-}" ]]; then
      CONTEXT_NS="$(kubectl config view --minify -o jsonpath='{.contexts[0].context.namespace}' 2>/dev/null || true)"
      if [[ -n "$CONTEXT_NS" && "$CUSTOM_RESOURCE_NAMESPACE" != "$CONTEXT_NS" ]]; then
        echo "Warning: Fetching Snapshot from context namespace \"$CONTEXT_NS\"; CUSTOM_RESOURCE_NAMESPACE is \"$CUSTOM_RESOURCE_NAMESPACE\" (not used for fetch). If the Snapshot lives in the latter, fetch may fail or be wrong."
      fi
    fi
    kubectl get snapshot/"${SNAPSHOT_NAME}" -o json | jq .spec > "$WORKING_SNAPSHOT" || \
      { echo "Failed to get Snapshot: $SNAPSHOT_NAME"; exit 1; }
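A minimal preflight guard along the lines suggested above (hypothetical helper; the real script would check for `kubectl` and `jq`):

```shell
#!/usr/bin/env bash
# Hypothetical preflight check so missing tooling fails loudly and early,
# instead of surfacing later as a confusing downstream error.
require_tools() {
  local missing=0 tool
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "error: required tool '$tool' not found in PATH" >&2
      missing=1
    fi
  done
  return "$missing"
}

require_tools sh || exit 1   # e.g. require_tools kubectl jq in the real script
```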

@kasemAlem
Contributor Author

/review

@qodo-code-review
Contributor

Persistent review updated to latest commit cd9a62b

@kasemAlem
Contributor Author

/review

@qodo-code-review
Contributor

Persistent review updated to latest commit 941d23c

@kasemAlem
Contributor Author

/review

@qodo-code-review
Contributor

Persistent review updated to latest commit af824d7

@kasemAlem
Contributor Author

We intentionally do not pass -n so the fetch uses the current kubectl context. In Tekton the task runs in the pipeline namespace; locally the context is the workspace. The script comments have been updated to document this so the intended namespace (context namespace) is explicit.

@kasemAlem
Contributor Author

/review

@qodo-code-review
Contributor

Persistent review updated to latest commit e5e1681

@kasemAlem
Contributor Author

/review

@qodo-code-review
Contributor

Persistent review updated to latest commit cdb507a

@github-actions github-actions bot added size: S and removed size: XS labels Mar 1, 2026
@dheerajodha
Contributor

/ok-to-test

@codecov

codecov bot commented Mar 2, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.

Flag Coverage Δ
acceptance 54.84% <ø> (-0.75%) ⬇️
generative 18.16% <ø> (-0.34%) ⬇️
integration 27.00% <ø> (-0.50%) ⬇️
unit 68.65% <ø> (+0.20%) ⬆️

Flags with carried forward coverage won't be shown.
see 4 files with indirect coverage changes


@github-actions github-actions bot added size: XS and removed size: S labels Mar 2, 2026
@kasemAlem kasemAlem requested a review from dirgim March 2, 2026 14:37
Contributor

@dirgim dirgim left a comment


LGTM

simonbaird previously approved these changes Mar 3, 2026
@simonbaird
Member

/ok-to-test

When SNAPSHOT is not a file path, treat it as a custom resource name if it
matches Kubernetes DNS label rules (lowercase, digits, hyphens, ≤63 chars).
In that case fetch the resource with kubectl get and use its JSON as the
working snapshot; otherwise keep the previous behavior of writing SNAPSHOT
literally to the working file.

Signed-off-by: Kasem Alem <kalem@redhat.com>
@dheerajodha
Contributor

/ok-to-test

@dheerajodha
Contributor

/lgtm

@simonbaird simonbaird merged commit 170e6dd into conforma:main Mar 5, 2026
16 checks passed
