Productize team import and lock board lifecycle with E2E#5
Conversation
Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
…label info Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
…Id to sortable data

- Add touch-action: none to .issue-card CSS to prevent browser gesture hijack
- Include stateId in useSortable data so card-to-card drops resolve correctly
- Add IssueCard component tests verifying useSortable data shape

Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
…n, add fields to ISSUE_UPDATE_MUTATION, empty label picker message

- Make mergeIssueWithPreservedComments null-safe using ?? fallback for comments and children fields when mutation response omits them
- Add comments and children fields to ISSUE_UPDATE_MUTATION response in queries.ts for future consistency
- Add 'No labels available' empty-state message in IssueDetailDrawer label picker section
- Add 3 tests: undefined comments/children merge safety, empty labels UI

Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
…an DnD Add MouseSensor and TouchSensor to the useSensors call in BoardPage.tsx so that browser automation tools (which emit classic mouse events rather than pointer events) can trigger drag sessions. Reduce the activation distance constraint from 8 to 5 for more reliable synthetic drags. Add regression tests verifying all three sensor types are registered and that each has a distance activation constraint. Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
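As a rough sketch of the sensor wiring this commit describes (dnd-kit's sensor API; the distance value of 5 comes from the commit message, and the surrounding BoardPage component is omitted — this is an illustrative fragment, not the exact code):

```tsx
import { MouseSensor, PointerSensor, TouchSensor, useSensor, useSensors } from '@dnd-kit/core';

// Inside the board component: register pointer, mouse, and touch sensors so
// both real pointer events and synthetic classic mouse events can start a drag.
const sensors = useSensors(
  useSensor(PointerSensor, { activationConstraint: { distance: 5 } }),
  useSensor(MouseSensor, { activationConstraint: { distance: 5 } }),
  useSensor(TouchSensor, { activationConstraint: { distance: 5 } }),
);
```

The `sensors` value is then passed to the `DndContext` provider.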
Disable file parallelism in the CLI vitest config since multiple test files (verify, issues, import) hit the same shared PostgreSQL database. Running them in parallel causes cross-test contamination where one suite's deleteMany() wipes data that another suite just imported. Additionally, harden resetVerifyImportState to clear ALL legacy mappings (not just verify-prefixed ones) before re-importing, so stale newId references left by other test suites cannot cause the import pipeline's idempotency path to skip recreating prerequisite rows. Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
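For reference, the relevant setting lands in the CLI package's vitest config; a minimal sketch (option name per Vitest's config API, file path assumed):

```ts
// vitest.config.ts (sketch) — suites share one PostgreSQL database,
// so test files must run sequentially to avoid cross-test contamination.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    fileParallelism: false,
  },
});
```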
…pdate Implement all issue-related CLI commands with GraphQL queries/mutations:

- teams list, states list, labels list (table output with id/key/name)
- issues list with --team filter (table: identifier, title, state, assignee)
- issues show with full detail (identifier, title, desc, state, labels, assignee, comments)
- issues create with --title, --team, --description
- issues update with --state, --title, --assignee, --labels (resolves names to IDs)
- All list/show commands support --json flag
- Error handling: 'Issue not found' for nonexistent issues

Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
📝 Walkthrough

Adds containerization, CI and publish workflows, Playwright E2E, extensive web UI features (drag/drop, comments, issue CRUD), server-side deletion and validation tooling (including SON restore), a large CLI expansion (config, team import/export/verify), many tests, and supporting docs/config for local development.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    rect rgba(200,200,255,0.5)
        actor CLI
    end
    rect rgba(200,255,200,0.5)
        participant LinearAPI
    end
    rect rgba(255,200,200,0.5)
        participant Server
    end
    rect rgba(255,255,200,0.5)
        participant FS as Filesystem
    end
    CLI->>LinearAPI: runExport(team, token)
    LinearAPI-->>FS: write export fixture
    CLI->>Server: runImport(exportDir)
    Server->>FS: read export files
    Server->>Server: runImportPipeline (DB writes)
    Server->>Server: runVerify()
    Server-->>CLI: return verification result + summary
    CLI->>FS: write involute-import-summary.json
    alt verification succeeded and keepExport=false
        CLI->>FS: remove exportDir
    else verification failed
        CLI-->>User: report failure (export retained)
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Code Review
This pull request significantly advances the project by implementing the full issue lifecycle, including creation, updates, and deletion for both issues and comments across the web UI and CLI. Key features include team selection persistence, inline editing in the frontend, and HMAC-signed viewer assertions in the shared package. Feedback focuses on improving code portability by replacing hardcoded absolute paths in scripts and documentation with relative ones. Additionally, there are recommendations to refactor duplicated frontend logic into shared hooks, implement pagination for CLI issue listing, and enhance Docker and E2E test maintainability.
```ts
        },
      }
    : undefined,
  first: 100,
```
The fetchIssues function hardcodes first: 100, which means the issues list command will only ever show a maximum of 100 issues for a given team. This is a significant limitation for teams with more issues. The command should support pagination options (e.g., --first, --after) to allow users to view all issues.
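The suggested cursor loop can be sketched roughly as follows, assuming a Relay-style `pageInfo` shape; `fetchPage` and the type names are illustrative stand-ins for the CLI's actual GraphQL call:

```typescript
// Generic cursor pagination: keep requesting pages until the server
// reports there are no further results.
type Page<T> = { nodes: T[]; pageInfo: { hasNextPage: boolean; endCursor: string | null } };

async function fetchAllIssues<T>(
  fetchPage: (first: number, after: string | null) => Promise<Page<T>>,
  first = 100,
): Promise<T[]> {
  const all: T[] = [];
  let after: string | null = null;
  do {
    const page = await fetchPage(first, after);
    all.push(...page.nodes);
    after = page.pageInfo.hasNextPage ? page.pageInfo.endCursor : null;
  } while (after !== null);
  return all;
}
```

A `--first`/`--after` flag pair could then expose a single page, while a `--all` flag drives this loop.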
```ts
const issue = await fetchIssueByIdentifier(issueIdOrIdentifier);

if (!issue) {
  throw new CliError(`Issue not found: ${issueIdOrIdentifier}`);
}
```
The fallback logic in fetchIssueComments appears to be redundant. If the initial client.request for issue(id: issueIdOrIdentifier) returns null, the code proceeds to call fetchIssueByIdentifier(issueIdOrIdentifier), which executes the exact same GraphQL query with the same argument. This second call will also fail, making the fallback ineffective. The logic should be revised to handle the 'not found' case correctly without a redundant API call.
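One way to express the revised logic, sketched with a stand-in `queryIssue` callback rather than the real GraphQL client: fail fast on the first null result instead of re-issuing the identical query.

```typescript
class CliError extends Error {}

// Single lookup, single source of truth: if the query returns null,
// a second identical query cannot succeed, so throw immediately.
async function fetchIssueOrFail<T>(
  identifier: string,
  queryIssue: (id: string) => Promise<T | null>, // stand-in for the GraphQL request
): Promise<T> {
  const issue = await queryIssue(identifier);
  if (issue === null) {
    throw new CliError(`Issue not found: ${identifier}`);
  }
  return issue;
}
```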
```tsx
export function IssuePage() {
  const navigate = useNavigate();
  const { id } = useParams();
  const { data, error, loading } = useQuery<IssuePageQueryData, IssuePageQueryVariables>(ISSUE_PAGE_QUERY, {
    skip: !id,
    variables: {
      id: id ?? '',
    },
  });
  const [runIssueUpdate] = useMutation<IssueUpdateMutationData, IssueUpdateMutationVariables>(
    ISSUE_UPDATE_MUTATION,
  );
  const [runCommentCreate] = useMutation<CommentCreateMutationData, CommentCreateMutationVariables>(
    COMMENT_CREATE_MUTATION,
  );
  const [runIssueDelete] = useMutation<IssueDeleteMutationData, IssueDeleteMutationVariables>(
    ISSUE_DELETE_MUTATION,
  );
  const [runCommentDelete] = useMutation<CommentDeleteMutationData, CommentDeleteMutationVariables>(
    COMMENT_DELETE_MUTATION,
  );
  const [localIssue, setLocalIssue] = useState<IssueSummary | null>(null);
  const [mutationError, setMutationError] = useState<string | null>(null);
  const [isSavingState, setIsSavingState] = useState(false);

  useEffect(() => {
    setLocalIssue(data?.issue ?? null);
  }, [data?.issue]);

  const selectedTeam = useMemo(() => {
    if (!localIssue) {
      return null;
    }

    const teamStates = localIssue.team.states ?? { nodes: [] };

    return {
      id: localIssue.team.id,
      key: localIssue.team.key,
      name: localIssue.team.name ?? localIssue.team.key,
      states: teamStates,
    };
  }, [localIssue]);

  async function persistIssueUpdate(
    issue: IssueSummary,
    input: IssueUpdateMutationVariables['input'],
    applyOptimisticIssue: (current: IssueSummary) => IssueSummary,
  ) {
    const previousIssue = localIssue;
    const nextIssue = applyOptimisticIssue(issue);

    setMutationError(null);
    setIsSavingState(true);
    setLocalIssue(nextIssue);

    try {
      const result = await runIssueUpdate({
        variables: {
          id: issue.id,
          input,
        },
      });

      if (!result.data?.issueUpdate.success || !result.data.issueUpdate.issue) {
        throw new Error('Mutation failed');
      }

      setLocalIssue((currentIssue) =>
        currentIssue
          ? mergeIssueWithPreservedComments(currentIssue, result.data!.issueUpdate.issue!)
          : result.data!.issueUpdate.issue!,
      );
    } catch (mutationIssue) {
      setLocalIssue(previousIssue);
      setMutationError(ERROR_MESSAGE);
      throw mutationIssue;
    } finally {
      setIsSavingState(false);
    }
  }

  async function persistStateChange(issue: IssueSummary, stateId: string) {
    const state = selectedTeam?.states.nodes.find((item) => item.id === stateId) ?? null;

    if (!state || issue.state.id === stateId) {
      return;
    }

    await persistIssueUpdate(issue, { stateId }, (current) => ({
      ...current,
      state,
    }));
  }

  async function persistTitleChange(issue: IssueSummary, title: string) {
    if (issue.title === title) {
      return;
    }

    await persistIssueUpdate(issue, { title }, (current) => ({
      ...current,
      title,
    }));
  }

  async function persistDescriptionChange(issue: IssueSummary, description: string) {
    if ((issue.description ?? '') === description) {
      return;
    }

    await persistIssueUpdate(issue, { description }, (current) => ({
      ...current,
      description,
    }));
  }

  async function persistLabelsChange(issue: IssueSummary, labelIds: string[]) {
    const labels = data?.issueLabels.nodes ?? [];
    const nextLabels = labels.filter((label) => labelIds.includes(label.id));
    const currentLabelIds = issue.labels.nodes.map((label) => label.id).sort();
    const nextLabelIds = [...labelIds].sort();

    if (JSON.stringify(currentLabelIds) === JSON.stringify(nextLabelIds)) {
      return;
    }

    await persistIssueUpdate(issue, { labelIds }, (current) => ({
      ...current,
      labels: {
        nodes: nextLabels,
      },
    }));
  }

  async function persistAssigneeChange(issue: IssueSummary, assigneeId: string | null) {
    if ((issue.assignee?.id ?? null) === assigneeId) {
      return;
    }

    const users = data?.users.nodes ?? [];

    await persistIssueUpdate(issue, { assigneeId }, (current) => ({
      ...current,
      assignee: assigneeId ? users.find((user) => user.id === assigneeId) ?? null : null,
    }));
  }

  async function persistCommentCreate(issue: IssueSummary, body: string) {
    const trimmedBody = body.trim();

    if (!trimmedBody) {
      return;
    }

    setMutationError(null);
    setIsSavingState(true);

    try {
      const result = await runCommentCreate({
        variables: {
          input: {
            issueId: issue.id,
            body: trimmedBody,
          },
        },
      });

      if (!result.data?.commentCreate.success || !result.data.commentCreate.comment) {
        throw new Error('Comment mutation failed');
      }

      setLocalIssue((currentIssue) =>
        currentIssue
          ? {
              ...currentIssue,
              comments: {
                nodes: [...currentIssue.comments.nodes, result.data!.commentCreate.comment!].sort(
                  (left, right) => new Date(left.createdAt).getTime() - new Date(right.createdAt).getTime(),
                ),
              },
            }
          : currentIssue,
      );
    } catch (mutationIssue) {
      setMutationError(ERROR_MESSAGE);
      throw mutationIssue;
    } finally {
      setIsSavingState(false);
    }
  }

  async function persistIssueDelete(issue: IssueSummary) {
    setMutationError(null);
    setIsSavingState(true);

    try {
      const result = await runIssueDelete({
        variables: {
          id: issue.id,
        },
      });

      if (!result.data?.issueDelete.success || !result.data.issueDelete.issueId) {
        throw new Error('Delete issue mutation failed');
      }

      setLocalIssue(null);
      navigate('/');
    } catch {
      setMutationError(ISSUE_DELETE_ERROR_MESSAGE);
      throw new Error(ISSUE_DELETE_ERROR_MESSAGE);
    } finally {
      setIsSavingState(false);
    }
  }

  async function persistCommentDelete(issue: IssueSummary, commentId: string) {
    setMutationError(null);
    setIsSavingState(true);

    try {
      const result = await runCommentDelete({
        variables: {
          id: commentId,
        },
      });

      if (!result.data?.commentDelete.success || !result.data.commentDelete.commentId) {
        throw new Error('Delete comment mutation failed');
      }

      setLocalIssue((currentIssue) =>
        currentIssue
          ? {
              ...currentIssue,
              comments: {
                nodes: currentIssue.comments.nodes.filter((comment) => comment.id !== commentId),
              },
            }
          : currentIssue,
      );
    } catch {
      setMutationError(COMMENT_DELETE_ERROR_MESSAGE);
      throw new Error(COMMENT_DELETE_ERROR_MESSAGE);
    } finally {
      setIsSavingState(false);
    }
  }

  if (error) {
    const errorState = getBoardBootstrapErrorMessage(error);

    return (
      <main className="board-page board-page--state">
        <header className="app-shell__header">
          <div>
            <p className="app-shell__eyebrow">Involute</p>
            <h1>Issue detail</h1>
          </div>
        </header>
        <section className="board-message board-message--error" role="alert">
          <h2>{errorState.title}</h2>
          <p>{errorState.description}</p>
        </section>
      </main>
    );
  }

  if (loading && !data) {
    return (
      <main className="board-page board-page--state">
        <header className="app-shell__header">
          <div>
            <p className="app-shell__eyebrow">Involute</p>
            <h1>Issue detail</h1>
          </div>
        </header>
        <section className="board-message" aria-live="polite">
          Loading issue…
        </section>
      </main>
    );
  }

  if (!localIssue || !selectedTeam) {
    return (
      <main className="board-page board-page--state">
        <header className="app-shell__header">
          <div>
            <p className="app-shell__eyebrow">Involute</p>
            <h1>Issue detail</h1>
          </div>
        </header>
        <section className="board-message">
          <p>Issue not found.</p>
        </section>
      </main>
    );
  }

  return (
    <main className="board-page">
      <header className="app-shell__header">
        <div>
          <p className="app-shell__eyebrow">Involute</p>
          <h1>Issue detail</h1>
        </div>
      </header>

      <IssueDetailDrawer
        issue={localIssue}
        team={selectedTeam}
        labels={data?.issueLabels.nodes ?? []}
        users={data?.users.nodes ?? []}
        savingState={isSavingState}
        errorMessage={mutationError}
        onClose={() => navigate(-1)}
        onStateChange={persistStateChange}
        onTitleSave={persistTitleChange}
        onDescriptionSave={persistDescriptionChange}
        onLabelsChange={persistLabelsChange}
        onAssigneeChange={persistAssigneeChange}
        onCommentCreate={persistCommentCreate}
        onCommentDelete={persistCommentDelete}
        onIssueDelete={persistIssueDelete}
      />
    </main>
  );
}
```
This component duplicates a significant amount of state management and mutation logic (for updates, creates, and deletes of issues and comments) that is also present in BoardPage.tsx. This leads to code duplication, which increases maintenance overhead and the risk of inconsistencies. This logic should be extracted into a shared custom hook (e.g., useIssueManager) or a service that both components can use to manage issue data and interactions.
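The duplicated optimistic-update flow could hang off one framework-agnostic core that a shared hook (such as the suggested `useIssueManager`) wraps; a sketch with illustrative names, not the actual extraction:

```typescript
type Setter<T> = (next: T) => void;

// Shared optimistic-mutation core: show the change immediately, confirm with
// the server result, and revert to the pre-mutation snapshot on failure.
async function persistOptimistically<T>(
  current: T,
  setValue: Setter<T>,
  applyOptimistic: (value: T) => T,
  mutate: () => Promise<T>,
): Promise<T> {
  setValue(applyOptimistic(current));
  try {
    const confirmed = await mutate();
    setValue(confirmed);
    return confirmed;
  } catch (error) {
    setValue(current);
    throw error;
  }
}
```

Both `BoardPage` and `IssuePage` would then only supply their own `setValue` and `mutate` closures, keeping the error/revert semantics in one place.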
```md
- Command: `cd /Users/chris/workspace/Involute/packages/server && pnpm setup:son-validation`
- Equivalent manifest command: `.factory/services.yaml` → `commands.restore_son_validation_data`
- Source export: `/Users/chris/workspace/Involute/.factory/validation/import/user-testing/tmp/import-export-flow/export`
```
This documentation contains hardcoded absolute paths specific to a user's machine (e.g., /Users/chris/...). This makes the documentation and any scripts based on it not portable for other developers. Please replace these with relative paths or placeholders like <project-root> to ensure the documentation is universally usable.
```yaml
setup_web_ui_validation_data: cd /Users/chris/workspace/Involute/packages/server && pnpm setup:web-ui-validation
restore_son_validation_data: cd /Users/chris/workspace/Involute/packages/server && pnpm setup:son-validation
```
The commands defined here use hardcoded absolute paths (e.g., /Users/chris/workspace/Involute/...). This ties the configuration to a specific machine setup and prevents other contributors from using it without modification. These paths should be made relative to the project root or use environment variables to be portable.
```dockerfile
EXPOSE 4200

CMD ["sh", "-lc", "pnpm --filter @involute/server exec prisma db push --skip-generate && if [ \"${SEED_DATABASE:-false}\" = \"true\" ]; then pnpm --filter @involute/server exec prisma db seed; fi && node packages/server/dist/index.js"]
```
The CMD instruction for the server stage contains complex shell logic. For better readability, maintainability, and easier debugging, it's recommended to move this logic into a dedicated entrypoint.sh script. This script can then be called from the CMD or ENTRYPOINT instruction.
CMD ["/app/entrypoint.sh"]
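A sketch of what such an entrypoint script could look like, mirroring the existing CMD logic; the `/app/entrypoint.sh` path follows the suggestion above, and everything else is lifted from the current inline command:

```sh
#!/bin/sh
# /app/entrypoint.sh (sketch): same steps as the inline CMD, easier to read and debug.
set -e

# Apply the schema without regenerating the client.
pnpm --filter @involute/server exec prisma db push --skip-generate

# Optional seeding, controlled by the same SEED_DATABASE flag.
if [ "${SEED_DATABASE:-false}" = "true" ]; then
  pnpm --filter @involute/server exec prisma db seed
fi

# Replace the shell so the node process receives signals directly.
exec node packages/server/dist/index.js
```

Remember to `COPY` the script into the image and mark it executable (`chmod +x`) in the Dockerfile.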
```md
- Keep the compose stack and CI reproducible
- Lock the core board lifecycle down with E2E before the larger UI/UX redesign

See [docs/vision.md](/Users/chris/workspace/Involute/docs/vision.md) and [docs/milestones.md](/Users/chris/workspace/Involute/docs/milestones.md) for the product direction.
```
The links to other documentation files use hardcoded absolute paths from your local filesystem (e.g., /Users/chris/...). These links will be broken for other users and on the web. Please change them to relative paths so they work correctly within the repository (e.g., docs/vision.md).
```diff
-See [docs/vision.md](/Users/chris/workspace/Involute/docs/vision.md) and [docs/milestones.md](/Users/chris/workspace/Involute/docs/milestones.md) for the product direction.
+See [docs/vision.md](./docs/vision.md) and [docs/milestones.md](./docs/milestones.md) for the product direction.
```
```ts
const createdDescription = 'Created from the end-to-end acceptance suite.';
const updatedDescription = 'Updated description from the end-to-end acceptance suite.';

page.on('dialog', (dialog) => dialog.accept());
```
The test automatically accepts all dialogs using page.on('dialog', ...). While convenient, this can make the test less explicit and potentially hide unexpected dialogs. For actions that are expected to trigger a confirmation, like deletions, it's more robust to handle the dialog explicitly within the test step. This makes the test's intent clearer and ensures it's interacting with the correct dialog.
```ts
export const DEFAULT_SON_EXPORT_DIR =
  '/Users/chris/workspace/Involute/.factory/validation/import/user-testing/tmp/import-export-flow/export';
```
```diff
 try {
-  await persistStateChange(issue, targetStateId);
+  // Call persistIssueUpdate directly instead of persistStateChange because
+  // handleDragOver already updated issue.state optimistically, which would
+  // cause persistStateChange to skip the mutation.
+  await persistIssueUpdate(issue, { stateId: targetStateId }, (current) => ({
+    ...current,
+    state: targetState,
+  }));
 } catch {
   // error state already handled
 }
```
🔴 Drag-and-drop mutation failure does not revert the optimistic column move
When a user drags an issue card between columns, handleDragOver (line 574) optimistically moves the issue in localIssues via setLocalIssues. By the time handleDragEnd fires and calls persistIssueUpdate at line 517, React has re-rendered with the moved state. Inside persistIssueUpdate at packages/web/src/routes/BoardPage.tsx:242, const previousIssues = localIssues captures the already-moved state (post-handleDragOver). On mutation error at line 267, setLocalIssues(previousIssues) "reverts" to this already-moved state — effectively a no-op — so the card stays in the wrong column despite the server rejecting the change.
Comparison with correct implementations in the same file
onDragCancel (line 760) correctly resets to data?.issues.nodes ?? []. handleNativeDropIssue (line 539) correctly captures previousIssues before calling setLocalIssues(moveIssueToState(...)) at line 540, so its catch block at line 548 properly reverts. Only the DnD Kit handleDragEnd path has this issue because handleDragOver already mutated localIssues before handleDragEnd runs.
Prompt for agents
The handleDragEnd function at packages/web/src/routes/BoardPage.tsx:485-524 relies on persistIssueUpdate for error recovery, but persistIssueUpdate captures localIssues (line 242) which has already been optimistically modified by handleDragOver (line 574). On mutation failure, the revert at line 267 restores the already-moved state instead of the pre-drag state.
The fix should ensure handleDragEnd can revert to the pre-drag state on error. Two approaches:
1. Mirror handleNativeDropIssue's pattern: capture previousIssues before calling persistIssueUpdate, and wrap the call in a try/catch that reverts to the pre-handleDragOver state (e.g., data?.issues.nodes like onDragCancel does at line 760).
2. Store the pre-drag localIssues snapshot in a ref during onDragStart and use that as the revert target in handleDragEnd's error path.
The current catch block at line 521-523 with the comment 'error state already handled' should be replaced with explicit revert logic using data?.issues.nodes ?? [] or a pre-drag snapshot.
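The second approach can be sketched without React, using a plain closure in place of a ref; `Issue` and the handler names are simplified stand-ins for the BoardPage handlers:

```typescript
type Issue = { id: string; stateId: string };

// Capture the pre-drag list in onDragStart so onDragEnd can revert past any
// optimistic moves applied by onDragOver.
function createDragSession(initial: Issue[], setIssues: (next: Issue[]) => void) {
  let preDragSnapshot: Issue[] = initial;
  let issues = initial;
  const apply = (next: Issue[]) => {
    issues = next;
    setIssues(next);
  };
  return {
    onDragStart() {
      preDragSnapshot = issues; // snapshot before any optimistic mutation
    },
    onDragOver(id: string, stateId: string) {
      apply(issues.map((i) => (i.id === id ? { ...i, stateId } : i)));
    },
    async onDragEnd(persist: () => Promise<void>) {
      try {
        await persist();
      } catch {
        apply(preDragSnapshot); // revert to the true pre-drag state
      }
    },
  };
}
```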
Actionable comments posted: 18
Note: Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
🟡 Minor comments (16)
.env.example-2-2 (1)

2-2: ⚠️ Potential issue | 🟡 Minor — Ensure AUTH_TOKEN is changed before any deployment.

The default `AUTH_TOKEN=changeme-set-your-token` is intentionally weak as a placeholder for local development. While the `changeme-` prefix signals this should be updated, consider adding a comment in this file or in deployment documentation emphasizing that a strong, unique token must be set before production use.

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `.env.example` at line 2, the file contains a weak placeholder `AUTH_TOKEN=changeme-set-your-token`; update the example to include a clear comment instructing users to set a strong, unique token before any deployment (e.g., add a trailing or adjacent comment such as `# REQUIRED: replace AUTH_TOKEN with a strong, unique token before production`) so that the AUTH_TOKEN variable is explicitly highlighted as mandatory to change; ensure the instruction references AUTH_TOKEN to make it visible in the file and deployment docs.

packages/web/src/lib/apollo.tsx-170-197 (1)
170-197: ⚠️ Potential issue | 🟡 Minor — Query param URL persistence creates potential for unintended API redirection.

The `involuteApiUrl` query parameter is persisted to localStorage immediately (line 177), which means a user clicking a link like `?involuteApiUrl=https://attacker.example/graphql` will have all future GraphQL requests redirected until they manually clear localStorage.

While this is primarily a development/debugging feature, consider:

- Adding URL validation (same-origin or allowlist).
- Prompting user confirmation before persisting.
- Documenting the behavior so users are aware of the persistence.
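The first option — validating before persisting — could look roughly like this; the helper name and signature are illustrative:

```typescript
// Only persist an override whose origin matches the page (or an explicit
// allowlist); anything else is rejected before touching localStorage.
function isAllowedGraphqlUrl(candidate: string, pageOrigin: string, allowlist: string[] = []): boolean {
  let parsed: URL;
  try {
    parsed = new URL(candidate);
  } catch {
    return false; // not a valid absolute URL
  }
  if (parsed.protocol !== "https:" && parsed.protocol !== "http:") {
    return false; // reject javascript:, data:, etc.
  }
  return parsed.origin === pageOrigin || allowlist.includes(parsed.origin);
}
```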
🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `packages/web/src/lib/apollo.tsx` around lines 170-197, the code in resolveRuntimeGraphqlUrlOverride currently persists any query param returned by readRuntimeGraphqlUrlQueryParam into window.localStorage using LOCAL_STORAGE_GRAPHQL_URL_KEYS[0], which allows an attacker-supplied ?involuteApiUrl value to permanently redirect GraphQL calls; change this to validate the URL (e.g., same-origin or check against an allowlist) before calling window.localStorage.setItem, and if validation fails do not persist (optionally use the URL for this session only), or prompt the user for confirmation before saving; update resolveRuntimeGraphqlUrlOverride (and any callers) to perform validation via a helper (e.g., isAllowedGraphqlUrl) and only call window.localStorage.setItem when that helper returns true.

README.md-141-141 (1)
141-141: ⚠️ Potential issue | 🟡 Minor — Replace absolute paths with repository-relative paths.

The links to `docs/vision.md` and `docs/milestones.md` use absolute local filesystem paths (`/Users/chris/workspace/Involute/...`) that won't resolve for other users or on GitHub.

Proposed fix:

```diff
-See [docs/vision.md](/Users/chris/workspace/Involute/docs/vision.md) and [docs/milestones.md](/Users/chris/workspace/Involute/docs/milestones.md) for the product direction.
+See [docs/vision.md](docs/vision.md) and [docs/milestones.md](docs/milestones.md) for the product direction.
```

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `README.md` at line 141, replace the absolute filesystem links that reference `/Users/chris/...` with repository-relative links so they work on GitHub and for other users; update the two links pointing to docs/vision.md and docs/milestones.md to use repo-relative paths (e.g., `docs/vision.md` and `docs/milestones.md`) so the references resolve correctly in the repository.

packages/web/src/routes/BacklogPage.tsx-19-27 (1)
19-27: ⚠️ Potential issue | 🟡 Minor — Backlog sort order appears reversed for "newest first."

Line 19 currently sorts identifiers ascending, which generally surfaces older issues first.

Proposed fix:

```diff
-  const identifierComparison = left.identifier.localeCompare(right.identifier, undefined, {
+  const identifierComparison = right.identifier.localeCompare(left.identifier, undefined, {
     numeric: true,
   });
```

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `packages/web/src/routes/BacklogPage.tsx` around lines 19-27, the comparator computes identifierComparison via left.identifier.localeCompare(...), which yields ascending order (oldest first); to support "newest first", invert the comparison when the backlog sort direction is "newest first" (e.g., multiply identifierComparison by -1 or swap the operands) before returning it, and likewise invert the secondary left.title.localeCompare(right.title) comparison when identifierComparison === 0; update the sort callback in BacklogPage to apply the inversion based on the sort-direction flag so "newest first" surfaces newer identifiers first.

.factory/library/web-ui-validation-data-alignment.md-3-3 (1)
3-3: ⚠️ Potential issue | 🟡 Minor — Replace hardcoded absolute path with relative path.

The absolute path `/Users/chris/workspace/Involute/packages/server` is specific to one developer's machine and won't work for others or in CI environments.

📝 Proposed fix:

```diff
-- Run `cd /Users/chris/workspace/Involute/packages/server && pnpm setup:web-ui-validation` before web-ui user-testing reruns.
+- Run `cd packages/server && pnpm setup:web-ui-validation` (from repository root) before web-ui user-testing reruns.
```

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `.factory/library/web-ui-validation-data-alignment.md` at line 3, replace the hardcoded absolute path in the instruction line (`cd /Users/chris/workspace/Involute/packages/server && pnpm setup:web-ui-validation`) with a relative or repo-root-aware command (e.g., `cd packages/server && pnpm setup:web-ui-validation`, or a workspace script such as a `pnpm --filter` invocation) so it works across machines and CI.

.factory/library/son-validation-dataset-restore.md-4-6 (1)
4-6: ⚠️ Potential issue | 🟡 Minor
Replace hardcoded absolute paths with relative paths.

Lines 4 and 6 contain absolute paths specific to one developer's machine. Use relative paths from the repository root for portability.

📝 Proposed fix

```diff
-- Command: `cd /Users/chris/workspace/Involute/packages/server && pnpm setup:son-validation`
+- Command: `cd packages/server && pnpm setup:son-validation` (from repository root)
 - Equivalent manifest command: `.factory/services.yaml` → `commands.restore_son_validation_data`
-- Source export: `/Users/chris/workspace/Involute/.factory/validation/import/user-testing/tmp/import-export-flow/export`
+- Source export: `.factory/validation/import/user-testing/tmp/import-export-flow/export` (relative to repository root)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.factory/library/son-validation-dataset-restore.md around lines 4 - 6, Replace the hardcoded absolute developer-specific paths in the markdown entries (the "Command" line, the "Source export" line, and any manifest reference under "Equivalent manifest command") with relative repository-root paths (e.g., use ./ or repo-root relative paths or variables like ${REPO_ROOT}) so the instructions and export path are portable; update the "Command" to use a relative path to packages/server (e.g., cd ./packages/server && pnpm setup:son-validation) and change the export path to a relative path under .factory (e.g., .factory/validation/import/...) while keeping the "Equivalent manifest command" text unchanged except for using relative notation if it currently contains an absolute path.

.factory/library/user-testing.md (1)
109-109: ⚠️ Potential issue | 🟡 Minor
Sync this cap with the earlier summary table.
This section says Web UI validation is capped at 3 sessions, but the "Per-surface limits" section above still says 4. Please update both places to the same limit.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.factory/library/user-testing.md at line 109, The documentation is inconsistent: the "Per-surface limits" summary table lists Web UI validators as 4 but the later sentence "Concurrency: cap Web UI validators at 3 concurrent sessions..." sets 3; update both places to the same value (make them both 3 to match the observed memory constraint) so the "Per-surface limits" table entry for Web UI validators and the Concurrency sentence are identical.

packages/cli/src/commands/comments.test.ts (1)
451-454: ⚠️ Potential issue | 🟡 Minor
Restore the previous `INVOLUTE_CONFIG_PATH` in `runCli`.

`runCli` overwrites `INVOLUTE_CONFIG_PATH` but always deletes it in `finally`, so any outer test configuration is lost after the helper returns. Capture the original value and restore it for proper isolation.

🧪 Suggested teardown fix

```diff
 async function runCli(args: string[], homeDir: string): Promise<{ exitCode: number; stderr: string; stdout: string }> {
   const originalHome = process.env.HOME;
+  const originalConfigPath = process.env.INVOLUTE_CONFIG_PATH;
   process.env.HOME = homeDir;
   process.env.INVOLUTE_CONFIG_PATH = join(homeDir, 'config.json');
@@
     process.stderr.write = originalStderrWrite;
     process.exitCode = originalExitCode;
     process.env.HOME = originalHome;
-    delete process.env.INVOLUTE_CONFIG_PATH;
+    if (originalConfigPath === undefined) {
+      delete process.env.INVOLUTE_CONFIG_PATH;
+    } else {
+      process.env.INVOLUTE_CONFIG_PATH = originalConfigPath;
+    }
   }
 }
```

Also applies to: 486-487
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/commands/comments.test.ts` around lines 451 - 454, runCli currently overwrites process.env.INVOLUTE_CONFIG_PATH and always deletes it in the finally block, wiping any external test configuration; capture the original value (e.g. const originalInvoluteConfigPath = process.env.INVOLUTE_CONFIG_PATH) before overwriting and in the finally block restore it (if originalInvoluteConfigPath is undefined delete the env var, otherwise set process.env.INVOLUTE_CONFIG_PATH = originalInvoluteConfigPath). Update the same pattern where INVOLUTE_CONFIG_PATH is set (also at the other occurrence around lines 486-487) and ensure runCli's finally restores originalHome and originalInvoluteConfigPath correctly.

.factory/library/user-testing.md (1)
100-100: ⚠️ Potential issue | 🟡 Minor
Update the APP-team note to match the current seed setup.

`packages/server/src/validation-data-setup.ts` now gives `APP` the canonical workflow states, so telling validators that APP has only two states is stale and will misdirect reruns.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.factory/library/user-testing.md at line 100, Update the documentation note that currently claims the APP team has only two workflow states: the seeded data (see the validation-data-setup seeding logic) now gives APP the canonical six states, so change the sentence to reflect that APP has the canonical workflow (like INV) and remove the warning to avoid using APP for six-column validation; keep the guidance about using the INV team key ("INV") and the sample GraphQL query teams(filter: { key: { eq: "INV" } }) for resolving team IDs.

packages/web/src/App.test.tsx (1)
98-102: ⚠️ Potential issue | 🟡 Minor
Restore spies in `afterEach`.

This file creates several `vi.spyOn(...)` and `window.confirm` spies. If one test fails before its manual `mockRestore()`, later tests inherit mocked globals; add `vi.restoreAllMocks()` to teardown.

🧹 Suggested teardown hardening

```diff
 afterEach(() => {
+  vi.restoreAllMocks();
   document.body.innerHTML = '';
   window.localStorage.clear();
   window.history.replaceState({}, '', '/');
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/web/src/App.test.tsx` around lines 98 - 102, Add a global restore in the test teardown: inside the existing afterEach block (the one that clears document.body, localStorage and history) call vi.restoreAllMocks() so any vi.spyOn(...) or window.confirm mocks are restored even if a test fails; ensure the restore is added alongside the existing cleanup to avoid leaking spies between tests.

.factory/library/user-testing.md (1)
103-103: ⚠️ Potential issue | 🟡 Minor
Replace the machine-local seed command.

This reseed instruction hard-codes `/Users/chris/...`, so it only works on one workstation. Use a repo-relative command instead.

📝 Suggested doc fix

```diff
-- **Labels are seeded**: The database contains 10+ labels (task, epic, spec, Feature, Bug, etc.). If `issueLabels` query returns empty, re-seed: `cd /Users/chris/workspace/Involute/packages/server && npx prisma db seed`.
+- **Labels are seeded**: The database contains 10+ labels (task, epic, spec, Feature, Bug, etc.). If `issueLabels` query returns empty, re-seed: `cd packages/server && npx prisma db seed`.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.factory/library/user-testing.md at line 103, Update the "Labels are seeded" instruction to remove the hard-coded absolute path and instead tell users to run the Prisma seed from the repository root; replace the `/Users/chris/...` example with a repo-relative instruction such as "change to the repository root and run the Prisma seed command" or mention using a git-based repo-root resolution so the `npx prisma db seed` step works on any machine and fixes the `issueLabels` empty result guidance.

packages/web/src/App.test.tsx (1)
218-236: ⚠️ Potential issue | 🟡 Minor
Make the Apollo `useQuery` mock respect `filter` and `first`.

The new team-scoping and >200-item cases still receive the full mixed-team dataset because this mock ignores `options.variables.filter` and `options.variables.first`. That means these tests can pass without actually exercising the server-filtered path they are supposed to protect.

🧪 Suggested mock update

```diff
 apolloMocks.useQuery.mockImplementation((_, options) => {
   if (options?.variables && 'id' in options.variables) {
     const issueId = String(options.variables.id);
     return {
       data: {
         issue: queryState.data?.issues.nodes.find((issue) => issue.id === issueId) ?? null,
         users: queryState.data?.users ?? { nodes: [] },
         issueLabels: queryState.data?.issueLabels ?? { nodes: [] },
       },
       error: queryState.error,
       loading: queryState.loading ?? false,
     };
   }
+  const teamKey = options?.variables?.filter?.team?.key?.eq as string | undefined;
+  const first = options?.variables?.first as number | undefined;
+  const issues = queryState.data?.issues.nodes ?? [];
+  const visibleIssues = (teamKey ? issues.filter((issue) => issue.team.key === teamKey) : issues).slice(
+    0,
+    first ?? issues.length,
+  );
   return {
-    data: queryState.data,
+    data: queryState.data
+      ? {
+          ...queryState.data,
+          issues: {
+            nodes: visibleIssues,
+          },
+        }
+      : undefined,
     error: queryState.error,
     loading: queryState.loading ?? false,
   };
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/web/src/App.test.tsx` around lines 218 - 236, The current apolloMocks.useQuery mock ignores options.variables.filter and options.variables.first so tests get the full dataset; update the mock (apolloMocks.useQuery) to, when returning lists (queryState.data or the per-id branch), apply filtering based on options.variables.filter (e.g., team scoping predicate) and apply pagination/limit using options.variables.first (slice or take first N items) before constructing data.users, data.issueLabels and issues nodes; keep the existing id-specific branch behavior (resolve single issue by id) but ensure that when filter/first are present they are applied to the arrays used to build the returned data so tests exercise server-filtered paths.

packages/server/src/validation-data-setup.ts (1)
274-279: ⚠️ Potential issue | 🟡 Minor
Use `fileURLToPath(import.meta.url)` in the entrypoint check.

The current string comparison fails on Windows. On Windows, `import.meta.url` is formatted as `file:///C:/path/to/file.js` (with triple slash and forward slashes), while `process.argv[1]` is `C:\path\to\file.js` (backslashes, no leading slash). The expression `` `file://${process.argv[1]}` `` produces `file://C:\path\to\file.js`, which will never match. Use `fileURLToPath(import.meta.url)` to convert to a native path for correct comparison on all platforms.

🔍 Suggested fix

```diff
+import { fileURLToPath } from 'node:url';
 import { PrismaClient, type Team } from '@prisma/client';
@@
-if (import.meta.url === `file://${process.argv[1]}`) {
+if (process.argv[1] && fileURLToPath(import.meta.url) === process.argv[1]) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/server/src/validation-data-setup.ts` around lines 274 - 279, The entrypoint check using string comparison with import.meta.url is platform-unsafe; replace the condition if (import.meta.url === `file://${process.argv[1]}`) with a native-path comparison using fileURLToPath(import.meta.url) === process.argv[1], and add an import for fileURLToPath from 'url' (import { fileURLToPath } from 'url') so the check works on Windows and POSIX; keep the existing dotenv import logic unchanged inside that block.

packages/server/src/son-validation-restore.ts (1)
41-45: ⚠️ Potential issue | 🟡 Minor
Use path normalization for the direct-execution guard.

The comparison of `import.meta.url` to `` `file://${process.argv[1]}` `` is not portable: Windows paths and filenames with special characters that need URL escaping will not match. Use `fileURLToPath` to normalize paths before comparing.

Suggested fix

```diff
+import { fileURLToPath } from 'node:url';
 import { PrismaClient } from '@prisma/client';
@@
-if (import.meta.url === `file://${process.argv[1]}`) {
+if (process.argv[1] && fileURLToPath(import.meta.url) === process.argv[1]) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/server/src/son-validation-restore.ts` around lines 41 - 45, The direct-execution guard compares import.meta.url to `file://${process.argv[1]}` which fails on Windows and with URL-escaped names; change the check to compare normalized file paths by converting import.meta.url with fileURLToPath and normalizing process.argv[1] (e.g. via path.resolve) before comparing. Import and use fileURLToPath from 'url' and path.resolve from 'path' and replace the existing condition with something like fileURLToPath(import.meta.url) === path.resolve(process.argv[1]) so the guard correctly detects direct execution; update the code around the current if-block that references import.meta.url and process.argv[1].

packages/cli/src/commands/import.ts (1)
194-221: ⚠️ Potential issue | 🟡 Minor
`summaryPath` points to a file that may have been deleted.

Lines 203-206 write `involute-import-summary.json` and then remove the whole export directory when `shouldRetainExport` is false, but Lines 217-220 still return that path in `TeamImportResult`. Any programmatic caller that tries to read `summaryPath` after a default successful run gets a dangling path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/commands/import.ts` around lines 194 - 221, The returned summaryPath can be dangling when shouldRetainExport is false because the code deletes exportDir after writing the file; update the return so it does not expose a deleted path: if shouldRetainExport is true include summaryPath as before, otherwise omit summaryPath or set it to undefined/null in the returned TeamImportResult. Adjust the logic around summaryPath/TeamImportResult (and any callers/types) accordingly and ensure teamImportDependencies.rm still deletes the exportDir but you do not advertise the removed path.

packages/cli/src/index.ts (1)
583-646: ⚠️ Potential issue | 🟡 Minor
Reject empty `issues update` calls before sending the mutation.

If the user passes no update flags, Lines 583-620 leave `updateInput` empty and Line 622 still calls `issueUpdate`. That turns a CLI usage error into an unnecessary network round-trip and a vague server failure.

Suggested fix

```diff
   if (input.labels !== undefined) {
     const requestedNames = input.labels
       .split(',')
       .map((label) => label.trim())
       .filter(Boolean);
     const labels = await fetchLabels();
     const labelIds = requestedNames.map((name) => {
       const label = labels.find((candidate) => candidate.name === name);
       if (!label) {
         throw new CliError(`Label not found: ${name}`);
       }
       return label.id;
     });
     updateInput.labelIds = labelIds;
   }
+
+  if (Object.keys(updateInput).length === 0) {
+    throw new CliError(
+      'No updates specified. Pass at least one of --state, --title, --assignee, or --labels.',
+    );
+  }

   const result = await client.request<{
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/index.ts` around lines 583 - 646, Check for an empty update before calling the GraphQL mutation: after populating updateInput (variable updateInput) but before calling client.request/issueUpdate, validate that updateInput has at least one key and if not throw a CliError like "No update flags provided" to short-circuit the operation; this prevents calling the issueUpdate mutation when the user passed no update flags. Ensure the check runs after processing state/title/assignee/labels and before the client.request invocation.
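For the dangling `summaryPath` finding above (`packages/cli/src/commands/import.ts`), one way to avoid advertising a deleted path is to branch on the retain flag. This is a sketch with hypothetical names and shapes, not the command's actual code:

```typescript
// Hypothetical shapes; the real TeamImportResult in import.ts has more fields.
interface TeamImportResult {
  teamId: string;
  summaryPath?: string; // only present when the export directory is retained
}

function finalizeImport(
  teamId: string,
  exportDir: string,
  summaryPath: string,
  shouldRetainExport: boolean,
  removeDir: (path: string) => void,
): TeamImportResult {
  if (!shouldRetainExport) {
    removeDir(exportDir);
    // The summary file lived inside exportDir, so do not expose its path.
    return { teamId };
  }
  return { teamId, summaryPath };
}
```

Typing `summaryPath` as optional forces programmatic callers to handle its absence instead of reading a path that no longer exists.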
🧹 Nitpick comments (14)
docs/vision.md (1)
9-9: Optional: Consider hyphenating compound modifier.

The phrase "keyboard-and-board driven operations" could be hyphenated as "keyboard-and-board-driven operations" per standard grammar rules for compound modifiers preceding a noun. This is purely stylistic and does not affect clarity.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/vision.md` at line 9, The compound modifier "keyboard-and-board driven operations" in the vision sentence should be hyphenated as "keyboard-and-board-driven operations"; update the sentence text (the line containing "Involute is a focused, self-hostable project management system...keyboard-and-board driven operations.") to use the hyphenated form for correct compound-modifier grammar.

.factory/validation/cli/scrutiny/reviews/fix-cli-identifier-lookup-pagination.json (1)
16-17: Absolute paths in validation artifact reduce portability.

The `evidence` field contains machine-specific absolute paths (`/Users/chris/...`) that won't resolve for other developers or in CI environments. Consider using repository-relative paths or removing the local filesystem references from committed artifacts.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.factory/validation/cli/scrutiny/reviews/fix-cli-identifier-lookup-pagination.json around lines 16 - 17, The validation artifact .factory/validation/cli/scrutiny/reviews/fix-cli-identifier-lookup-pagination.json contains machine-specific absolute paths in the "evidence" field; edit the JSON (target the "evidence" key in that file) to remove or replace absolute filesystem paths with repository-relative paths or sanitized placeholders (e.g., "<repo>/.../handoffs/2026-04-03T...json") or compute paths relative to a REPO_ROOT environment variable so the artifact is portable across machines and CI.

packages/cli/src/commands/export.test.ts (1)
84-92: Consider reducing brittleness in subcommand assertion.

If command registration order changes or extra subcommands are added, this exact equality may fail noisily. You can assert required members instead.
Possible tweak
```diff
-  expect(configCmd?.commands.map((command) => command.name())).toEqual(['set', 'get']);
+  expect(configCmd?.commands.map((command) => command.name())).toEqual(
+    expect.arrayContaining(['set', 'get']),
+  );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/commands/export.test.ts` around lines 84 - 92, The test is brittle because it asserts exact equality of subcommand names; update the assertion in the test using createProgram() (configCmd variable) to verify required subcommands exist rather than exact order—e.g., replace the expect on configCmd?.commands.map(...).toEqual(['set','get']) with an assertion that checks membership (use expect.arrayContaining(['set','get']) or individual toContain checks) so the test passes even if additional subcommands or ordering changes.

packages/web/package.json (1)
9-9: Make the test script shell-portable across environments.

Inline env-var assignment can fail on Windows shells. Consider using `cross-env` so `pnpm test` works consistently for all contributors.

Proposed update

```diff
-    "test": "NODE_OPTIONS=--max-old-space-size=8192 pnpm exec vitest --config vitest.config.ts --passWithNoTests",
+    "test": "pnpm exec cross-env NODE_OPTIONS=--max-old-space-size=8192 vitest --config vitest.config.ts --passWithNoTests",
```

Also add `cross-env` to `devDependencies`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/web/package.json` at line 9, Update the "test" npm script in package.json to be shell-portable by replacing the inline env assignment ("NODE_OPTIONS=--max-old-space-size=8192 pnpm exec vitest ...") with a cross-platform invocation using the cross-env package (e.g., use cross-env NODE_OPTIONS=--max-old-space-size=8192 pnpm exec vitest ... in the "test" script) and add "cross-env" to devDependencies so contributors on Windows and other shells can run pnpm test reliably; also scan for any other scripts that set NODE_OPTIONS inline and switch them to use cross-env as well.

packages/server/src/graphql-mutations.test.ts (1)
13-13: Deletion test bypasses the GraphQL mutation layer.

This case currently exercises `issue-service` directly, so it won't catch regressions in resolver wiring/auth/input handling for `issueDelete` and `commentDelete`.

Suggested direction

```diff
-import { deleteComment, deleteIssue } from './issue-service.ts';

-  it('deletes a comment and then deletes its issue', async () => {
+  it('deletes a comment and then deletes its issue via GraphQL mutations', async () => {
@@
-    await expect(deleteComment(prisma, comment.id)).resolves.toEqual({
-      id: comment.id,
-    });
+    const deleteCommentResponse = await postGraphQL({
+      query: `
+        mutation CommentDelete($id: String!) {
+          commentDelete(id: $id) {
+            success
+            comment { id }
+          }
+        }
+      `,
+      variables: { id: comment.id },
+    });
+    expect(deleteCommentResponse.body.errors).toBeUndefined();
+    expect(deleteCommentResponse.body.data.commentDelete).toEqual({
+      success: true,
+      comment: { id: comment.id },
+    });
@@
-    await expect(deleteIssue(prisma, fixture.issue.id)).resolves.toEqual({
-      id: fixture.issue.id,
-    });
+    const deleteIssueResponse = await postGraphQL({
+      query: `
+        mutation IssueDelete($id: String!) {
+          issueDelete(id: $id) {
+            success
+            issue { id }
+          }
+        }
+      `,
+      variables: { id: fixture.issue.id },
+    });
+    expect(deleteIssueResponse.body.errors).toBeUndefined();
+    expect(deleteIssueResponse.body.data.issueDelete).toEqual({
+      success: true,
+      issue: { id: fixture.issue.id },
+    });
```

Also applies to: 228-258
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/server/src/graphql-mutations.test.ts` at line 13, Test is bypassing the GraphQL layer by calling deleteIssue/deleteComment from issue-service directly; change the tests to exercise the actual GraphQL mutations (issueDelete and commentDelete) so resolver wiring, auth and input validation are covered. Replace direct calls to deleteIssue/deleteComment in the test file with GraphQL mutation requests against your test schema/executableSchema or test server (using the same helper used elsewhere for queries/mutations), send the issueDelete and commentDelete mutations with the same inputs and authenticated context, and assert the same outcomes (status/errors and side effects) as before; keep references to the service functions only for expected behavior comparisons if needed, but ensure the primary invocation is the GraphQL mutations and test resolver-level error/permission handling.

packages/server/src/issue-service.ts (1)
283-333: Avoid check-then-delete race in delete helpers.

Both delete functions do `findUnique` then `delete`, which is susceptible to TOCTOU and doubles DB calls.

Proposed refactor

```diff
-import type { Comment, Issue, Prisma, PrismaClient, WorkflowState } from '@prisma/client';
+import type { Comment, Issue, PrismaClient, WorkflowState } from '@prisma/client';
+import { Prisma } from '@prisma/client';
@@
 export async function deleteIssue(
   prisma: PrismaClient,
   id: string,
 ): Promise<Pick<Issue, 'id'>> {
-  const issue = await prisma.issue.findUnique({
-    where: {
-      id,
-    },
-    select: {
-      id: true,
-    },
-  });
-
-  if (!issue) {
-    throw createNotFoundError(ISSUE_NOT_FOUND_MESSAGE);
-  }
-
-  await prisma.issue.delete({
-    where: {
-      id,
-    },
-  });
-
-  return issue;
+  try {
+    return await prisma.issue.delete({
+      where: { id },
+      select: { id: true },
+    });
+  } catch (error) {
+    if (error instanceof Prisma.PrismaClientKnownRequestError && error.code === 'P2025') {
+      throw createNotFoundError(ISSUE_NOT_FOUND_MESSAGE);
+    }
+    throw error;
+  }
 }
@@
 export async function deleteComment(
   prisma: PrismaClient,
   id: string,
 ): Promise<Pick<Comment, 'id'>> {
-  const comment = await prisma.comment.findUnique({
-    where: {
-      id,
-    },
-    select: {
-      id: true,
-    },
-  });
-
-  if (!comment) {
-    throw createNotFoundError(COMMENT_NOT_FOUND_MESSAGE);
-  }
-
-  await prisma.comment.delete({
-    where: {
-      id,
-    },
-  });
-
-  return comment;
+  try {
+    return await prisma.comment.delete({
+      where: { id },
+      select: { id: true },
+    });
+  } catch (error) {
+    if (error instanceof Prisma.PrismaClientKnownRequestError && error.code === 'P2025') {
+      throw createNotFoundError(COMMENT_NOT_FOUND_MESSAGE);
+    }
+    throw error;
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/server/src/issue-service.ts` around lines 283 - 333, The current deleteIssue and deleteComment do a separate findUnique then delete (TOCTOU and double DB calls); instead, call prisma.issue.delete and prisma.comment.delete directly and catch Prisma errors: if the caught error is a PrismaClientKnownRequestError with code 'P2025' throw createNotFoundError(ISSUE_NOT_FOUND_MESSAGE) or createNotFoundError(COMMENT_NOT_FOUND_MESSAGE) respectively, otherwise rethrow the error; update deleteIssue and deleteComment to remove the pre-check findUnique and rely on the catch mapping to preserve behavior.

packages/server/src/issues-filter.test.ts (1)
469-473: Minor redundancy in assertions.

Lines 465-468 already assert that `nodes[0]` matches `newestIssue.id` and `newestIssue.identifier`. The subsequent `.some()` check on lines 469-473 is redundant since if an element is at index 0, it's guaranteed to be in the array. Consider removing the redundant check or keeping only one form of assertion.

♻️ Suggested simplification

```diff
   expect(response.body.data.issues.nodes[0]).toMatchObject({
     id: newestIssue.id,
     identifier: newestIssue.identifier,
   });
-  expect(
-    response.body.data.issues.nodes.some(
-      (issue: { identifier: string }) => issue.identifier === newestIssue.identifier,
-    ),
-  ).toBe(true);
   expect(
     response.body.data.issues.nodes.some(
       (issue: { identifier: string }) => issue.identifier === 'SON-1',
     ),
   ).toBe(false);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/server/src/issues-filter.test.ts` around lines 469 - 473, The test contains a redundant assertion: after already asserting nodes[0] matches newestIssue.id and newestIssue.identifier, the expect(... .some(... newestIssue.identifier ...)).toBe(true) is unnecessary; remove the .some() assertion block (the expect checking response.body.data.issues.nodes.some(... newestIssue.identifier ...)). Alternatively, keep only the .some() style and drop the explicit nodes[0] checks—ensure you update the assertions around response.body.data.issues.nodes and newestIssue.identifier consistently so only one form of verification remains.

package.json (1)
14-14: Sequential test execution ensures isolation but sacrifices parallelism.

The change from `pnpm -r test` to sequential `&&`-chained filter commands ensures tests run in a deterministic order with early exit on failure. This is a reasonable trade-off for test isolation, though it will be slower than parallel execution. Consider if this is intentional.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@package.json` at line 14, The "test" npm script was changed to chain package-filtered test runs sequentially (the "test" script in package.json), which forces deterministic ordering but removes parallelism and slows CI; if you want parallel execution restore a recursive/parallel invocation (e.g., use pnpm -r test or pnpm -r --parallel with appropriate --filter flags) or explicitly document that sequential runs are intentional for isolation and early-exit behavior. Update the "test" script accordingly (or add a separate "test:ci" vs "test:fast" script) and ensure the chosen approach is reflected in CI config and any contributing docs.

packages/web/src/components/IssueCard.test.tsx (1)
74-78: Consider using `.at(-1)` for clarity when accessing the last array element.

Minor style improvement for readability.
♻️ Proposed change
```diff
-  const lastCall = useSortableSpy.mock.calls[useSortableSpy.mock.calls.length - 1]?.[0];
+  const lastCall = useSortableSpy.mock.calls.at(-1)?.[0];
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/web/src/components/IssueCard.test.tsx` around lines 74 - 78, Replace the manual last-element access with Array.prototype.at for clarity: change how the test retrieves the last call from useSortableSpy.mock.calls (currently using useSortableSpy.mock.calls[useSortableSpy.mock.calls.length - 1]?.[0]) to use useSortableSpy.mock.calls.at(-1)?.[0] so the subsequent assertions against lastCall.data (issue, stateId, type) remain unchanged.

Dockerfile (2)
29-29: Consider extracting the complex startup logic into a script.

The inline shell command is lengthy and harder to maintain. An entrypoint script would improve readability and allow easier modifications.
♻️ Example using an entrypoint script
Create `packages/server/docker-entrypoint.sh`:

```sh
#!/bin/sh
set -e
pnpm --filter @involute/server exec prisma db push --skip-generate
if [ "${SEED_DATABASE:-false}" = "true" ]; then
  pnpm --filter @involute/server exec prisma db seed
fi
exec node packages/server/dist/index.js
```

Then in Dockerfile:

```diff
-CMD ["sh", "-lc", "pnpm --filter @involute/server exec prisma db push --skip-generate && if [ \"${SEED_DATABASE:-false}\" = \"true\" ]; then pnpm --filter @involute/server exec prisma db seed; fi && node packages/server/dist/index.js"]
+COPY packages/server/docker-entrypoint.sh /app/docker-entrypoint.sh
+RUN chmod +x /app/docker-entrypoint.sh
+ENTRYPOINT ["/app/docker-entrypoint.sh"]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Dockerfile` at line 29, Extract the long CMD into a dedicated entrypoint script (e.g., packages/server/docker-entrypoint.sh) and update the Dockerfile to COPY that script into the image, set it executable, and use it as the container ENTRYPOINT or CMD; the script should run pnpm --filter `@involute/server` exec prisma db push --skip-generate, conditionally run pnpm --filter `@involute/server` exec prisma db seed when SEED_DATABASE=true, and finally exec node packages/server/dist/index.js so the process is PID 1. Ensure the Dockerfile copies the script, makes it executable during build (chmod +x), and replaces the current CMD array with either ENTRYPOINT ["sh", "/path/to/docker-entrypoint.sh"] or CMD ["sh", "/path/to/docker-entrypoint.sh"].
31-35: Web target runs Vite dev server—not suitable for production.

The `web` target executes `vite --host`, which starts the Vite development server. For production deployments, consider building static assets with `vite build` and serving them via a lightweight server (nginx, serve, or the server package).

♻️ Example production web target

```diff
 FROM base AS web
-EXPOSE 4201
+RUN pnpm --filter @involute/web build
-CMD ["pnpm", "--filter", "@involute/web", "exec", "vite", "--host", "0.0.0.0", "--port", "4201"]
+FROM nginx:alpine AS web-prod
+COPY --from=web /app/packages/web/dist /usr/share/nginx/html
+EXPOSE 80
+CMD ["nginx", "-g", "daemon off;"]
```

Alternatively, if the current target is intentionally for development/E2E testing, consider renaming it to `web-dev` and documenting the intended use case.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Dockerfile` around lines 31 - 35, The Dockerfile's `web` stage currently runs the Vite dev server (CMD invoking `"pnpm", "--filter", "@involute/web", "exec", "vite", "--host", "0.0.0.0", "--port", "4201"`), which is not appropriate for production; either change the `web` target to perform a production build (`vite build` for `@involute/web`) and serve the resulting static assets with a lightweight server (e.g., nginx or a Node static server) in the final image, or if you intend this image for development/e2e, rename the stage/target to `web-dev` and document that it runs Vite dev server—update CMD and any related startup scripts (references: `web` stage and the existing CMD invoking `vite`) accordingly.

docker-compose.yml (2)
83-84: Ensure ./.tmp directory exists before mounting.

The CLI service mounts `./.tmp:/exports`, but if this directory doesn't exist, Docker behavior varies by platform (it may create it as root-owned or fail). Consider documenting this requirement or adding a `.gitkeep` file.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docker-compose.yml` around lines 83 - 84, The compose mounts './.tmp:/exports' for the CLI service which can be auto-created with wrong permissions or fail on some platforms; ensure the ./.tmp directory exists in the repo and is tracked (e.g., add a .gitkeep inside ./.tmp and commit it) and update any docs/README to state the requirement. Locate the CLI service volume entry referencing './.tmp:/exports' in docker-compose.yml, create the .tmp folder with a placeholder file (.gitkeep) and commit, and optionally add a short note in the project's README about the required ./.tmp directory and expected permissions.

.factory/validation/cli/scrutiny/synthesis.json (1)
4-10: Clarify: milestone status is "pass" despite a test validator failure.

The synthesis shows `"status": "pass"` at the milestone level, but `validatorsRun.test.passed` is `false` with exit code 1. This may be intentional (e.g., known test failures don't block milestone completion), but the reasoning should be documented to avoid confusion during future reviews.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.factory/validation/cli/scrutiny/synthesis.json around lines 4 - 10: the milestone-level `"status"` field is set to `"pass"` while `validatorsRun.test.passed` is `false` (exitCode 1), which is confusing. Either change `"status"` to `"partial"`/`"failed"` so it reflects the failing validator, or add an explicit rationale field (e.g., `"statusReason"` or a `validatorsRun.test.allowedFailure` note) referencing `validatorsRun.test.passed` and `validatorsRun.test.exitCode` so reviewers can immediately see why the milestone is considered passing despite the test failure.

packages/web/src/components/Column.tsx (1)
4-5: Move the HTML5 drag helpers into a shared board module.
`Column` is a reusable component, but it now pulls logic from `routes/BoardPage.tsx`, and the same pattern is duplicated in `IssueCard`. That makes the drag payload helpers harder to reuse and test outside the route; extract them into a small shared `board` utility instead.
Verify each finding against the current code and only fix it if needed. In `@packages/web/src/components/Column.tsx` around lines 4 - 5, Column imports parseHtml5DragPayload from routes/BoardPage and duplicates logic used by IssueCard, so extract the HTML5 drag helpers into a shared board utility to make them reusable and testable. Create a new module (e.g., boardDragUtils) that exports parseHtml5DragPayload and the Html5BoardDragPayload/IssueSummary-related helpers, update Column to import parseHtml5DragPayload from that new module instead of ../routes/BoardPage, and update IssueCard to import the same helpers; ensure types (Html5BoardDragPayload, IssueSummary) are exported or re-exported from the new utility so both components compile and tests can target the helper functions directly.
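As a sketch of the extraction this comment asks for — the module name `boardDragUtils` and the exact payload shape are illustrative assumptions, not the real exports of `routes/BoardPage.tsx`:

```typescript
// boardDragUtils.ts — hypothetical shared module so Column and IssueCard can
// both import the drag helpers without depending on the route component.
export interface Html5BoardDragPayload {
  issueId: string;
  stateId: string;
}

// Parse the JSON payload stored in dataTransfer during an HTML5 drag.
// Returns null for anything that is not a well-formed payload.
export function parseHtml5DragPayload(raw: string): Html5BoardDragPayload | null {
  try {
    const parsed = JSON.parse(raw) as Partial<Html5BoardDragPayload>;
    if (typeof parsed.issueId === 'string' && typeof parsed.stateId === 'string') {
      return { issueId: parsed.issueId, stateId: parsed.stateId };
    }
    return null;
  } catch {
    // Malformed JSON from a foreign drag source — ignore the drop.
    return null;
  }
}
```

Keeping the parser pure like this also makes it trivially unit-testable without mounting either component.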
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 35cf4d98-08f3-43c5-bf58-8133101191e1
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (69)
- .dockerignore
- .env.example
- .factory/library/son-validation-dataset-restore.md
- .factory/library/user-testing.md
- .factory/library/web-ui-validation-data-alignment.md
- .factory/services.yaml
- .factory/validation/cli/scrutiny/reviews/cli-comment-commands.json
- .factory/validation/cli/scrutiny/reviews/cli-issue-commands.json
- .factory/validation/cli/scrutiny/reviews/cli-scaffold-and-config.json
- .factory/validation/cli/scrutiny/reviews/cli-verify-test-isolation.json
- .factory/validation/cli/scrutiny/reviews/fix-cli-command-level-json-flags.json
- .factory/validation/cli/scrutiny/reviews/fix-cli-comment-nonuuid-issue-id-resolution.json
- .factory/validation/cli/scrutiny/reviews/fix-cli-identifier-lookup-pagination.json
- .factory/validation/cli/scrutiny/synthesis.json
- .github/workflows/ci.yml
- .github/workflows/docker-publish.yml
- .gitignore
- Dockerfile
- README.md
- docker-compose.yml
- docs/milestones.md
- docs/vision.md
- e2e/board-flow.spec.ts
- e2e/global-teardown.mjs
- package.json
- packages/cli/package.json
- packages/cli/src/commands/comments.test.ts
- packages/cli/src/commands/export.test.ts
- packages/cli/src/commands/import.test.ts
- packages/cli/src/commands/import.ts
- packages/cli/src/commands/issues.test.ts
- packages/cli/src/commands/shared.ts
- packages/cli/src/commands/verify.test.ts
- packages/cli/src/commands/verify.ts
- packages/cli/src/index.ts
- packages/cli/vitest.config.ts
- packages/server/package.json
- packages/server/src/errors.ts
- packages/server/src/graphql-mutations.test.ts
- packages/server/src/issue-service.ts
- packages/server/src/issues-filter.test.ts
- packages/server/src/schema.ts
- packages/server/src/son-validation-restore.test.ts
- packages/server/src/son-validation-restore.ts
- packages/server/src/validation-data-constants.ts
- packages/server/src/validation-data-setup.test.ts
- packages/server/src/validation-data-setup.ts
- packages/shared/src/index.test.ts
- packages/shared/src/index.ts
- packages/shared/src/viewer-assertion.ts
- packages/shared/vitest.config.ts
- packages/web/package.json
- packages/web/src/App.delete.test.tsx
- packages/web/src/App.test.tsx
- packages/web/src/App.tsx
- packages/web/src/board/queries.ts
- packages/web/src/board/types.ts
- packages/web/src/board/utils.ts
- packages/web/src/components/Column.tsx
- packages/web/src/components/IssueCard.test.tsx
- packages/web/src/components/IssueCard.tsx
- packages/web/src/components/IssueDetailDrawer.tsx
- packages/web/src/lib/apollo.tsx
- packages/web/src/routes/BacklogPage.tsx
- packages/web/src/routes/BoardPage.tsx
- packages/web/src/routes/IssuePage.tsx
- packages/web/src/styles/app.css
- packages/web/vitest.config.ts
- playwright.config.ts
```yaml
setup_web_ui_validation_data: cd /Users/chris/workspace/Involute/packages/server && pnpm setup:web-ui-validation
restore_son_validation_data: cd /Users/chris/workspace/Involute/packages/server && pnpm setup:son-validation
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Check all absolute paths in services.yaml
echo "=== Checking for absolute paths in .factory/services.yaml ==="
rg -n '/Users/' .factory/services.yaml
```

Repository: fakechris/Involute
Length of output: 623
Hard-coded absolute paths break portability across multiple locations.
The .factory/services.yaml file contains hard-coded paths like /Users/chris/workspace/Involute at multiple locations (lines 2, 8, 9, 27, 34). This configuration will fail for other contributors and in CI environments. Replace with relative paths from the repository root.
🔧 Suggested fixes using relative paths
- init: /Users/chris/workspace/Involute/.factory/init.sh
+ init: ./.factory/init.sh- setup_web_ui_validation_data: cd /Users/chris/workspace/Involute/packages/server && pnpm setup:web-ui-validation
- restore_son_validation_data: cd /Users/chris/workspace/Involute/packages/server && pnpm setup:son-validation
+ setup_web_ui_validation_data: cd packages/server && pnpm setup:web-ui-validation
+ restore_son_validation_data: cd packages/server && pnpm setup:son-validation- start: "cd /Users/chris/workspace/Involute/packages/server && pnpm build && PORT=4200 node dist/index.js"
+ start: "cd packages/server && pnpm build && PORT=4200 node dist/index.js"- start: "cd /Users/chris/workspace/Involute/packages/web && PORT=4201 npx vite --port 4201"
+ start: "cd packages/web && PORT=4201 npx vite --port 4201"📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| setup_web_ui_validation_data: cd /Users/chris/workspace/Involute/packages/server && pnpm setup:web-ui-validation | |
| restore_son_validation_data: cd /Users/chris/workspace/Involute/packages/server && pnpm setup:son-validation | |
| setup_web_ui_validation_data: cd packages/server && pnpm setup:web-ui-validation | |
| restore_son_validation_data: cd packages/server && pnpm setup:son-validation |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.factory/services.yaml around lines 8 - 9, The service entries
setup_web_ui_validation_data and restore_son_validation_data (and other entries
noted) use hard-coded absolute paths (/Users/chris/…) which breaks portability;
change those commands to use relative repository-root paths (e.g., cd to
packages/server from repo root) or an environment/templated repo-root variable
so CI and other contributors can run them without user-specific paths, and
update all occurrences referenced in the file (lines near the keys
setup_web_ui_validation_data, restore_son_validation_data and the other reported
entries) accordingly.
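To keep this class of regression out of future reviews, a small guard could scan service command strings for user-specific absolute paths before they land. The function name and pattern list below are purely illustrative, not part of the repo:

```typescript
// Flag command strings that embed user-specific absolute paths, which break
// CI runners and other contributors' checkouts. Hypothetical helper — the
// repo has no such check today.
const FORBIDDEN_PATH_PATTERNS = [/\/Users\/[^/\s]+/, /\/home\/[^/\s]+/, /C:\\Users\\/];

export function findNonPortablePaths(commands: string[]): string[] {
  return commands.filter((command) =>
    FORBIDDEN_PATH_PATTERNS.some((pattern) => pattern.test(command)),
  );
}
```

Wired into a pre-commit hook or CI step over the parsed `services.yaml` values, it would fail fast on the exact lines this comment flags.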
```diff
 api:
-  start: "cd /Users/chris/workspace/Involute/packages/server && PORT=4200 node dist/index.js"
+  start: "cd /Users/chris/workspace/Involute/packages/server && pnpm build && PORT=4200 node dist/index.js"
```
Same absolute path issue in api service.
The api.start command also uses an absolute path. This should be made relative for portability.
🔧 Suggested fix

```diff
-  start: "cd /Users/chris/workspace/Involute/packages/server && pnpm build && PORT=4200 node dist/index.js"
+  start: "cd packages/server && pnpm build && PORT=4200 node dist/index.js"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.factory/services.yaml at line 27, The api.start entry in
.factory/services.yaml uses an absolute path in its start command; change that
start command to use a relative path (relative to the repo root or where the
service runs) instead of "/Users/chris/...", e.g. adjust the cd target in the
start string so it points to the packages/server folder relatively (and verify
the build and node commands still run from that relative location). Ensure you
update the start value under the api service only and test the command from the
repo root.
```yaml
jobs:
  publish:
    if: ${{ secrets.DOCKERHUB_USERNAME != '' && secrets.DOCKERHUB_TOKEN != '' }}
```
🧩 Analysis chain

🏁 Script executed:

```bash
cd .github/workflows && ls -la docker-publish.yml && echo "---" && cat -n docker-publish.yml
```

Repository: fakechris/Involute
Length of output: 1577
🌐 Web query:

> GitHub Actions job if condition secrets context available

💡 Result:

No, the `secrets` context is not available in GitHub Actions job-level `if` conditions (`jobs.<job_id>.if`). Official documentation explicitly lists the contexts available for `jobs.<job_id>.if` as `github`, `needs`, `vars`, and `inputs` (plus status functions), excluding `secrets`. This is a security/design decision to prevent debugging issues with secrets in expressions evaluated before runner dispatch, as confirmed across multiple docs pages and the longstanding actions/runner issue #520 (closed without change). Workarounds:

- Use a preliminary job to check secrets and output a boolean (via the `needs` context): e.g., a `check-secrets` job runs always, checks whether `${{ secrets.MY_SECRET }} != ''`, sets `outputs.has-secret: true`, and the dependent job uses `if: needs.check-secrets.outputs.has-secret == 'true'`.
- For fork/PR detection (no secrets), use `if: github.secret_source == 'Actions'` (the `github` context is available in job-level `if`).
- Set the secret as a job env variable and check it in a step-level `if` (step-level `if` supports `secrets`), though the job itself still queues and runs.

This holds as of the current docs (e.g., published 2025-06-04).
Citations:
- 1: https://docs.github.com/en/actions/learn-github-actions/contexts
- 2: https://docs.github.com/actions/security-guides/using-secrets-in-github-actions
- 3: https://stackoverflow.com/questions/70249519/how-to-check-if-a-secret-variable-is-empty-in-if-conditional-github-actions/70249520
- 4: Secrets cannot be used to condition job runs actions/runner#520
- 5: https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/contexts
- 6: fix: GitHub Actions workflow fails due to invalid secret reference in job-level conditional atxtechbro/dotfiles#1347
- 7: https://stackoverflow.com/questions/70249519/how-to-check-if-a-secret-variable-is-empty-in-if-conditional-github-actions
Move the conditional check to step-level where secrets context is available.
Line 11 uses secrets in jobs.publish.if, but the secrets context is not available at job-level conditions. Per GitHub Actions documentation, only github, needs, vars, and inputs contexts are available in job-level if conditions. This invalid syntax will fail workflow validation.
Proposed fix

```diff
 jobs:
   publish:
-    if: ${{ secrets.DOCKERHUB_USERNAME != '' && secrets.DOCKERHUB_TOKEN != '' }}
     runs-on: ubuntu-latest
@@
       - name: Log in to Docker Hub
+        if: ${{ secrets.DOCKERHUB_USERNAME != '' && secrets.DOCKERHUB_TOKEN != '' }}
         uses: docker/login-action@v3
@@
       - name: Build and push
+        if: ${{ secrets.DOCKERHUB_USERNAME != '' && secrets.DOCKERHUB_TOKEN != '' }}
         uses: docker/build-push-action@v6
```

🧰 Tools
🪛 actionlint (1.7.12)
[error] 11-11: context "secrets" is not allowed here. available contexts are "github", "inputs", "needs", "vars". see https://docs.github.com/en/actions/learn-github-actions/contexts#context-availability for more details
(expression)
[error] 11-11: context "secrets" is not allowed here. available contexts are "github", "inputs", "needs", "vars". see https://docs.github.com/en/actions/learn-github-actions/contexts#context-availability for more details
(expression)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/docker-publish.yml at line 11, The job-level condition
using secrets (jobs.publish.if) is invalid because the secrets context isn't
available at job-level; remove that job-level if and instead add the conditional
to the step(s) that perform publishing (e.g., the step that logs in/pushes
Docker) by adding an if: expression like if: ${{ secrets.DOCKERHUB_USERNAME !=
'' && secrets.DOCKERHUB_TOKEN != '' }} on those step definitions (the Docker
login/push step names) so the workflow validates and only runs publish steps
when the secrets exist.
```dockerfile
FROM node:22-bookworm-slim AS base

ENV PNPM_HOME=/pnpm
ENV PATH=$PNPM_HOME:$PATH

WORKDIR /app

RUN corepack enable
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates openssl \
    && rm -rf /var/lib/apt/lists/*
```
Add a non-root user for production security.
The container runs as root, which violates the principle of least privilege. If the container is compromised, an attacker gains root access. Consider adding a non-root user after installing dependencies.
🛡️ Proposed fix to add non-root user

```diff
 RUN apt-get update \
     && apt-get install -y --no-install-recommends ca-certificates openssl \
     && rm -rf /var/lib/apt/lists/*
+
+RUN addgroup --system --gid 1001 nodejs \
+    && adduser --system --uid 1001 involute --ingroup nodejs
```

Then add `USER involute` before the CMD/ENTRYPOINT in each target stage (server, web, cli).
🧰 Tools
🪛 Trivy (0.69.3)
[error] 1-1: Image user should not be 'root'
Specify at least 1 USER command in Dockerfile with non-root user as argument
Rule: DS-0002
(IaC/Dockerfile)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Dockerfile` around lines 1 - 11, The Dockerfile currently builds as root
(FROM node:22-bookworm-slim AS base and subsequent RUN steps); add a non-root
user (e.g., user "involute") after the package/install steps and before
switching to application runtime by creating the user, chown-ing /app and any
other runtime dirs, and then set USER involute in each target stage (server,
web, cli) before the CMD/ENTRYPOINT; ensure the creation happens after
apt-get/install steps (near RUN corepack enable / apt-get ...), and reference
the base stage and each stage's CMD/ENTRYPOINT to add the USER directive.
```js
spawnSync('docker', ['compose', 'down', '--remove-orphans'], {
  cwd: rootDir,
  stdio: 'inherit',
});
```
Handle teardown command failures explicitly.
Line 8 runs Docker teardown synchronously but ignores failures. If docker compose down fails, E2E can appear green while leaving stale containers/networks behind.
Proposed fix

```diff
 export default async function globalTeardown() {
-  spawnSync('docker', ['compose', 'down', '--remove-orphans'], {
+  const result = spawnSync('docker', ['compose', 'down', '--remove-orphans'], {
     cwd: rootDir,
     stdio: 'inherit',
   });
+
+  if (result.error) {
+    throw result.error;
+  }
+
+  if (result.status !== 0) {
+    throw new Error(`docker compose down failed with exit code ${result.status ?? 'unknown'}`);
+  }
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@e2e/global-teardown.mjs` around lines 8 - 11, The synchronous Docker teardown
call using spawnSync in e2e/global-teardown.mjs currently ignores failures;
capture the return value (e.g., result = spawnSync(...)) and check result.error
and result.status; if result.error exists or result.status !== 0, log the error
and stderr details (using console.error or processLogger) and exit non‑zero
(process.exit(1)) so the E2E run fails visibly and stale containers/networks are
not silently left behind.
```ts
const teams = await prisma.team.findMany({
  include: {
    states: true,
    _count: {
      select: {
        issues: true,
      },
    },
  },
  orderBy: {
    key: 'asc',
  },
});

const sonTeamPresent = teams.some((team) => team.key === 'SON');

await backfillImportedTeamStates(prisma);

const validationIssues = await prisma.issue.findMany({
  where: {
    teamId: invTeam.id,
    title: {
      startsWith: `${VALIDATION_PREFIX}: `,
    },
  },
  orderBy: {
    identifier: 'asc',
  },
  select: {
    identifier: true,
  },
});

return {
  appTeamId: appTeam.id,
  emptyTeamId: emptyTeam.id,
  invIssueIdentifiers: validationIssues.map((issue) => issue.identifier),
  labelsCount,
  manyIssueCount,
  sonTeamPresent,
  teams: teams.map((team) => ({
    id: team.id,
    issueCount: team._count.issues,
    key: team.key,
    name: team.name,
    stateCount: team.states.length,
  })),
```
Build the summary after backfilling imported states.
teams is loaded before backfillImportedTeamStates(), so the returned stateCount values can be stale for SON or any other imported team even though this function just added the missing states. Move the backfill earlier or reload teams afterward.
🔧 Suggested fix

```diff
-  const teams = await prisma.team.findMany({
-    include: {
-      states: true,
-      _count: {
-        select: {
-          issues: true,
-        },
-      },
-    },
-    orderBy: {
-      key: 'asc',
-    },
-  });
-
-  const sonTeamPresent = teams.some((team) => team.key === 'SON');
-
   await backfillImportedTeamStates(prisma);
+
+  const teams = await prisma.team.findMany({
+    include: {
+      states: true,
+      _count: {
+        select: {
+          issues: true,
+        },
+      },
+    },
+    orderBy: {
+      key: 'asc',
+    },
+  });
+
+  const sonTeamPresent = teams.some((team) => team.key === 'SON');
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/validation-data-setup.ts` around lines 73 - 119, The
teams list is loaded before backfillImportedTeamStates(prisma) so the returned
teams[].stateCount can be stale; move the call to
backfillImportedTeamStates(prisma) to run before fetching teams (the
prisma.team.findMany call), or alternatively re-query teams after backfill (the
teams variable used in teams.some(team => team.key === 'SON') and teams.map(...)
for the returned object) so stateCount reflects the newly backfilled states.
```graphql
children {
  nodes {
    id
    identifier
    title
  }
}
parent {
  id
  identifier
  title
}
comments(first: 100, orderBy: createdAt) {
  nodes {
    id
    body
    createdAt
    user {
      id
      name
    }
  }
}
```
Keep the board query summary-only.
The board now fetches parent, children, and up to 100 comments for every issue in the board payload. On large boards that makes the initial render much heavier even though only one issue drawer can be open at a time; fetch these detail fields lazily from the single-issue query instead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/web/src/board/queries.ts` around lines 58 - 81, The board query is
returning heavy detail fields — remove the children, parent and comments(first:
100, orderBy: createdAt) selections from the board payload so the board remains
summary-only (delete the children { nodes { id identifier title } }, parent { id
identifier title }, and comments(...) block in the board GraphQL query).
Add/ensure those fields are present in the single-issue query (the single-issue
query used when opening the issue drawer) and change the UI code that opens an
issue drawer to lazily fetch issue details via that single-issue query instead
of relying on the board payload.
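A minimal sketch of the summary/detail split this comment asks for — the query names and field sets below are illustrative, not the real contents of `board/queries.ts`:

```typescript
// Board payload stays summary-only: just what a card needs to render.
export const BOARD_ISSUES_SUMMARY = /* GraphQL */ `
  query BoardIssues {
    issues {
      nodes {
        id
        identifier
        title
        state { id name }
      }
    }
  }
`;

// Heavy fields live only in the single-issue query, fetched lazily when the
// drawer opens — at most one issue pays this cost at a time.
export const ISSUE_DETAIL = /* GraphQL */ `
  query IssueDetail($id: ID!) {
    issue(id: $id) {
      id
      parent { id identifier title }
      children { nodes { id identifier title } }
      comments(first: 100, orderBy: createdAt) {
        nodes { id body createdAt user { id name } }
      }
    }
  }
`;
```

With Apollo, the drawer component would run `useQuery(ISSUE_DETAIL, { variables: { id } })` on open, so the board query's payload size no longer scales with comment volume.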
```ts
const visibleIssues = useMemo(() => {
  if (selectedTeamKey) {
    return localIssues;
  }

  return filterIssuesByTeam(localIssues, selectedTeam?.key ?? null);
}, [localIssues, selectedTeam?.key, selectedTeamKey]);
```
Filter localIssues while a team switch is in flight.
Line 216 returns the full optimistic list whenever selectedTeamKey is set. After switching teams, that leaves the previous team's issues on screen until the refetch resolves, so the board and selected drawer can briefly operate on stale items under the new team context.
Suggested fix

```diff
 const visibleIssues = useMemo(() => {
   if (selectedTeamKey) {
-    return localIssues;
+    return filterIssuesByTeam(localIssues, selectedTeamKey);
   }

   return filterIssuesByTeam(localIssues, selectedTeam?.key ?? null);
 }, [localIssues, selectedTeam?.key, selectedTeamKey]);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/web/src/routes/BoardPage.tsx` around lines 214 - 220, The current
visibleIssues logic returns the entire optimistic localIssues whenever
selectedTeamKey is truthy, which shows the previous team's items during a team
switch; change the branch so that when selectedTeamKey is set it filters
localIssues by that key (use filterIssuesByTeam(localIssues, selectedTeamKey))
instead of returning all items, otherwise fall back to filtering by
selectedTeam?.key; update the useMemo dependency list to include selectedTeamKey
and selectedTeam?.key as needed and keep references to visibleIssues,
selectedTeamKey, selectedTeam, localIssues, and filterIssuesByTeam to locate the
change.
```diff
 async function handleDragEnd(event: DragEndEvent) {
   const issueId = String(event.active.id);
-  const targetStateId = event.over ? String(event.over.id) : null;
+  const targetStateId = getDropTargetStateId(event) ?? dragPreviewStateId;
+  const originStateId = dragOriginStateId;

   setActiveIssueId(null);
+  setDragPreviewStateId(null);
+  setDragOriginStateId(null);

   if (!targetStateId) {
     return;
   }

   const issue = localIssues.find((item) => item.id === issueId);

-  if (!issue || issue.state.id === targetStateId) {
+  // Compare against the origin state to avoid skipping the mutation after
+  // handleDragOver has already optimistically moved the card.
+  if (!issue || originStateId === targetStateId) {
     return;
   }

   const targetState =
     selectedTeam?.states.nodes.find((item) => item.id === targetStateId) ?? null;

   if (!targetState) {
     return;
   }

   try {
-    await persistStateChange(issue, targetStateId);
+    // Call persistIssueUpdate directly instead of persistStateChange because
+    // handleDragOver already updated issue.state optimistically, which would
+    // cause persistStateChange to skip the mutation.
+    await persistIssueUpdate(issue, { stateId: targetStateId }, (current) => ({
+      ...current,
+      state: targetState,
+    }));
   } catch {
     // error state already handled
   }
```
Restore drag state from a drag-start snapshot, not from the previewed/query state.
Line 517 persists after handleDragOver has already moved the card optimistically, so a failed mutation rolls back to the already-moved list. Line 760 then resets from data?.issues.nodes, which can wipe out unrelated local edits that were never written back to the query result. Capture localIssues at drag start and reuse that snapshot for both cancel and failure paths.
Also applies to: 756-761
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/web/src/routes/BoardPage.tsx` around lines 485 - 523, The rollback
currently reads from the live query/preview state after handleDragOver; instead
capture a snapshot of localIssues when the drag begins and restore that snapshot
on cancel/failure. Add a drag-start snapshot (e.g., dragStartSnapshot state or
ref set in handleDragStart) that stores the current localIssues array, then in
handleDragEnd (and the cancel path that currently reads data?.issues.nodes) use
that snapshot to reset local issues instead of reading from
dragPreviewStateId/query data; ensure persistIssueUpdate success still updates
normally and clear the snapshot on completion.
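The snapshot-on-drag-start pattern the prompt describes can be sketched in isolation like this; the `IssueLike` type and `createDragSession` name are illustrative, and the real component would hold the snapshot in a React ref rather than a closure:

```typescript
// Hypothetical helper demonstrating snapshot-based rollback for optimistic
// drag-and-drop. Not the actual BoardPage implementation.
interface IssueLike {
  id: string;
  stateId: string;
}

export function createDragSession(issues: IssueLike[]) {
  // Capture an immutable deep-enough copy at drag start, before handleDragOver
  // performs any optimistic moves.
  const snapshot = issues.map((issue) => ({ ...issue }));
  return {
    // On cancel or a failed mutation, restore the pre-drag list instead of
    // re-deriving it from the previewed local state or the query result,
    // which may have already been mutated or may omit unrelated local edits.
    rollback(): IssueLike[] {
      return snapshot.map((issue) => ({ ...issue }));
    },
  };
}
```

In the component, `createDragSession` would be called in `handleDragStart` and the session stored in a ref, cleared once the mutation settles.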
```ts
    command:
      `docker compose up -d db && ` +
      `for attempt in $(seq 1 20); do ` +
      `DATABASE_URL="${databaseUrl}" pnpm --filter @involute/server exec prisma db push --force-reset --skip-generate && break; ` +
      `if [ "$attempt" = "20" ]; then exit 1; fi; ` +
      `sleep 3; ` +
      `done && ` +
      `DATABASE_URL="${databaseUrl}" pnpm --filter @involute/server exec prisma db seed && ` +
      `DATABASE_URL="${databaseUrl}" AUTH_TOKEN="${authToken}" PORT=4200 pnpm --filter @involute/server exec tsx src/index.ts`,
    url: 'http://127.0.0.1:4200/health',
    reuseExistingServer: !process.env.CI,
    timeout: 120_000,
  },
  {
    command: `VITE_INVOLUTE_AUTH_TOKEN="${authToken}" VITE_INVOLUTE_GRAPHQL_URL="http://127.0.0.1:4200/graphql" pnpm --filter @involute/web exec vite --host 127.0.0.1 --port 4201`,
    url: 'http://127.0.0.1:4201',
    reuseExistingServer: !process.env.CI,
```
🧩 Analysis chain

🌐 Web query:

> In Playwright's `webServer` configuration, when `reuseExistingServer` is true and the configured `url` already responds, does Playwright skip running the `command`?

💡 Result:

Yes. When `reuseExistingServer` is true and the configured `url` already responds, Playwright reuses the existing server and skips running the `command` to start a new one.
Citations:
- 1: https://playwright.dev/docs/test-webserver
- 2: [BUG] webServer.reuseExistingServer is broken on node 18 microsoft/playwright#24101
Make server reuse opt-in for this seeded E2E setup.
When reuseExistingServer: !process.env.CI is true (locally), Playwright skips the command entirely if ports 4200/4201 are already responding. This bypasses the DB reset, seed, and fixed auth/GraphQL env setup, causing tests to run against whatever stack is running on those ports rather than the known test fixture.
Change both reuseExistingServer settings to require explicit opt-in:
🔧 Suggested change
- reuseExistingServer: !process.env.CI,
+ reuseExistingServer: process.env.PLAYWRIGHT_REUSE_EXISTING_SERVER === '1',

Apply the same change to line 36 for the second webServer entry.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@playwright.config.ts` around lines 20 - 36, The Playwright webServer entries
currently set reuseExistingServer: !process.env.CI which lets local machines
accidentally skip running the seeded test stack; update both webServer configs
(the server entry that runs the backend command sequence and the one that runs
the Vite dev server) to require an explicit opt-in environment variable instead
of defaulting to true locally—e.g., replace the boolean expression with a check
like process.env.REUSE_EXISTING_SERVER === 'true' (or another explicit env flag
name you prefer) so tests only reuse an existing server when that flag is
intentionally set.
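The opt-in check above can be factored as a pure function so it is testable without a Playwright run. The flag name PLAYWRIGHT_REUSE_EXISTING_SERVER follows the suggested change; any explicit env flag would work:

```typescript
// Sketch of the explicit opt-in: reuse only when the flag is exactly '1'.
function shouldReuseExistingServer(
  env: Record<string, string | undefined>,
): boolean {
  return env.PLAYWRIGHT_REUSE_EXISTING_SERVER === '1';
}
```

With this, local runs default to rebuilding the seeded stack, and reuse only happens when the developer sets the flag deliberately.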
Actionable comments posted: 3
♻️ Duplicate comments (1)
Dockerfile (1)
1-60: ⚠️ Potential issue | 🟠 Major
Runtime stages are still running as root.
This was flagged earlier and is still unresolved. Please add a non-root user and switch the server, web-dev, and cli runtime stages to it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Dockerfile` around lines 1 - 60, Add a non-root user in the base image and switch the runtime stages (server, web-dev, cli) to that user: create a user/group (e.g., INVOLUTE_USER) and home directory in the base stage, chown application files and the docker-entrypoint.sh (and web static files copied into the nginx stage) to that UID/GID, and then add USER <user> (or USER <uid>:<gid>) at the start of the server, web-dev and cli stages so ENTRYPOINT/CMD run as non-root; ensure any steps that require root (apt, pnpm install, copying from build stages) remain before switching user and that docker-entrypoint.sh retains executable permission for the non-root user.
🧹 Nitpick comments (3)
packages/web/src/routes/BoardPage.tsx (1)
207-211: Consider whether all three sensor types are necessary.
PointerSensor already handles both mouse and touch events via the Pointer Events API. Having MouseSensor and TouchSensor alongside it may cause redundant event handling on devices that support pointer events. If the goal is broader compatibility with older browsers, this is fine; otherwise, PointerSensor alone may suffice.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/web/src/routes/BoardPage.tsx` around lines 207 - 211, The sensors list registers PointerSensor, MouseSensor and TouchSensor which can duplicate events on platforms supporting Pointer Events; update useSensors in BoardPage.tsx to avoid redundant handlers by using only useSensor(PointerSensor, { activationConstraint: { distance: DND_ACTIVATION_DISTANCE } }) when the environment supports Pointer Events, and fall back to registering MouseSensor and TouchSensor only when Pointer Events are unavailable (e.g., check for typeof window !== "undefined" && "PointerEvent" in window). Adjust the sensors variable to conditionally include the fallback sensors or remove MouseSensor/TouchSensor entirely if you decide to rely on PointerSensor across supported browsers.
packages/web/src/App.test.tsx (1)
28-47: Mutation mock setup is duplicated in hoisted and beforeEach.
The same useMutation implementation logic appears both in vi.hoisted() (lines 30-46) and in beforeEach (lines 81-97). The hoisted version provides initial values, but the beforeEach resets it with identical logic. Consider extracting the shared implementation to a helper function to reduce duplication.
♻️ Suggested refactor
+function createMutationMock(document: unknown) {
+  const source = String(document);
+
+  if (source.includes('mutation CommentCreate')) {
+    return [vi.fn().mockResolvedValue({ data: { commentCreate: { success: true, comment: null } } })];
+  }
+
+  if (source.includes('mutation CommentDelete')) {
+    return [vi.fn().mockResolvedValue({ data: { commentDelete: { success: true, commentId: 'comment-1' } } })];
+  }
+
+  if (source.includes('mutation IssueDelete')) {
+    return [vi.fn().mockResolvedValue({ data: { issueDelete: { success: true, issueId: 'issue-1' } } })];
+  }
+
+  return [vi.fn()];
+}
+
 const apolloMocks = vi.hoisted(() => ({
   useQuery: vi.fn(),
-  useMutation: vi.fn((document) => {
-    // ... duplicated logic
-  }),
+  useMutation: vi.fn(createMutationMock),
 }));

Also applies to: 76-98
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/web/src/App.test.tsx` around lines 28 - 47, The test duplicates the useMutation mock logic in vi.hoisted() (apolloMocks) and in the beforeEach; extract that logic into a single helper function (e.g., buildUseMutationMock or createApolloMutationMocks) and have both apolloMocks and the beforeEach call that helper to get the same mock implementation for useMutation, keeping the same branch checks for 'mutation CommentCreate', 'mutation CommentDelete', and 'mutation IssueDelete' and returning the same mocked resolved values; update references to useMutation in apolloMocks and the beforeEach to call the helper instead of duplicating the code.
README.md (1)
34-37: Consider making the browser-open step OS-neutral.
Line [36] uses open, which is macOS-only. A neutral instruction (e.g., "open this URL in your browser") keeps docs portable.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 34 - 37, The README currently uses the macOS-only command "open http://localhost:4201"; change this to an OS-neutral instruction: either replace the literal command with a human instruction like "open http://localhost:4201 in your browser" or provide cross-platform alternatives (e.g., mention "start" for Windows, "xdg-open" for Linux, or use a portable command such as "python -m webbrowser http://localhost:4201") so the documentation works for all environments and the snippet referencing "open http://localhost:4201" is removed or augmented.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Dockerfile`:
- Around line 46-53: Update the Dockerfile to expose the same port nginx
actually listens on: change the EXPOSE directive from 4201 to 80 so it matches
the listen port defined in packages/web/nginx.conf; verify the CMD ["nginx",
"-g", "daemon off;"] and nginx.conf remain consistent after this change.
In `@packages/server/docker-entrypoint.sh`:
- Line 4: Add an opt-in AUTO_DB_PUSH env flag (default "false") and gate the
pnpm --filter `@involute/server` exec prisma db push --skip-generate call behind
it so schema push only runs when AUTO_DB_PUSH is "true"; follow the existing
SEED_DATABASE pattern for parsing the env var, e.g., check
AUTO_DB_PUSH="${AUTO_DB_PUSH:-false}" and run the prisma db push line only if
AUTO_DB_PUSH equals "true", otherwise skip/NO-OP (do not replace with prisma
migrate deploy).
In `@README.md`:
- Line 142: The README contains machine-specific absolute paths in the links for
docs (the link texts [docs/vision.md] and [docs/milestones.md]); replace those
absolute file paths with repo-relative paths (e.g., ./docs/vision.md and
./docs/milestones.md or simply docs/vision.md and docs/milestones.md) so the
links work for everyone, updating the link targets in the README.md entry that
currently references the absolute local paths.
---
Duplicate comments:
In `@Dockerfile`:
- Around line 1-60: Add a non-root user in the base image and switch the runtime
stages (server, web-dev, cli) to that user: create a user/group (e.g.,
INVOLUTE_USER) and home directory in the base stage, chown application files and
the docker-entrypoint.sh (and web static files copied into the nginx stage) to
that UID/GID, and then add USER <user> (or USER <uid>:<gid>) at the start of the
server, web-dev and cli stages so ENTRYPOINT/CMD run as non-root; ensure any
steps that require root (apt, pnpm install, copying from build stages) remain
before switching user and that docker-entrypoint.sh retains executable
permission for the non-root user.
---
Nitpick comments:
In `@packages/web/src/App.test.tsx`:
- Around line 28-47: The test duplicates the useMutation mock logic in
vi.hoisted() (apolloMocks) and in the beforeEach; extract that logic into a
single helper function (e.g., buildUseMutationMock or createApolloMutationMocks)
and have both apolloMocks and the beforeEach call that helper to get the same
mock implementation for useMutation, keeping the same branch checks for
'mutation CommentCreate', 'mutation CommentDelete', and 'mutation IssueDelete'
and returning the same mocked resolved values; update references to useMutation
in apolloMocks and the beforeEach to call the helper instead of duplicating the
code.
In `@packages/web/src/routes/BoardPage.tsx`:
- Around line 207-211: The sensors list registers PointerSensor, MouseSensor and
TouchSensor which can duplicate events on platforms supporting Pointer Events;
update useSensors in BoardPage.tsx to avoid redundant handlers by using only
useSensor(PointerSensor, { activationConstraint: { distance:
DND_ACTIVATION_DISTANCE } }) when the environment supports Pointer Events, and
fall back to registering MouseSensor and TouchSensor only when Pointer Events
are unavailable (e.g., check for typeof window !== "undefined" && "PointerEvent"
in window). Adjust the sensors variable to conditionally include the fallback
sensors or remove MouseSensor/TouchSensor entirely if you decide to rely on
PointerSensor across supported browsers.
In `@README.md`:
- Around line 34-37: The README currently uses the macOS-only command "open
http://localhost:4201"; change this to an OS-neutral instruction: either replace
the literal command with a human instruction like "open http://localhost:4201 in
your browser" or provide cross-platform alternatives (e.g., mention "start" for
Windows, "xdg-open" for Linux, or use a portable command such as "python -m
webbrowser http://localhost:4201") so the documentation works for all
environments and the snippet referencing "open http://localhost:4201" is removed
or augmented.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1fc035c0-0ec3-47e7-9eb9-cd44838caed9
📒 Files selected for processing (13)
- .factory/validation/cli/scrutiny/synthesis.json
- .gitignore
- .tmp/.gitkeep
- Dockerfile
- README.md
- docker-compose.yml
- packages/server/docker-entrypoint.sh
- packages/web/nginx.conf
- packages/web/src/App.test.tsx
- packages/web/src/board/utils.ts
- packages/web/src/components/Column.tsx
- packages/web/src/components/IssueCard.tsx
- packages/web/src/routes/BoardPage.tsx
✅ Files skipped from review due to trivial changes (4)
- .tmp/.gitkeep
- packages/web/nginx.conf
- .gitignore
- .factory/validation/cli/scrutiny/synthesis.json
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/web/src/components/Column.tsx
FROM nginx:1.27-alpine AS web

COPY packages/web/nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=web-build /app/packages/web/dist /usr/share/nginx/html

EXPOSE 4201

CMD ["nginx", "-g", "daemon off;"]
Expose the same port nginx actually listens on.
Line [51] exposes 4201, but packages/web/nginx.conf listens on 80. The web image won’t serve on the advertised container port.
🔧 Proposed fix
FROM nginx:1.27-alpine AS web
COPY packages/web/nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=web-build /app/packages/web/dist /usr/share/nginx/html
-EXPOSE 4201
+EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
FROM nginx:1.27-alpine AS web
COPY packages/web/nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=web-build /app/packages/web/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Dockerfile` around lines 46 - 53, Update the Dockerfile to expose the same
port nginx actually listens on: change the EXPOSE directive from 4201 to 80 so
it matches the listen port defined in packages/web/nginx.conf; verify the CMD
["nginx", "-g", "daemon off;"] and nginx.conf remain consistent after this
change.
#!/bin/sh
set -e

pnpm --filter @involute/server exec prisma db push --skip-generate
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify current startup strategy and whether migration files exist.
rg -n "prisma db push|prisma migrate deploy|prisma db seed" packages/server/docker-entrypoint.sh Dockerfile README.md
fd -t d migrations packages/server/prisma
Repository: fakechris/Involute
🏁 Script executed:
# Check full docker-entrypoint.sh content
cat -n packages/server/docker-entrypoint.sh
🏁 Script executed:
# Check if migrations directory exists (direct check)
ls -la packages/server/prisma/ 2>/dev/null | head -20
🏁 Script executed:
# Look for prisma configuration
find packages/server -name "prisma.schema" -o -name ".prismarc*" -o -name "prisma.json" 2>/dev/null
🏁 Script executed:
# Search for any environment variable usage in docker context
rg -n "AUTO_|PRISMA_|DB_" packages/server/docker-entrypoint.sh Dockerfile
🏁 Script executed:
# Search for migration or deployment docs
rg -i "migration|deploy" packages/server/README.md README.md docs/ --type md 2>/dev/null | head -20
🏁 Script executed:
# Check prisma schema for hints about approach
cat -n packages/server/prisma/schema.prisma | head -30
🏁 Script executed:
# Check if db push is configured in prisma config
rg -A5 -B5 "prisma" packages/server/package.json | head -40
Gate db push at startup behind an opt-in env flag to avoid unintended schema changes in production.
Line 4 applies schema changes on every boot. The SEED_DATABASE pattern (line 6) already shows opt-in gating is used in this startup script. Add an AUTO_DB_PUSH flag (defaulting to false) to control this behavior.
Note: The codebase uses Prisma's shadow database strategy (db push), not explicit migrations, so the proposed prisma migrate deploy should be replaced with a no-op or skipped entirely when the flag is disabled.
🔧 Suggested approach
-pnpm --filter @involute/server exec prisma db push --skip-generate
+if [ "${AUTO_DB_PUSH:-false}" = "true" ]; then
+  pnpm --filter @involute/server exec prisma db push --skip-generate
+fi

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if [ "${AUTO_DB_PUSH:-false}" = "true" ]; then
  pnpm --filter @involute/server exec prisma db push --skip-generate
fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/docker-entrypoint.sh` at line 4, Add an opt-in AUTO_DB_PUSH
env flag (default "false") and gate the pnpm --filter `@involute/server` exec
prisma db push --skip-generate call behind it so schema push only runs when
AUTO_DB_PUSH is "true"; follow the existing SEED_DATABASE pattern for parsing
the env var, e.g., check AUTO_DB_PUSH="${AUTO_DB_PUSH:-false}" and run the
prisma db push line only if AUTO_DB_PUSH equals "true", otherwise skip/NO-OP (do
not replace with prisma migrate deploy).
const updateInput: Record<string, unknown> = {};

if (input.state) {
  const state = (await fetchTeamStates(identifier.split('-')[0] ?? 'INV')).find(
🟡 Team key extracted from raw user input instead of resolved issue identifier in updateIssueViaCli
When --state is provided to issues update, the team key is extracted via identifier.split('-')[0] where identifier is the raw user input (packages/cli/src/index.ts:586). Since the server's issue resolver now accepts both UUIDs and identifiers (packages/server/src/schema.ts:272-284), a user can pass a UUID (e.g., issues update a99c0654-... --state Done). In that case, identifier.split('-')[0] yields a UUID fragment (e.g., a99c0654) instead of the team key. fetchTeamStates then queries for a non-existent team, returns an empty array, and the command fails with a misleading "State not found: Done" error. The already-resolved issue variable (line 577) has the correct issue.identifier (e.g., "INV-1") available.
- const state = (await fetchTeamStates(identifier.split('-')[0] ?? 'INV')).find(
+ const state = (await fetchTeamStates(issue.identifier.split('-')[0] ?? 'INV')).find(
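The failure mode can be sketched with an illustrative guard (not the actual CLI code): a UUID's first hyphen-delimited segment is not a team key, so the team key must come from the resolved issue.identifier, never the raw argument.

```typescript
// Hypothetical helper showing why splitting the raw argument is unsafe.
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function teamKeyFromIdentifier(identifier: string): string | null {
  // A UUID argument carries no team key; the caller must resolve the
  // issue first and split the resolved identifier (e.g., "INV-1") instead.
  if (UUID_RE.test(identifier)) return null;
  return identifier.split('-')[0] ?? null;
}
```

Passing a UUID through the naive split would yield a fragment like "a99c0654", and fetchTeamStates would then query a non-existent team.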
Actionable comments posted: 9
🧹 Nitpick comments (2)
packages/web/src/App.comments.test.tsx (1)
276-342: Restore window.confirm in a finally block.
If anything throws before Line 342, the spy remains installed and can contaminate later tests.
Example tightening
 const confirmSpy = vi.spyOn(window, 'confirm').mockReturnValue(true);
+try {
-  apolloMocks.useMutation.mockImplementation((document) => {
+  apolloMocks.useMutation.mockImplementation((document) => {
     const source =
       typeof document === 'string'
         ? document
@@
-  await waitFor(() =>
-    expect(within(drawer).queryByText('Disposable comment')).not.toBeInTheDocument(),
-  );
-
-  confirmSpy.mockRestore();
+  await waitFor(() =>
+    expect(within(drawer).queryByText('Disposable comment')).not.toBeInTheDocument(),
+  );
+} finally {
+  confirmSpy.mockRestore();
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/web/src/App.comments.test.tsx` around lines 276 - 342, The test installs a spy with vi.spyOn(window, 'confirm') but only calls confirmSpy.mockRestore() at the end, so if an error is thrown the spy remains; wrap the test interaction and assertions that use confirmSpy in a try/finally (or move the restore to an afterEach) and call confirmSpy.mockRestore() inside the finally block to guarantee restoration of window.confirm (reference the confirmSpy variable created by vi.spyOn(window, 'confirm') and the test steps that click Delete comment and await deleteComment).
packages/web/src/test/app-test-helpers.tsx (1)
81-87: Restore global spies in the shared teardown.
This helper clears DOM/env state but leaves spies active. If a test exits before its manual restore runs (for example the window.confirm spy in packages/web/src/App.issue-meta.test.tsx on Line 220), later cases can inherit the mocked global.
🔧 Suggested fix
 afterEach(() => {
+  vi.restoreAllMocks();
   cleanup();
   document.body.innerHTML = '';
   window.localStorage.clear();
   window.history.replaceState({}, '', '/');
   vi.unstubAllEnvs();
 });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/web/src/test/app-test-helpers.tsx` around lines 81 - 87, The shared afterEach teardown clears DOM and env state but doesn't restore mocked globals, so add a call to restore spies (use vi.restoreAllMocks()) inside the afterEach block (near the existing vi.unstubAllEnvs() call) to ensure any global spies (e.g., window.confirm) are returned to their original implementations between tests; update the afterEach in this file to call vi.restoreAllMocks() so global spies are restored reliably.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/web/src/App.board-state.test.tsx`:
- Around line 36-43: After asserting mutate was called and that 'INV-1' appears
in the target column, also assert it was removed from the source column: locate
the source column via screen.getByTestId('column-Backlog') and use
within(...).queryByText('INV-1') (or await waitFor + within(...).queryByText) to
assert the card is not present (e.g., expect(...).not.toBeInTheDocument()). This
ensures the card was moved instead of copied; reference the existing test
variables/methods mutate, screen.getByTestId, and within to implement the check.
In `@packages/web/src/App.board.test.tsx`:
- Around line 27-36: The test "renders issue cards in the matching board
columns" currently only asserts that texts exist somewhere; update it to scope
assertions to each board column by importing within from `@testing-library/react`
and querying the column containers (e.g., find the column headers like 'Backlog'
and 'Ready' via screen.getByText or getByRole with accessible name) then use
within(columnElement).getByText/findByText to assert 'INV-1' and its details are
inside the Backlog column and 'INV-2' and its details are inside the Ready
column. Keep using renderTestApp() and await screen.findByText for async
elements but replace global screen.getByText checks with within-scoped checks on
the column elements.
In `@packages/web/src/App.comments.test.tsx`:
- Around line 258-264: After triggering the title change (titleInput) and
asserting updateIssue was called, wait for the drawer to reconcile the returned
issue before asserting comment presence: instead of immediately checking
within(drawer).getByText('Newest comment'), add an explicit wait for the DOM
update (e.g. wait for the new title or another stable post-update marker in the
drawer) using waitFor or findBy* so the test asserts against the reconciled
drawer state; reference updateIssue, drawer, titleInput, and the 'Newest
comment' text when adding this extra wait.
In `@packages/web/src/App.drag-utils.test.tsx`:
- Around line 1-2: Tests rely on environment variables but don't stub them, so
getAuthTokenDetails and getGraphqlUrlDetails may pick up
VITE_INVOLUTE_AUTH_TOKEN/VITE_INVOLUTE_GRAPHQL_URL from the machine and
invalidate the assertions; before asserting localStorage/default fallbacks,
explicitly set or delete process.env.VITE_INVOLUTE_AUTH_TOKEN and
process.env.VITE_INVOLUTE_GRAPHQL_URL to the desired test values (e.g.
undefined/empty or specific test strings) and ensure you restore them after the
test, and similarly control localStorage via localStorage.setItem/removeItem so
the test for precedence between getAuthTokenDetails and localStorage and the
fallback in getGraphqlUrlDetails are deterministic.
In `@packages/web/src/App.error-states.test.tsx`:
- Around line 48-75: Pin the VITE_INVOLUTE_AUTH_TOKEN env var to an empty string
in these tests so getAuthTokenDetails() can't be satisfied by a CI/dev shell
token: in both the "shows a missing-token bootstrap error..." and "shows a
distinct dev-default-token..." cases call vi.stubEnv('VITE_INVOLUTE_AUTH_TOKEN',
'') before invoking renderTestApp (and keep the existing vi.stubEnv('DEV',
false) or set DEV as needed for the dev-fallback case); this forces the code
paths in getAuthTokenDetails() to hit the "missing" and "dev-default" branches
respectively.
In `@packages/web/src/App.issue-detail-edit.test.tsx`:
- Around line 52-65: The test closes/reopens the issue drawer before the UI has
necessarily updated to the saved title; instead of only asserting mutate was
called, wait for the drawer input to show the saved value (e.g., use
waitFor/assert that within(drawer).getByLabelText('Issue title') hasValue
'Enter-saved title') before firing the Close click and then reopen; apply the
same additional wait in the second "Persisted title" reopen flow that appears
around the other occurrence (the block near lines 217-230) so both flows ensure
the post-mutation UI update is visible before closing and reopening.
In `@packages/web/src/App.routes.test.tsx`:
- Around line 43-57: The test currently uses
expect(apolloMocks.useQuery).toHaveBeenCalledWith(...) which can match any call;
update it to assert the first call specifically by using either
expect(apolloMocks.useQuery.mock.calls[0][0]) / [1] as appropriate or use Jest's
toHaveBeenNthCalledWith(1, ...) to lock the assertion to the initial invocation;
target apolloMocks.useQuery and keep the same expectation object
(expect.anything(), expect.objectContaining({...})) but apply it to the first
call so the test verifies the initial query includes variables.first: 200 and
the SON team filter.
In `@packages/web/src/test/app-test-helpers.tsx`:
- Around line 8-23: buildDefaultUseMutationMock currently does String(document)
which yields "[object Object]" for gql DocumentNode inputs so the mutation name
checks never match; update the function to extract the operation text from
DocumentNode inputs (e.g. use document.loc?.source?.body when document is an
object, or call print(document) from 'graphql' to serialize the AST) and fall
back to using the string directly if it's already a string; ensure the variable
you check (previously named source) uses this extracted string so branches for
'mutation CommentCreate', 'mutation CommentDelete', and 'mutation IssueDelete'
in buildDefaultUseMutationMock match the real DocumentNode gql constants used
elsewhere (like in packages/web/src/board/queries.ts).
In `@README.md`:
- Around line 98-102: Update the README instructions so the CLI is built before
invoking dist/index.js: insert a step that runs the build for the package (e.g.,
run "pnpm --filter `@involute/cli` build") immediately before the existing command
that runs dist/index.js, so dist/index.js exists on a fresh clone; update the
example block containing dist/index.js and the pnpm exec command accordingly.
---
Nitpick comments:
In `@packages/web/src/App.comments.test.tsx`:
- Around line 276-342: The test installs a spy with vi.spyOn(window, 'confirm')
but only calls confirmSpy.mockRestore() at the end, so if an error is thrown the
spy remains; wrap the test interaction and assertions that use confirmSpy in a
try/finally (or move the restore to an afterEach) and call
confirmSpy.mockRestore() inside the finally block to guarantee restoration of
window.confirm (reference the confirmSpy variable created by vi.spyOn(window,
'confirm') and the test steps that click Delete comment and await
deleteComment).
In `@packages/web/src/test/app-test-helpers.tsx`:
- Around line 81-87: The shared afterEach teardown clears DOM and env state but
doesn't restore mocked globals, so add a call to restore spies (use
vi.restoreAllMocks()) inside the afterEach block (near the existing
vi.unstubAllEnvs() call) to ensure any global spies (e.g., window.confirm) are
returned to their original implementations between tests; update the afterEach
in this file to call vi.restoreAllMocks() so global spies are restored reliably.
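The DocumentNode handling suggested for app-test-helpers.tsx above can be sketched as a small extractor. This is an illustration under the review's assumption that gql tagged documents expose their raw text at loc.source.body; the function name operationSource is hypothetical:

```typescript
// Accept either a raw operation string or a DocumentNode-like object.
type DocLike = string | { loc?: { source?: { body?: string } } };

function operationSource(document: DocLike): string {
  // Strings pass through; AST objects yield their original source text,
  // so String(document) never degrades to "[object Object]".
  if (typeof document === 'string') return document;
  return document.loc?.source?.body ?? '';
}
```

A mock can then branch on operationSource(document).includes('mutation CommentCreate') and match real gql constants as well as plain strings.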
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0e1c8a39-0bdb-4843-9150-5508e3386c91
📒 Files selected for processing (16)
- README.md
- packages/web/package.json
- packages/web/src/App.board-drawer.test.tsx
- packages/web/src/App.board-sensors.test.tsx
- packages/web/src/App.board-state.test.tsx
- packages/web/src/App.board.test.tsx
- packages/web/src/App.comments.test.tsx
- packages/web/src/App.drag-utils.test.tsx
- packages/web/src/App.error-states.test.tsx
- packages/web/src/App.issue-detail-edit.test.tsx
- packages/web/src/App.issue-meta.test.tsx
- packages/web/src/App.navigation.test.tsx
- packages/web/src/App.routes.test.tsx
- packages/web/src/App.test.tsx
- packages/web/src/test/app-test-helpers.tsx
- packages/web/vitest.config.ts
💤 Files with no reviewable changes (1)
- packages/web/src/App.test.tsx
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/web/vitest.config.ts
- packages/web/package.json
expect(mutate).toHaveBeenCalledWith({
  variables: {
    id: 'issue-1',
    input: { stateId: 'state-progress' },
  },
});

expect(await within(screen.getByTestId('column-In Progress')).findByText('INV-1')).toBeInTheDocument();
Also verify the card leaves Backlog.
Right now this passes if INV-1 is copied into In Progress but never removed from its original column.
Example tightening
 expect(await within(screen.getByTestId('column-In Progress')).findByText('INV-1')).toBeInTheDocument();
+await waitFor(() =>
+  expect(within(screen.getByTestId('column-Backlog')).queryByText('INV-1')).not.toBeInTheDocument(),
+);

📝 Committable suggestion
+ );📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
expect(mutate).toHaveBeenCalledWith({
  variables: {
    id: 'issue-1',
    input: { stateId: 'state-progress' },
  },
});

expect(await within(screen.getByTestId('column-In Progress')).findByText('INV-1')).toBeInTheDocument();
await waitFor(() =>
  expect(within(screen.getByTestId('column-Backlog')).queryByText('INV-1')).not.toBeInTheDocument(),
);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/web/src/App.board-state.test.tsx` around lines 36 - 43, After
asserting mutate was called and that 'INV-1' appears in the target column, also
assert it was removed from the source column: locate the source column via
screen.getByTestId('column-Backlog') and use within(...).queryByText('INV-1')
(or await waitFor + within(...).queryByText) to assert the card is not present
(e.g., expect(...).not.toBeInTheDocument()). This ensures the card was moved
instead of copied; reference the existing test variables/methods mutate,
screen.getByTestId, and within to implement the check.
it('renders issue cards in the matching board columns', async () => {
  renderTestApp();

  expect(await screen.findByText('INV-1')).toBeInTheDocument();
  expect(screen.getByText('Backlog item')).toBeInTheDocument();
  expect(screen.getByText('task')).toBeInTheDocument();
  expect(screen.getByText('Admin')).toBeInTheDocument();
  expect(screen.getByText('INV-2')).toBeInTheDocument();
  expect(screen.getByText('Ready item')).toBeInTheDocument();
});
Scope these assertions to the actual columns.
This only proves the cards exist somewhere on the page. It still passes if INV-1 and INV-2 render under the wrong columns, which is exactly the behavior the test name claims to lock down.
Example tightening:

```diff
-expect(await screen.findByText('INV-1')).toBeInTheDocument();
-expect(screen.getByText('Backlog item')).toBeInTheDocument();
-expect(screen.getByText('task')).toBeInTheDocument();
-expect(screen.getByText('Admin')).toBeInTheDocument();
-expect(screen.getByText('INV-2')).toBeInTheDocument();
-expect(screen.getByText('Ready item')).toBeInTheDocument();
+const backlogColumn = await screen.findByTestId('column-Backlog');
+const readyColumn = screen.getByTestId('column-Ready');
+
+expect(within(backlogColumn).getByText('INV-1')).toBeInTheDocument();
+expect(within(backlogColumn).getByText('Backlog item')).toBeInTheDocument();
+expect(within(readyColumn).getByText('INV-2')).toBeInTheDocument();
+expect(within(readyColumn).getByText('Ready item')).toBeInTheDocument();
```

Also add `within` to the RTL import.
In `packages/web/src/App.board.test.tsx` (lines 27-36): import `within` from `@testing-library/react`, locate each column container, and scope the assertions so 'INV-1' and its details are checked inside the Backlog column and 'INV-2' and its details inside the Ready column.
```tsx
const titleInput = within(drawer).getByLabelText('Issue title');
fireEvent.focus(titleInput);
fireEvent.change(titleInput, { target: { value: 'Retitled issue' } });
fireEvent.keyDown(titleInput, { key: 'Enter', code: 'Enter' });

await waitFor(() => expect(updateIssue).toHaveBeenCalled());
expect(within(drawer).getByText('Newest comment')).toBeInTheDocument();
```
Wait for the issue update to be reflected before checking comment preservation.
updateIssue being called is not the same as the drawer having processed the returned issue. If the later reconciliation drops comments asynchronously, this can still pass against the pre-update DOM.
Example tightening:

```diff
 await waitFor(() => expect(updateIssue).toHaveBeenCalled());
+await waitFor(() =>
+  expect(within(drawer).getByLabelText('Issue title')).toHaveValue('Retitled issue'),
+);
 expect(within(drawer).getByText('Newest comment')).toBeInTheDocument();
```
In `packages/web/src/App.comments.test.tsx` (lines 258-264): after asserting `updateIssue` was called, wait for the drawer to reflect the reconciled issue (for example, the new title) before asserting that 'Newest comment' is still present.
```tsx
import { describe, expect, it } from 'vitest';
import type { DragEndEvent } from '@dnd-kit/core';
```
Stub the VITE_* env vars before asserting localStorage/default fallbacks.
getAuthTokenDetails() prefers VITE_INVOLUTE_AUTH_TOKEN over localStorage/dev-default, and getGraphqlUrlDetails() prefers VITE_INVOLUTE_GRAPHQL_URL over the hardcoded default. As written, these cases are machine-dependent, and the localStorage auth case on Lines 113-120 does not actually prove precedence when an env token is present.
🔧 Suggested fix:

```diff
-import { describe, expect, it } from 'vitest';
+import { describe, expect, it, vi } from 'vitest';

 it('prefers the runtime localStorage auth token when creating Apollo requests', () => {
+  vi.stubEnv('VITE_INVOLUTE_AUTH_TOKEN', '');
   window.localStorage.setItem('involute.authToken', 'runtime-token');

 it('falls back to the default dev auth token when no runtime token is configured', () => {
+  vi.stubEnv('VITE_INVOLUTE_AUTH_TOKEN', '');
+  vi.stubEnv('DEV', true);
   expect(getAuthToken()).toBe('changeme-set-your-token');

 it('keeps the normal env/default API URL path when no runtime override is set', () => {
+  vi.stubEnv('VITE_INVOLUTE_GRAPHQL_URL', '');
   expect(getGraphqlUrl()).toBe('http://localhost:4200/graphql');
```

Also applies to: lines 113-128, 151-157.
In `packages/web/src/App.drag-utils.test.tsx` (lines 1-2): before asserting the localStorage/default fallbacks, explicitly stub or clear `VITE_INVOLUTE_AUTH_TOKEN` and `VITE_INVOLUTE_GRAPHQL_URL` (restoring them afterwards), and control localStorage via `setItem`/`removeItem`, so the precedence and fallback assertions are deterministic.
```tsx
it('shows a missing-token bootstrap error when no runtime token exists outside dev fallback', async () => {
  vi.stubEnv('DEV', false);

  renderTestApp({
    error: new Error('Not authenticated'),
    loading: false,
  });

  expect(await screen.findByText('Runtime auth token missing')).toBeInTheDocument();
  expect(
    screen.getByText(
      'The board could not find a runtime auth token. Set `VITE_INVOLUTE_AUTH_TOKEN` or store the token in localStorage under `involute.authToken`, then reload.',
    ),
  ).toBeInTheDocument();
});

it('shows a distinct dev-default-token error when the fallback dev token is rejected', async () => {
  renderTestApp({
    error: new Error('Not authenticated'),
    loading: false,
  });

  expect(await screen.findByText('Authentication failed')).toBeInTheDocument();
  expect(
    screen.getByText(
      'The board used the default development token, but the API rejected it. Set `VITE_INVOLUTE_AUTH_TOKEN` or store a valid token in localStorage under `involute.authToken`, then reload.',
    ),
  ).toBeInTheDocument();
});
```
Pin the auth env in these fallback-state tests.
getAuthTokenDetails() checks VITE_INVOLUTE_AUTH_TOKEN before the DEV fallback, so a token injected by CI or a developer shell makes both cases miss the "missing" / "dev-default" branches and assert the wrong copy instead.
🔧 Suggested fix:

```diff
 it('shows a missing-token bootstrap error when no runtime token exists outside dev fallback', async () => {
+  vi.stubEnv('VITE_INVOLUTE_AUTH_TOKEN', '');
   vi.stubEnv('DEV', false);

 it('shows a distinct dev-default-token error when the fallback dev token is rejected', async () => {
+  vi.stubEnv('VITE_INVOLUTE_AUTH_TOKEN', '');
+  vi.stubEnv('DEV', true);
```
In `packages/web/src/App.error-states.test.tsx` (lines 48-75): stub `VITE_INVOLUTE_AUTH_TOKEN` to an empty string in both tests (keeping the existing `DEV` stubs as needed) so a CI or shell token cannot satisfy `getAuthTokenDetails()`, forcing the "missing" and "dev-default" branches respectively.
```tsx
await waitFor(() =>
  expect(mutate).toHaveBeenCalledWith({
    variables: {
      id: 'issue-1',
      input: { title: 'Enter-saved title' },
    },
  }),
);

fireEvent.click(within(drawer).getByRole('button', { name: 'Close' }));
fireEvent.click(await screen.findByRole('button', { name: 'Open INV-1' }));
drawer = await screen.findByLabelText('Issue detail drawer');

expect(within(drawer).getByLabelText('Issue title')).toHaveValue('Enter-saved title');
```
Wait for the saved title to hit the UI before closing and reopening.
toHaveBeenCalledWith only proves the mutation fired. These two flows assert persistence after reopen, so they should wait for the drawer input to reflect the saved value first; otherwise the close/reopen step can race the post-response state update.
Example tightening:

```diff
 await waitFor(() =>
   expect(mutate).toHaveBeenCalledWith({
     variables: {
       id: 'issue-1',
       input: { title: 'Enter-saved title' },
     },
   }),
 );
+await waitFor(() =>
+  expect(within(drawer).getByLabelText('Issue title')).toHaveValue('Enter-saved title'),
+);
 fireEvent.click(within(drawer).getByRole('button', { name: 'Close' }));
```

Apply the same extra wait in the later "Persisted title" reopen flow as well.
Also applies to: 217-230
In `packages/web/src/App.issue-detail-edit.test.tsx` (lines 52-65): before firing the Close click and reopening, wait for the drawer input to show the saved value rather than only asserting `mutate` was called; apply the same wait in the second "Persisted title" reopen flow (around lines 217-230).
```tsx
expect(apolloMocks.useQuery).toHaveBeenCalledWith(
  expect.anything(),
  expect.objectContaining({
    variables: {
      first: 200,
      filter: {
        team: {
          key: {
            eq: 'SON',
          },
        },
      },
    },
  }),
);
```
Assert the first useQuery call here, not just any matching call.
toHaveBeenCalledWith still passes if the app first queries without a team filter and only later re-runs with SON. For a test named “before the initial board query runs,” this needs to inspect mock.calls[0] or lock the call count/order.
Example tightening:

```diff
-expect(apolloMocks.useQuery).toHaveBeenCalledWith(
-  expect.anything(),
-  expect.objectContaining({
-    variables: {
-      first: 200,
-      filter: {
-        team: {
-          key: {
-            eq: 'SON',
-          },
-        },
-      },
-    },
-  }),
-);
+expect(apolloMocks.useQuery.mock.calls[0]?.[1]).toEqual(
+  expect.objectContaining({
+    variables: {
+      first: 200,
+      filter: {
+        team: {
+          key: {
+            eq: 'SON',
+          },
+        },
+      },
+    },
+  }),
+);
```
In `packages/web/src/App.routes.test.tsx` (lines 43-57): lock the assertion to the first `useQuery` invocation, via `apolloMocks.useQuery.mock.calls[0]` or `toHaveBeenNthCalledWith(1, ...)`, keeping the same expectation object, so the test verifies the initial query already includes `first: 200` and the SON team filter.
```tsx
function buildDefaultUseMutationMock(document: unknown) {
  const source = String(document);

  if (source.includes('mutation CommentCreate')) {
    return [vi.fn().mockResolvedValue({ data: { commentCreate: { success: true, comment: null } } })];
  }

  if (source.includes('mutation CommentDelete')) {
    return [vi.fn().mockResolvedValue({ data: { commentDelete: { success: true, commentId: 'comment-1' } } })];
  }

  if (source.includes('mutation IssueDelete')) {
    return [vi.fn().mockResolvedValue({ data: { issueDelete: { success: true, issueId: 'issue-1' } } })];
  }

  return [vi.fn()];
```
Extract GraphQL operation text from DocumentNode objects in mutation mock routing.
The gql constants in packages/web/src/board/queries.ts are DocumentNode objects, so String(document) converts them to [object Object] and these branches never match real operations. This is why packages/web/src/App.issue-meta.test.tsx re-implements the source extraction logic on lines 183-189 instead of using the shared default.
🔧 Suggested fix:

```diff
+function getDocumentSource(document: unknown): string {
+  if (typeof document === 'string') {
+    return document;
+  }
+  if (document && typeof document === 'object' && 'loc' in document) {
+    const sourceBody = (document as { loc?: { source?: { body?: string } } }).loc?.source?.body;
+    if (typeof sourceBody === 'string') {
+      return sourceBody;
+    }
+  }
+  return String(document);
+}
+
 function buildDefaultUseMutationMock(document: unknown) {
-  const source = String(document);
+  const source = getDocumentSource(document);
```
+ const source = getDocumentSource(document);📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| function buildDefaultUseMutationMock(document: unknown) { | |
| const source = String(document); | |
| if (source.includes('mutation CommentCreate')) { | |
| return [vi.fn().mockResolvedValue({ data: { commentCreate: { success: true, comment: null } } })]; | |
| } | |
| if (source.includes('mutation CommentDelete')) { | |
| return [vi.fn().mockResolvedValue({ data: { commentDelete: { success: true, commentId: 'comment-1' } } })]; | |
| } | |
| if (source.includes('mutation IssueDelete')) { | |
| return [vi.fn().mockResolvedValue({ data: { issueDelete: { success: true, issueId: 'issue-1' } } })]; | |
| } | |
| return [vi.fn()]; | |
| function getDocumentSource(document: unknown): string { | |
| if (typeof document === 'string') { | |
| return document; | |
| } | |
| if (document && typeof document === 'object' && 'loc' in document) { | |
| const sourceBody = (document as { loc?: { source?: { body?: string } } }).loc?.source?.body; | |
| if (typeof sourceBody === 'string') { | |
| return sourceBody; | |
| } | |
| } | |
| return String(document); | |
| } | |
| function buildDefaultUseMutationMock(document: unknown) { | |
| const source = getDocumentSource(document); | |
| if (source.includes('mutation CommentCreate')) { | |
| return [vi.fn().mockResolvedValue({ data: { commentCreate: { success: true, comment: null } } })]; | |
| } | |
| if (source.includes('mutation CommentDelete')) { | |
| return [vi.fn().mockResolvedValue({ data: { commentDelete: { success: true, commentId: 'comment-1' } } })]; | |
| } | |
| if (source.includes('mutation IssueDelete')) { | |
| return [vi.fn().mockResolvedValue({ data: { issueDelete: { success: true, issueId: 'issue-1' } } })]; | |
| } | |
| return [vi.fn()]; | |
| } |
In `packages/web/src/test/app-test-helpers.tsx` (lines 8-23): extract the operation text from DocumentNode inputs (use `document.loc?.source?.body`, or `print(document)` from 'graphql'), falling back to the string itself, so the `mutation CommentCreate` / `CommentDelete` / `IssueDelete` branches match the real gql constants defined in `packages/web/src/board/queries.ts`.
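The failure mode is easy to demonstrate outside the test suite. A minimal sketch, assuming the `loc.source.body` shape that graphql-tag attaches to parsed documents (the sample document below is illustrative, not taken from the repo):

```typescript
// Why String(DocumentNode) breaks the mutation-name routing, and how
// reading loc.source.body recovers the original operation text.
function getDocumentSource(document: unknown): string {
  if (typeof document === 'string') {
    return document;
  }
  if (document && typeof document === 'object' && 'loc' in document) {
    const body = (document as { loc?: { source?: { body?: string } } }).loc?.source?.body;
    if (typeof body === 'string') {
      return body;
    }
  }
  return String(document); // last-resort fallback, e.g. "[object Object]"
}

// A DocumentNode-shaped object, as gql`mutation CommentCreate ...` would produce.
const commentCreateDoc = {
  kind: 'Document',
  loc: { source: { body: 'mutation CommentCreate($input: CommentCreateInput!) { commentCreate(input: $input) { success } }' } },
};

console.log(String(commentCreateDoc)); // "[object Object]" — never matches a mutation name
console.log(getDocumentSource(commentCreateDoc).includes('mutation CommentCreate')); // true
```

With the helper in place, plain strings still route correctly and only genuinely unknown inputs hit the `[object Object]` fallback.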
Run the CLI against that local API:

```bash
pnpm --filter @involute/cli exec node dist/index.js import team --token "$LINEAR_TOKEN" --team SON --keep-export --output .tmp/son-export
```
Add CLI build prerequisite before using dist/index.js.
Line 101 assumes `packages/cli/dist/index.js` already exists. On a fresh clone, this command fails unless the CLI has been built first. Please add an explicit build step in this section.
🔧 Proposed doc fix:

````diff
 Run the CLI against that local API:

 ```bash
+pnpm --filter @involute/cli build
 pnpm --filter @involute/cli exec node dist/index.js import team --token "$LINEAR_TOKEN" --team SON --keep-export --output .tmp/son-export
 ```
````
In `README.md` (lines 98-102): insert a build step (`pnpm --filter @involute/cli build`) immediately before the command that runs `dist/index.js`, so the file exists on a fresh clone.
Summary

- `involute import team` acceptance loop with shared CLI helpers and import summaries

Validation

- `pnpm test`
- `pnpm typecheck`
- `pnpm build`
- `pnpm e2e`
- `docker compose up --build -d db server web`
- `curl http://127.0.0.1:4200/health`
- `curl -I http://127.0.0.1:4201`

Summary by CodeRabbit

- New Features
- Tests
- Documentation
- Infrastructure