
From v5_STABLE part 1 #372

Draft
mason-sharp wants to merge 9 commits into main from from-v5_STABLE-part1

Conversation

@mason-sharp (Member)

Part 1 of adding recent commits intended for 5.0.6 into main

rasifr and others added 6 commits March 3, 2026 16:06
When the origin of a local tuple cannot be determined (e.g., data loaded
via pg_dump, frozen transactions, or truncated commit timestamps), store
NULL in spock.resolutions.local_origin instead of the value 0
(InvalidRepOriginId).

Also change conflict log messages to print "unknown" instead of the
origin ID when the origin is InvalidRepOriginId or not available.

Bug fixes found during testing:
- Fixed off-by-one error in spock_conflict_row_to_json() calls that were
  incorrectly passing &nulls[7] instead of &nulls[8] for local_tuple,
  and &nulls[11] instead of &nulls[12] for remote_tuple. This was
  overwriting the local_origin NULL flag.

Includes test case (014_pgdump_restore_conflict.pl) that reproduces the
pg_dump/restore scenario where rows have no replication origin tracking,
verifying that:
- Conflicts are logged with local_origin = NULL
- Resolution is apply_remote (correct behavior)
- Data converges correctly despite conflicts
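
The first two bullets can be spot-checked with quick queries. A sketch only:
the `spock.resolutions` table and its `local_origin` column are taken from
this change; the sentinel value 0 is the old behavior being removed:

```sql
-- After the fix, unresolvable origins are stored as NULL...
SELECT count(*) AS null_origin_conflicts
  FROM spock.resolutions
 WHERE local_origin IS NULL;

-- ...and the old sentinel 0 (InvalidRepOriginId) should no longer appear.
SELECT count(*) AS zero_origin_conflicts
  FROM spock.resolutions
 WHERE local_origin = 0;
```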

(cherry picked from commit 7970dc2)

Do not record origin-change events to the spock.resolutions table.

Do, however, allow optional logging to the Postgres log. Origin changes are
informational log entries only, not true conflicts requiring resolution.

  - Add `spock.log_origin_change` GUC to control when origin changes
    are logged to the PostgreSQL log
  - Add `sub_created_at` column to `spock.subscription` to help distinguish
    pre-existing data (e.g. pg_restore) from post-subscription data
  - Version bump to 5.0.6
  - Add TAP test 013_origin_change_restore to verify GUC modes and behavior
    and test with a restored (copied) database

GUC modes for spock.log_origin_change (for the PostgreSQL log)

none:                Do not log origin changes (default)
remote_only_differs: Log only when a row from one remote publisher is
                     updated by a different remote publisher (skip
                     locally-written tuples)
since_sub_creation:  Log origin changes for tuples modified after the
                     subscription was created (suppresses noise from
                     pg_restored data)
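
Since this is a plain enum GUC, it can be set with standard PostgreSQL
configuration syntax. A sketch only; the GUC name and values come from this
change, while `mydb` is a placeholder database name:

```sql
-- In postgresql.conf:
--   spock.log_origin_change = 'remote_only_differs'

-- Or per session / per database:
SET spock.log_origin_change = 'since_sub_creation';
ALTER DATABASE mydb SET spock.log_origin_change = 'none';
```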

(cherry picked from commit fab0514)
(cherry picked from commit 3398466)
(cherry picked from commit b4df955)
(cherry picked from commit 91e602a)
@coderabbitai bot commented Mar 4, 2026

📝 Walkthrough

Adds a new enum-backed GUC spock.log_origin_change and public log_origin_change variable; new conflict enum and function declarations; refined conflict logging and JSON payloads; a subscription created_at/sub_created_at timestamptz column with migration; and several TAP/regress tests and schedule entries.

Changes

Configuration & Public Variable — docs/configuring.md, include/spock.h, src/spock.c
Documented and registered the new GUC spock.log_origin_change; added extern int log_origin_change and an enum-backed GUC registration with values none, remote_only_differs, since_sub_creation.

Conflict API & Logging — include/spock_conflict.h, src/spock_conflict.c
Added the SpockSaveOriginConflictOption enum and new declarations for tuple/index lookup functions; changed conflict handling to use nullable origin strings, conditional saving of resolutions, adjusted JSON/log formatting and indices, and gated logging for certain apply paths.

Subscription Catalog & Migrations — include/spock_node.h, src/spock_node.c, sql/spock--5.0.5--5.0.6.sql, sql/spock--6.0.0-devel.sql
Added a created_at/sub_created_at timestamptz column to the subscription struct/table; bumped catalog attribute counts/Anum, initialized the column on creation, preserved it on alter, and loaded it into the in-memory subscription. Includes migration SQL.

Tests & Schedule — tests/regress/sql/tuple_origin.sql, tests/tap/schedule, tests/tap/t/013_origin_change_restore.pl, tests/tap/t/014_pgdump_restore_conflict.pl, tests/tap/t/100_progress_period.pl
Added TAP tests for origin-change restore, pg_dump/restore conflict scenarios, and progress-period measurement; updated the test schedule and a minor regression test comment.

Poem

🐇 I nibble logs and chase the trace,
Timestamps tucked in subscription lace,
Origins whispered "none" or "since then",
Conflicts logged as strings again.
Hop—replication hums, and I grin.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Title check — ⚠️ Warning
Explanation: The title 'From v5_STABLE part 1' is vague and does not clearly convey the main changes: adding origin-change logging configuration, subscription creation timestamps, and related tests.
Resolution: Use a more descriptive title that explains the primary changes, such as 'Add origin-change logging configuration and subscription creation timestamps' or 'Implement spock.log_origin_change and sub_created_at tracking'.
✅ Passed checks (2 passed)
Description check — ✅ Passed
The description accurately indicates that this PR backports recent commits from v5.0.6 into main, which aligns with the changeset of documentation updates, configuration options, and tests.

Docstring Coverage — ✅ Passed
Docstring coverage is 83.33%, which is sufficient; the required threshold is 80.00%.


@mason-sharp requested a review from rasifr, March 4, 2026 00:11
@coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (2)
tests/tap/t/014_pgdump_restore_conflict.pl (1)

82-82: Use a unique dump filename to avoid cross-run collisions.

A fixed /tmp/customer_data_dump.sql can interfere with concurrent test runs.

♻️ Proposed fix
-my $dump_file = "/tmp/customer_data_dump.sql";
+my $dump_file = "/tmp/customer_data_dump_$$.sql";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/tap/t/014_pgdump_restore_conflict.pl` at line 82, Replace the fixed
dump filename in the test by generating a unique temp filename instead of using
$dump_file = "/tmp/customer_data_dump.sql"; (e.g. use File::Temp or include
pid/timestamp/UUID in the name) so concurrent runs won't collide; update the
code that opens/writes/reads $dump_file to use the generated temporary path and
ensure the temp file is removed/cleaned up at the end of the test.
tests/tap/t/013_origin_change_restore.pl (1)

80-83: Prefer File::Temp for dump artifacts to improve portability and cleanup safety.

Hardcoding /tmp/... is brittle across environments and failure paths.

♻️ Proposed refactor
+use File::Temp qw(tempfile);
 ...
-my $dump_file = "/tmp/test_origin_dump_$$.dump";
+my ($dump_fh, $dump_file) = tempfile('test_origin_dump_XXXX',
+                                     SUFFIX => '.dump',
+                                     UNLINK => 1);
+close $dump_fh;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/tap/t/013_origin_change_restore.pl` around lines 80 - 83, Replace the
hardcoded /tmp path for $dump_file with a secure temporary file from File::Temp:
create a temp file (e.g. via File::Temp->new or tempfile) before calling
system_or_bail, use its filename when invoking pg_dump (the system_or_bail call
that runs "$pg_bin/pg_dump" with '-f', $dump_file), and ensure the temp file is
removed or auto-unlinked (or explicitly unlink after the test) to avoid leaving
artifacts.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/configuring.md`:
- Around line 207-211: Update the docs text about origin logging to reflect
current behavior: change the statement that local changes have origin `0` to say
that when local origin isn’t available logs show `unknown` and
`spock.resolutions.local_origin` contains NULL; reference the configuration flag
spock.log_origin_change and the origin value in logs/resolutions so readers know
where to look for the `unknown`/NULL behavior instead of `0`.

In `@sql/spock--5.0.5--5.0.6.sql`:
- Around line 1-2: The migration adds sub_created_at but leaves existing rows
NULL, breaking the guard in src/spock_conflict.c (the check against
MySubscription->created_at) because GetCurrentTimestamp() in src/spock_node.c
only sets the field for new subscriptions; fix the migration
(spock--5.0.5--5.0.6.sql) by backfilling sub_created_at for existing rows (e.g.,
UPDATE spock.subscription SET sub_created_at = now() WHERE sub_created_at IS
NULL), then optionally set a DEFAULT and/or NOT NULL constraint to prevent
future NULLs so the origin-differ logging guard sees a non-zero timestamp.
Ensure this backfill runs inside the same migration after the ADD COLUMN.
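
The backfill the comment describes might look like this in the migration
script. A sketch assembled from the review text above, not the actual file
contents:

```sql
-- After: ALTER TABLE spock.subscription ADD COLUMN sub_created_at timestamptz;
UPDATE spock.subscription
   SET sub_created_at = now()
 WHERE sub_created_at IS NULL;

-- Optionally guard against future NULLs:
ALTER TABLE spock.subscription
  ALTER COLUMN sub_created_at SET DEFAULT now(),
  ALTER COLUMN sub_created_at SET NOT NULL;
```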

In `@tests/tap/schedule`:
- Line 20: The schedule is missing the test entry for
014_pgdump_restore_conflict.pl; either add the line "test:
014_pgdump_restore_conflict" into the tests/tap/schedule file right after the
existing "test: 013_origin_change_restore" entry so CI will run the test, or if
this test is intentionally excluded, add a brief comment in tests/tap/schedule
noting why 014_pgdump_restore_conflict is omitted (e.g., manual-only, flaky, or
requires special environment) referencing the test filename
014_pgdump_restore_conflict.pl and the schedule entry name test:
014_pgdump_restore_conflict.

In `@tests/tap/t/100_progress_period.pl`:
- Line 22: Remove the fixed Test::More plan and rely on dynamic planning via
done_testing(); specifically, change the Test::More import that currently uses
"tests => 14" (the line with use Test::More tests => 14) to a plain import
(e.g., use Test::More;) and keep the existing done_testing() call, ensuring no
other fixed plan is set elsewhere in the test (references: the Test::More import
and done_testing()).

---

Nitpick comments:
In `@tests/tap/t/013_origin_change_restore.pl`:
- Around line 80-83: Replace the hardcoded /tmp path for $dump_file with a
secure temporary file from File::Temp: create a temp file (e.g. via
File::Temp->new or tempfile) before calling system_or_bail, use its filename
when invoking pg_dump (the system_or_bail call that runs "$pg_bin/pg_dump" with
'-f', $dump_file), and ensure the temp file is removed or auto-unlinked (or
explicitly unlink after the test) to avoid leaving artifacts.

In `@tests/tap/t/014_pgdump_restore_conflict.pl`:
- Line 82: Replace the fixed dump filename in the test by generating a unique
temp filename instead of using $dump_file = "/tmp/customer_data_dump.sql"; (e.g.
use File::Temp or include pid/timestamp/UUID in the name) so concurrent runs
won't collide; update the code that opens/writes/reads $dump_file to use the
generated temporary path and ensure the temp file is removed/cleaned up at the
end of the test.

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 50764f4 and 94c2bc9.

⛔ Files ignored due to path filters (1)
  • tests/regress/expected/tuple_origin.out is excluded by !**/*.out
📒 Files selected for processing (14)
  • docs/configuring.md
  • include/spock.h
  • include/spock_conflict.h
  • include/spock_node.h
  • sql/spock--5.0.5--5.0.6.sql
  • sql/spock--6.0.0-devel.sql
  • src/spock.c
  • src/spock_conflict.c
  • src/spock_node.c
  • tests/regress/sql/tuple_origin.sql
  • tests/tap/schedule
  • tests/tap/t/013_origin_change_restore.pl
  • tests/tap/t/014_pgdump_restore_conflict.pl
  • tests/tap/t/100_progress_period.pl

@coderabbitai bot left a comment

🧹 Nitpick comments (3)
tests/tap/t/014_pgdump_restore_conflict.pl (3)

217-221: Apply worker restart pattern is correct but uses fixed sleeps.

The disable/enable pattern to restart the apply worker for GUC changes to take effect is correct. However, the fixed 2s + 3s sleeps could cause flakiness. Consider polling for worker state if the infrastructure supports it, or document these values as tested minimums.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/tap/t/014_pgdump_restore_conflict.pl` around lines 217 - 221, The
current restart pattern for the apply worker uses fixed sleeps after calling
spock.sub_disable(...) and spock.sub_enable(...), which is flaky; replace the
hard-coded system_or_bail 'sleep' calls with a small polling loop that queries
the worker status (e.g., call a status query via psql_or_bail such as checking
spock.show_workers or the apply worker state) until the worker reports
stopped/started or a configurable timeout expires, then proceed to enable;
alternatively, if polling isn't possible, add a brief comment documenting that
the 2s and 3s values are tested minimums and make them configurable variables so
tests can adjust them.

199-203: Fixed sleep could cause test flakiness.

The 5-second sleep on line 200 is a timing-based wait that may not be sufficient in slow CI environments or could be unnecessarily long in fast environments. Consider polling for subscription readiness status instead.

🔧 Suggested improvement using polling
# Wait for subscription to be ready by polling
my $sub_ready = wait_for_value(2, 
    "SELECT sub_enabled FROM spock.subscription WHERE sub_name = 'pgdump_test_sub'",
    't', 30);
is($sub_ready, 't', 'Subscription is enabled');
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/tap/t/014_pgdump_restore_conflict.pl` around lines 199 - 203, Replace
the fixed 5-second sleep-based wait (system_or_bail 'sleep', '5') and direct
scalar_query call with a polling-based wait to avoid timing flakiness: call
wait_for_value (e.g., wait_for_value(2, "SELECT sub_enabled FROM
spock.subscription WHERE sub_name = 'pgdump_test_sub'", 't', 30)) to poll until
the subscription becomes enabled (or times out) and then assert the returned
value with is(); remove the system_or_bail sleep and use the wait_for_value
result instead of scalar_query.

113-124: Consider using a unique temporary file path to avoid conflicts.

The hardcoded /tmp/customer_data_dump.sql path could cause issues if multiple test runs execute in parallel, as they'd overwrite each other's dump files.

Consider using File::Temp for a unique path or including the process ID:

🔧 Suggested improvement
-my $dump_file = "/tmp/customer_data_dump.sql";
+my $dump_file = "/tmp/customer_data_dump_$$.sql";

Or better, use Perl's File::Temp:

use File::Temp qw(tempfile);
my ($fh, $dump_file) = tempfile('customer_data_dump_XXXX', SUFFIX => '.sql', TMPDIR => 1);
close($fh);  # We just need the path
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/tap/t/014_pgdump_restore_conflict.pl` around lines 113 - 124, The
hardcoded dump path in the pg_dump invocation should be made unique to avoid
parallel-test collisions: replace the static $dump_file assignment with a
File::Temp-generated path (use File::Temp qw(tempfile) to create ($fh,
$dump_file) and close $fh) or build a filename including the process id (e.g.
append $$), then pass that $dump_file to the existing system_or_bail call that
invokes pg_dump (keep the same arguments and flags); ensure $dump_file is
cleaned up or left in TMPDIR as appropriate.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@tests/tap/t/014_pgdump_restore_conflict.pl`:
- Around line 217-221: The current restart pattern for the apply worker uses
fixed sleeps after calling spock.sub_disable(...) and spock.sub_enable(...),
which is flaky; replace the hard-coded system_or_bail 'sleep' calls with a small
polling loop that queries the worker status (e.g., call a status query via
psql_or_bail such as checking spock.show_workers or the apply worker state)
until the worker reports stopped/started or a configurable timeout expires, then
proceed to enable; alternatively, if polling isn't possible, add a brief comment
documenting that the 2s and 3s values are tested minimums and make them
configurable variables so tests can adjust them.
- Around line 199-203: Replace the fixed 5-second sleep-based wait
(system_or_bail 'sleep', '5') and direct scalar_query call with a polling-based
wait to avoid timing flakiness: call wait_for_value (e.g., wait_for_value(2,
"SELECT sub_enabled FROM spock.subscription WHERE sub_name = 'pgdump_test_sub'",
't', 30)) to poll until the subscription becomes enabled (or times out) and then
assert the returned value with is(); remove the system_or_bail sleep and use the
wait_for_value result instead of scalar_query.
- Around line 113-124: The hardcoded dump path in the pg_dump invocation should
be made unique to avoid parallel-test collisions: replace the static $dump_file
assignment with a File::Temp-generated path (use File::Temp qw(tempfile) to
create ($fh, $dump_file) and close $fh) or build a filename including the
process id (e.g. append $$), then pass that $dump_file to the existing
system_or_bail call that invokes pg_dump (keep the same arguments and flags);
ensure $dump_file is cleaned up or left in TMPDIR as appropriate.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 97f58704-2e61-4475-a27c-bbd372267524

📥 Commits

Reviewing files that changed from the base of the PR and between a38e955 and 3b19585.

📒 Files selected for processing (2)
  • tests/tap/schedule
  • tests/tap/t/014_pgdump_restore_conflict.pl
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/tap/schedule

@coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (2)
tests/tap/t/014_pgdump_restore_conflict.pl (2)

114-114: Hardcoded dump path risks conflicts in parallel test runs.

If multiple instances of this test run concurrently (e.g., in CI), they will overwrite each other's dump file, causing test failures.

🔧 Use unique temp file
-my $dump_file = "/tmp/customer_data_dump.sql";
+my $dump_file = "/tmp/customer_data_dump_$$.sql";  # $$ is the PID

Alternatively, use File::Temp:

use File::Temp qw(tempfile);
my ($fh, $dump_file) = tempfile(SUFFIX => '.sql', UNLINK => 0);
close($fh);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/tap/t/014_pgdump_restore_conflict.pl` at line 114, Replace the
hardcoded $dump_file = "/tmp/customer_data_dump.sql" with a unique temporary
file creation using File::Temp (add "use File::Temp qw(tempfile);" at top),
create the temp via tempfile() to get ($fh, $dump_file), close $fh and set
UNLINK => 0 if the test expects the file to persist; update any references to
$dump_file (used in the dump/restore steps) to use the generated path so
parallel runs won't conflict.

61-70: Consider using Perl-native sleep for efficiency.

The system_or_bail 'sleep', '0.1' spawns a shell process for each poll iteration. Over a 30-second timeout at 0.1s intervals, this could spawn up to 300 processes.

🔧 Use Perl's select() for sub-second sleep
 sub wait_for_value {
     my ($node, $query, $expected, $timeout) = @_;
     $timeout //= 30;
     for my $i (1..($timeout * 10)) {
         my $result = scalar_query($node, $query);
         return $result if $result eq $expected;
-        system_or_bail 'sleep', '0.1' if $i < $timeout * 10;
+        select(undef, undef, undef, 0.1) if $i < $timeout * 10;
     }
     return scalar_query($node, $query);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/tap/t/014_pgdump_restore_conflict.pl` around lines 61 - 70, The polling
loop in wait_for_value currently calls system_or_bail 'sleep', '0.1' which
spawns a process each iteration; change it to use Perl-native sub-second sleep
(e.g., import Time::HiRes::sleep and replace system_or_bail 'sleep', '0.1' with
sleep(0.1)) so wait_for_value and its scalar_query-based loop perform
efficiently without spawning external processes; ensure you add the Time::HiRes
import at the top of the test file and keep the existing loop/return logic in
the wait_for_value sub.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/tap/t/014_pgdump_restore_conflict.pl`:
- Line 3: The declared test plan is wrong — update the Test::More call to match
actual assertions by changing the test count from 20 to 29 (i.e., change the
`use Test::More tests => 20;` declaration to `use Test::More tests => 29;`) so
the TAP harness matches the combined tests from `create_cluster(...)`, the 23
assertions in the body, and `destroy_cluster`.

---

Nitpick comments:
In `@tests/tap/t/014_pgdump_restore_conflict.pl`:
- Line 114: Replace the hardcoded $dump_file = "/tmp/customer_data_dump.sql"
with a unique temporary file creation using File::Temp (add "use File::Temp
qw(tempfile);" at top), create the temp via tempfile() to get ($fh, $dump_file),
close $fh and set UNLINK => 0 if the test expects the file to persist; update
any references to $dump_file (used in the dump/restore steps) to use the
generated path so parallel runs won't conflict.
- Around line 61-70: The polling loop in wait_for_value currently calls
system_or_bail 'sleep', '0.1' which spawns a process each iteration; change it
to use Perl-native sub-second sleep (e.g., import Time::HiRes::sleep and replace
system_or_bail 'sleep', '0.1' with sleep(0.1)) so wait_for_value and its
scalar_query-based loop perform efficiently without spawning external processes;
ensure you add the Time::HiRes import at the top of the test file and keep the
existing loop/return logic in the wait_for_value sub.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: b1ab087e-60dc-412b-8b29-3c751391e927

📥 Commits

Reviewing files that changed from the base of the PR and between 3b19585 and 5b1c834.

📒 Files selected for processing (1)
  • tests/tap/t/014_pgdump_restore_conflict.pl

@@ -0,0 +1,343 @@
use strict;
use warnings;
use Test::More tests => 20;

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Count actual test assertions in the file
echo "=== Counting test assertions in body ==="
grep -cE '\b(pass|is|ok|like|unlike|isnt|cmp_ok|command_ok)\s*\(' tests/tap/t/014_pgdump_restore_conflict.pl

echo ""
echo "=== Lines containing test assertions in test file ==="
grep -nE '\b(pass|is|ok|like|unlike|isnt|cmp_ok)\s*\(' tests/tap/t/014_pgdump_restore_conflict.pl | head -30

echo ""
echo "=== Check SpockTest.pm for create_cluster and destroy_cluster test counts ==="
# Look for test declarations in create_cluster and destroy_cluster functions
grep -A 50 "^sub create_cluster" tests/tap/t/SpockTest.pm | grep -E '\b(pass|command_ok|ok)\s*\('

echo ""
echo "=== Check destroy_cluster function ==="
grep -A 20 "^sub destroy_cluster" tests/tap/t/SpockTest.pm | grep -E '\b(pass|command_ok|ok)\s*\('

Repository: pgEdge/spock

Length of output: 1944


🏁 Script executed:

#!/bin/bash
# First, verify SpockTest.pm exists and locate it
echo "=== Finding SpockTest.pm ==="
find . -name "SpockTest.pm" -type f

echo ""
echo "=== Check if file exists at expected path ==="
if [ -f "tests/tap/t/SpockTest.pm" ]; then
  echo "File exists"
  
  echo ""
  echo "=== Extract create_cluster function (first 100 lines) ==="
  sed -n '/^sub create_cluster/,/^sub [a-z]/p' tests/tap/t/SpockTest.pm | head -100
fi

Repository: pgEdge/spock

Length of output: 3935


🏁 Script executed:

#!/bin/bash
echo "=== Extract destroy_cluster function ==="
sed -n '/^sub destroy_cluster/,/^sub [a-z]/p' tests/tap/t/SpockTest.pm | head -50

Repository: pgEdge/spock

Length of output: 1472


Test count must be updated from 20 to 29.

The declared count of 20 does not match the actual test assertions:

  • create_cluster(2, ...) adds 5 tests (2 pg_isready checks + 2 Spock extension checks + 1 pass)
  • The test body contains 23 test assertions
  • destroy_cluster adds 1 test

Total: 5 + 23 + 1 = 29 tests. Update line 3 to use Test::More tests => 29; to prevent TAP harness failure.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/tap/t/014_pgdump_restore_conflict.pl` at line 3, The declared test plan
is wrong — update the Test::More call to match actual assertions by changing the
test count from 20 to 29 (i.e., change the `use Test::More tests => 20;`
declaration to `use Test::More tests => 29;`) so the TAP harness matches the
combined tests from `create_cluster(...)`, the 23 assertions in the body, and
`destroy_cluster`.

@mason-sharp marked this pull request as draft, March 6, 2026 02:01
