
Improvements to reliability branching #979

Open
nguidotti wants to merge 19 commits into NVIDIA:release/26.04 from nguidotti:improve-reliable-branching

Conversation

@nguidotti
Contributor

@nguidotti nguidotti commented Mar 20, 2026

We can use a single dual simplex pivot to estimate the objective change after branching up or down on a variable (#963), which can later be used to rank the unreliable variables in reliability branching. To minimize the cost of reliability branching, we apply trial branching only to the 100 best-ranked variables; the remaining variables are not considered. Previously, the 100 unreliable variables were selected at random, which may not lead to the best branching decisions. In this PR, we also apply a cutoff based on the objective coefficients when no incumbent is available.
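The candidate-selection idea can be sketched as follows. This is illustrative only: the struct and function names are invented here, and the real implementation derives the per-variable estimates from a single dual simplex pivot rather than taking them as inputs.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical per-candidate record: estimated objective degradation for
// branching down/up on a fractional variable.
struct candidate_t {
  int var;
  double est_down;  // estimated objective change after branching down
  double est_up;    // estimated objective change after branching up
};

// Product score, as commonly used in reliability branching: candidates whose
// children both degrade the objective substantially rank first.
static double score(const candidate_t& c)
{
  const double eps = 1e-6;
  return std::max(c.est_down, eps) * std::max(c.est_up, eps);
}

// Keep only the `limit` best-scoring unreliable candidates (100 in this PR)
// for trial branching; the rest are skipped entirely instead of being
// sampled at random.
std::vector<candidate_t> select_trial_candidates(std::vector<candidate_t> cands,
                                                 std::size_t limit)
{
  std::sort(cands.begin(), cands.end(),
            [](const candidate_t& a, const candidate_t& b) { return score(a) > score(b); });
  if (cands.size() > limit) { cands.resize(limit); }
  return cands;
}
```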

MIPLIB Benchmark on a GH200 system:

================================================================================
 main f0ba447  (1) vs improve-reliable-branching (2)
================================================================================

------------------------------------------------------------------------------------------------------------------------------
|                                        |       Run 1        |       Run 2        |     Abs. Diff.     |   Rel. Diff. (%)   |
------------------------------------------------------------------------------------------------------------------------------
| Feasible                                                 224                  226                   +2                 --- |
| Optimal                                                   69                   69                   +0                 --- |
| Solutions with <0.1% primal gap                          122                  124                   +2                 --- |
| Nodes explored (mean)                           4490538.7605         4498957.3655           +8418.6050              +0.187 |
| Nodes explored (shifted geomean)                   7308.8805            7705.4824            +396.6018              +5.147 |
| Relative MIP gap (mean)                               0.3269               0.3403              +0.0134              +3.944 |
| Relative MIP gap (shifted geomean)                    0.1192               0.1216              +0.0024              +1.977 |
| Solve time (mean)                                   453.2170             452.7833              -0.4337              -0.096 |
| Solve time (shifted geomean)                        230.0384             229.4358              -0.6026              -0.262 |
| Primal gap (mean)                                    11.6922              10.9459              -0.7463              -6.383 |
| Primal gap (shifted geomean)                          0.6535               0.6382              -0.0153              -2.337 |
| Primal integral (mean)                               50.7645              47.2675              -3.4970              -6.889 |
| Primal integral (shifted geomean)                    12.4180              10.4722              -1.9458             -15.669 |
------------------------------------------------------------------------------------------------------------------------------

In other words, this PR improves the shifted geomean primal integral by ~15% and reduces the mean primal gap by ~6%.
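For context, the "shifted geomean" rows above refer to the shifted geometric mean, which damps the influence of values near zero relative to a plain geometric mean. A minimal sketch (the shift value used by the benchmark script is not stated here, so it is a parameter):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Shifted geometric mean: take the geometric mean of (x_i + shift) and
// subtract the shift back out. Larger shifts reduce the influence of
// entries close to zero; shift = 0 recovers the plain geometric mean.
double shifted_geomean(const std::vector<double>& x, double shift)
{
  double log_sum = 0.0;
  for (double v : x) { log_sum += std::log(v + shift); }
  return std::exp(log_sum / static_cast<double>(x.size())) - shift;
}
```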

Checklist

  • I am familiar with the Contributing Guidelines.
  • Testing
    • New or existing tests cover these changes
    • Added tests
    • Created an issue to follow-up
    • NA
  • Documentation
    • The documentation is up to date with these changes
    • Added new documentation
    • NA

@nguidotti nguidotti added this to the 26.04 milestone Mar 20, 2026
@nguidotti nguidotti self-assigned this Mar 20, 2026
@nguidotti nguidotti requested a review from a team as a code owner March 20, 2026 16:07
@nguidotti nguidotti requested review from kaatish and rg20 March 20, 2026 16:07
@nguidotti nguidotti added the non-breaking (Introduces a non-breaking change), do not merge (Do not merge if this flag is set), and mip labels Mar 20, 2026
@copy-pr-bot

copy-pr-bot bot commented Mar 20, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@nguidotti nguidotti changed the base branch from main to release/26.04 March 20, 2026 16:08
@nguidotti nguidotti changed the title Improve reliable branching Improvements to reliability branching Mar 20, 2026
@nguidotti
Contributor Author

/ok to test 94c3dc3

@nguidotti nguidotti changed the title Improvements to reliability branching [WIP] Improvements to reliability branching Mar 20, 2026
@coderabbitai

coderabbitai bot commented Mar 20, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review
📝 Walkthrough

Added single-pivot objective-change estimation and LP transpose storage; integrated these into the strong-branching and reliability-branching flows; introduced dual-simplex phase2 helper templates; and widened solver integer parameter ranges for branching modes.

Changes

  • Branch-and-Bound Core — cpp/src/branch_and_bound/branch_and_bound.cpp: Reordered reliable_variable_selection call arguments; conditional enabling of dual-pivot ranking via settings_.reliability_branching == -2; removed local inf constant; transpose original LP into pc_.AT; pass root_relax_soln_ (full solution), upper_bound_, and basis lists/updates into strong_branching.
  • Pseudo-costs & Strong-branching — cpp/src/branch_and_bound/pseudo_costs.cpp, cpp/src/branch_and_bound/pseudo_costs.hpp: Added single-pivot objective-change estimation helpers and initialization; changed the strong_branching signature to accept const lp_solution_t& root_solution, upper_bound, basic_list, nonbasic_list, and basis_factors; added an AT transpose member to pseudo_costs_t; tightened reliable_variable_selection (const node, removed explicit solution, reads worker->leaf_solution.x), which now uses pivot estimates to rank candidates; updated dual-solve/cutoff handling.
  • Dual Simplex Phase2 Helpers — cpp/src/dual_simplex/phase2.cpp, cpp/src/dual_simplex/phase2.hpp: Introduced templated file-scope helpers compute_reduced_cost_update and compute_delta_z; replaced prior phase2 implementations with calls to these helpers; added explicit template instantiations for int/double.
  • Solver Settings — cpp/src/math_optimization/solver_settings.cu: Expanded integer parameter ranges: CUOPT_MIP_BATCH_PDLP_STRONG_BRANCHING max 1→2; CUOPT_MIP_RELIABILITY_BRANCHING min -1→-2 (defaults unchanged).

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Title check — ✅ Passed: The title 'Improvements to reliability branching' is directly related to the main changeset, which focuses on enhancing reliability branching by using dual-simplex pivots for candidate ranking.
  • Description check — ✅ Passed: The PR description clearly describes the changes: using a dual simplex pivot to estimate objective change for ranking unreliable variables in reliability branching, replacing random selection with the top 100 ranked variables, and applying a cutoff based on objective coefficients.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cpp/src/branch_and_bound/pseudo_costs.cpp`:
- Around line 293-295: dual_phase2() can return dual::status_t::CUTOFF when
child_settings.cut_off is set, but the current helper treats any non-success as
failure and leaves obj as NaN; change the failure handling so that when
dual_phase2 returns CUTOFF you set obj to the applied cutoff
(child_settings.cut_off, i.e., upper_bound + settings.dual_tol) and treat it as
a valid result for seeding pseudo-costs instead of the generic failure path;
apply the same change to the other occurrence handling lines covering the same
logic (the block that currently inspects dual::status_t results and assigns
obj).
- Around line 898-906: Replace the std::cout/std::format usage in the block that
checks unreliabe_list.size() > max_num_candidates with the existing logger: call
log.printf() and pass a formatted message including
strong_branching_lp_iter.load(), branch_and_bound_lp_iters,
unreliable_list.size(), num_tasks, and reliable_threshold; remove the
std::format/include dependency and ensure the message format matches other
log.printf() calls in this file so logging is consistent with the parallel
solver pattern.
- Around line 219-245: The code currently passes basic_map[j] directly into
single_pivot_objective_change_estimate for each j in fractional; add a defensive
check on basic_map[j] before that call (e.g., if basic_map[j] < 0 then skip the
dual-pivot estimate for this j or assert/fail fast), so only valid basic indices
are forwarded; update both occurrences where fractional is iterated (the loop
using basic_map and the second occurrence mentioned) to guard around the call to
single_pivot_objective_change_estimate and avoid pushing an invalid basic index
into structures like e_k.i.

In `@cpp/src/dual_simplex/phase2.cpp`:
- Around line 175-206: The CHECK_CHANGE_IN_REDUCED_COST debug block calls
phase2::compute_reduced_cost_update but references out-of-scope symbols (lp,
basic_list, nonbasic_list); to fix, thread the required context into
compute_delta_z by updating its signature (and all callers) to accept the LP and
basis lists (or pass appropriate local equivalents available in this translation
unit) and then forward those parameters into
phase2::compute_reduced_cost_update, or alternatively construct/obtain the
needed lp, basic_list and nonbasic_list inside compute_delta_z before the debug
call so the debug path compiles; ensure you update function names
compute_delta_z and the call site(s) consistently.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: eb905199-9239-4fa9-a7c5-55a6b8c381b6

📥 Commits

Reviewing files that changed from the base of the PR and between 4d5f5e5 and 94c3dc3.

📒 Files selected for processing (6)
  • cpp/src/branch_and_bound/branch_and_bound.cpp
  • cpp/src/branch_and_bound/pseudo_costs.cpp
  • cpp/src/branch_and_bound/pseudo_costs.hpp
  • cpp/src/dual_simplex/phase2.cpp
  • cpp/src/dual_simplex/phase2.hpp
  • cpp/src/math_optimization/solver_settings.cu

Comment on lines +175 to +206
#ifdef CHECK_CHANGE_IN_REDUCED_COST
  const i_t m = A_transpose.n;
  const i_t n = A_transpose.m;
  std::vector<f_t> delta_y_dense(m);
  delta_y.to_dense(delta_y_dense);
  std::vector<f_t> delta_z_check(n);
  std::vector<i_t> delta_z_mark_check(n, 0);
  std::vector<i_t> delta_z_indices_check;
  phase2::compute_reduced_cost_update(lp,
                                      basic_list,
                                      nonbasic_list,
                                      delta_y_dense,
                                      leaving_index,
                                      direction,
                                      delta_z_mark_check,
                                      delta_z_indices_check,
                                      delta_z_check,
                                      work_estimate);
  f_t error_check = 0.0;
  for (i_t k = 0; k < n; ++k) {
    const f_t diff = std::abs(delta_z[k] - delta_z_check[k]);
    if (diff > 1e-6) {
      printf("delta_z error %d transpose %e no transpose %e diff %e\n",
             k,
             delta_z[k],
             delta_z_check[k],
             diff);
    }
    error_check = std::max(error_check, diff);
  }
  if (error_check > 1e-6) { printf("delta_z error %e\n", error_check); }
#endif


⚠️ Potential issue | 🟠 Major

CHECK_CHANGE_IN_REDUCED_COST path is currently uncompilable after refactor.

At Line 183, the debug path uses phase2::compute_reduced_cost_update(...) and references lp, basic_list, and nonbasic_list, but those symbols are not in scope in this function. Enabling the macro will fail compilation.

Suggested fix (thread required context into compute_delta_z)
-template <typename i_t, typename f_t>
-void compute_delta_z(const csc_matrix_t<i_t, f_t>& A_transpose,
+template <typename i_t, typename f_t>
+void compute_delta_z(const lp_problem_t<i_t, f_t>& lp,
+                     const std::vector<i_t>& basic_list,
+                     const std::vector<i_t>& nonbasic_list,
+                     const csc_matrix_t<i_t, f_t>& A_transpose,
                      const sparse_vector_t<i_t, f_t>& delta_y,
                      i_t leaving_index,
                      i_t direction,
@@
-  phase2::compute_reduced_cost_update(lp,
-                                      basic_list,
-                                      nonbasic_list,
-                                      delta_y_dense,
-                                      leaving_index,
-                                      direction,
-                                      delta_z_mark_check,
-                                      delta_z_indices_check,
-                                      delta_z_check,
-                                      work_estimate);
+  compute_reduced_cost_update(lp,
+                              basic_list,
+                              nonbasic_list,
+                              delta_y_dense,
+                              leaving_index,
+                              direction,
+                              delta_z_mark_check,
+                              delta_z_indices_check,
+                              delta_z_check,
+                              work_estimate);
-        compute_delta_z(A_transpose,
+        compute_delta_z(lp,
+                        basic_list,
+                        nonbasic_list,
+                        A_transpose,
                         delta_y_sparse,
                         leaving_index,
                         direction,
                         nonbasic_mark,
                         delta_z_mark,
                         delta_z_indices,
                         delta_z,
                         phase2_work_estimate);


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cpp/src/branch_and_bound/pseudo_costs.cpp`:
- Around line 420-431: The cutoff sum in the else branch of pseudo_costs.cpp may
multiply objective coefficients by infinite bounds; update the computation in
the block that sets child_settings.cut_off to check
std::isfinite(child_problem.upper[i]) and std::isfinite(child_problem.lower[i])
before doing the multiplication with original_lp.objective[i]; skip terms with
non-finite bounds, and if a skipped term has a non-zero objective coefficient
treat the cutoff as unbounded (set child_settings.cut_off to +/-INFINITY based
on the sign of original_lp.objective[i]) or otherwise keep the finite
accumulated sum, ensuring no multiplication by non-finite values occurs (refer
to child_settings.cut_off, original_lp.objective, and
child_problem.upper/lower).
- Around line 140-161: The debug block wrongly accesses
delta_y.lp_solution.x[k]; delta_y is a sparse_vector_t with arrays .i and .x, so
replace delta_y.lp_solution.x[k] with delta_y.x[k] (keeping the loop over k <
delta_y.i.size()) so the index/value pairs are read correctly; leave the rest of
the CHECK_DELTA_OBJ computation (uses delta_z_indices, vstatus, delta_z,
variable_j, delta_obj_check, step_length) unchanged so the validity check
compiles and compares delta_obj_check to delta_obj as intended.
- Around line 121-131: The debug block references an undefined variable y
causing a compile error; change the code to use the solver's dual vector (e.g.
lp_solution.y) or declare a local alias before use. Specifically, in the
CHECK_DUAL_FEASIBILITY block update the call to matrix_transpose_vector_multiply
to pass lp_solution.y (or create vector<f_t> y = lp_solution.y) instead of the
undefined y so dual_residual, matrix_transpose_vector_multiply, vector_norm_inf
and settings.log.printf work correctly.
- Around line 296-308: The fallback cutoff calculation when upper_bound is not
finite can produce +/-inf by multiplying objective coefficients with infinite
variable bounds; update the block that sets child_settings.cut_off (the else
branch using original_lp.objective, child_problem.upper and child_problem.lower)
to ignore terms where the corresponding child_problem.upper or
child_problem.lower is not finite (or only compute the fallback when all
contributing bounds are finite), summing only finite products and leaving
child_settings.cut_off at a safe default (or signaling no cutoff) if any
required bound is infinite; ensure the logic preserves the existing behavior
when bounds are finite and still adds settings.dual_tol when applicable.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 7905b856-7959-4fe1-bae4-678afac404e8

📥 Commits

Reviewing files that changed from the base of the PR and between 94c3dc3 and 703f30c.

📒 Files selected for processing (1)
  • cpp/src/branch_and_bound/pseudo_costs.cpp

@nguidotti
Contributor Author

/ok to test 703f30c


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (2)
cpp/src/branch_and_bound/branch_and_bound.cpp (2)

1992-1998: Consider documenting the magic value -2 for reliability branching mode.

The value -2 for settings_.reliability_branching enables dual-simplex objective estimation, but this isn't documented in the code. Other code paths check reliability_branching != 0 for general reliability branching (line 858), making the semantics of different values unclear.

Consider adding a comment or using a named constant to clarify the different modes:

  • 0: disabled
  • -2: enabled with dual pivot estimation
  • Other non-zero values: enabled without dual pivot estimation (?)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/branch_and_bound/branch_and_bound.cpp` around lines 1992 - 1998, The
branch sets behavior when settings_.reliability_branching == -2 but that magic
value is undocumented; replace the literal with a named constant or enum (e.g.,
RELIABILITY_BRANCHING_DUAL_PIVOT) and add a short comment describing the modes
(0 = disabled, -2 = dual-pivot objective estimation, other non-zero =
reliability branching without dual pivot). Update the check in this block that
sets pc_.reliability_branching_settings.rank_candidates_with_dual_pivot and any
other checks that compare settings_.reliability_branching (e.g., places using
"reliability_branching != 0") to use the new named constant or enum variants so
the semantics are clear throughout the codebase.
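The named-constant suggestion above can be sketched as follows. The constant and function names are invented here (not taken from the cuOpt codebase); only the numeric values 0 and -2 and their meanings come from the review comment.

```cpp
#include <cassert>

// Hypothetical named modes for the reliability_branching setting.
enum reliability_branching_mode : int {
  kReliabilityBranchingDisabled  = 0,
  kReliabilityBranchingDualPivot = -2,
};

// Reliability branching is on for any non-zero mode; dual-pivot ranking of
// candidates is enabled only by the -2 mode.
inline bool reliability_branching_enabled(int setting)
{
  return setting != kReliabilityBranchingDisabled;
}

inline bool rank_candidates_with_dual_pivot(int setting)
{
  return setting == kReliabilityBranchingDualPivot;
}
```

Checks that currently compare against the literals (e.g. `reliability_branching != 0`) would then read as calls to these predicates, making the mode semantics self-documenting.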

792-810: Debug output should use logger or be removed before merge.

This std::cout with std::format bypasses the existing logging infrastructure (settings_.log) which handles log prefixes, file redirection, and verbosity control. Given the PR is marked "[WIP]", this appears to be temporary diagnostic code.

Consider:

  1. Removing this output before merging, or
  2. Converting to use settings_.log.printf() if the output is intended to be permanent, or
  3. Wrapping in a #ifdef debug macro if needed for development only
♻️ Suggested conversion to logger (if keeping)
-    std::cout << std::format(
-                   "{}: user_obj={:.3g}, solver_obj={:.3g}, user_lower={:.3g}, "
-                   "solver_lower={:.3g}, user_gap={:.3g}, "
-                   "solver_gap={:.3g}, tol={:.3g}",
-                   feasible_solution_symbol(thread_type),
-                   user_upper,
-                   upper,
-                   user_lower,
-                   lower,
-                   std::abs(user_upper - user_lower),
-                   std::abs(upper - lower),
-                   settings_.absolute_mip_gap_tol)
-              << std::endl;
+    settings_.log.debug(
+      "%c: user_obj=%.3g, solver_obj=%.3g, user_lower=%.3g, "
+      "solver_lower=%.3g, user_gap=%.3g, solver_gap=%.3g, tol=%.3g\n",
+      feasible_solution_symbol(thread_type),
+      user_upper,
+      upper,
+      user_lower,
+      lower,
+      std::abs(user_upper - user_lower),
+      std::abs(upper - lower),
+      settings_.absolute_mip_gap_tol);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/branch_and_bound/branch_and_bound.cpp` around lines 792 - 810, The
debug std::cout/std::format block should be removed or converted to use the
project's logging API; replace the print in branch_and_bound.cpp (the block that
reads upper_bound_, get_lower_bound(), compute_user_objective(original_lp_,
...), feasible_solution_symbol(...), settings_.absolute_mip_gap_tol) with a call
to settings_.log.printf(...) (or remove entirely) so logs respect prefixes,
redirection and verbosity; if this is for temporary debugging only, wrap it in a
debug macro instead of using std::cout.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 91d703fa-65d3-4ee5-8b05-99dbef2b49dd

📥 Commits

Reviewing files that changed from the base of the PR and between 703f30c and bb71cde.

📒 Files selected for processing (2)
  • cpp/src/branch_and_bound/branch_and_bound.cpp
  • cpp/src/dual_simplex/phase2.cpp
🚧 Files skipped from review as they are similar to previous changes (1)
  • cpp/src/dual_simplex/phase2.cpp

Signed-off-by: Nicolas Guidotti <224634272+nguidotti@users.noreply.github.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (3)
cpp/src/branch_and_bound/pseudo_costs.cpp (3)

231-249: ⚠️ Potential issue | 🟠 Major

Guard the dual-pivot path for nonbasic fractional candidates.

fractional is built from integrality alone, not basis status, so j can legitimately be fractional while basic_map[j] == -1. These two call sites still forward that -1 into single_pivot_objective_change_estimate(), which gives the estimator an invalid basis row. Leave root strong-branch values as NaN in that case, and fall back to the existing non-pivot ranking path for unreliable candidates with no basic row.

🛡️ Example guard for the root seeding path
   for (i_t k = 0; k < fractional.size(); k++) {
     const i_t j = fractional[k];
     assert(j >= 0);
+    if (basic_map[j] < 0) {
+      pc.strong_branch_down[k] = std::numeric_limits<f_t>::quiet_NaN();
+      pc.strong_branch_up[k]   = std::numeric_limits<f_t>::quiet_NaN();
+      continue;
+    }
 
     objective_change_estimate_t<f_t> estimate =
       single_pivot_objective_change_estimate(lp,
                                              settings,

Also applies to: 922-940

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/branch_and_bound/pseudo_costs.cpp` around lines 231 - 249, fractional
contains indices regardless of basis status so you must guard the dual-pivot
path when basic_map[j] == -1: before calling
single_pivot_objective_change_estimate(…) for candidate j, check if basic_map[j]
is -1 and if so skip the pivot-based estimate, leave any root strong-branch
values as NaN (or mark the estimate invalid) and let the code fall back to the
existing non-pivot ranking path; apply the same guard to the other identical
call site that also forwards basic_map[j] into
single_pivot_objective_change_estimate.

328-330: ⚠️ Potential issue | 🟠 Major

Don’t treat CUTOFF as a primal objective.

dual_phase2*() can return CUTOFF before it has a primal-feasible child solution. Using compute_objective(child_problem, solution.x) on that path can understate the branch penalty and poison the pseudo-cost update. Use child_settings.cut_off (or another solver-reported dual bound) for CUTOFF, and keep the primal-objective path for OPTIMAL/ITERATION_LIMIT only.

🎯 Bound-safe handling
-      } else if (status == dual::status_t::OPTIMAL || status == dual::status_t::ITERATION_LIMIT ||
-                 status == dual::status_t::CUTOFF) {
+      } else if (status == dual::status_t::OPTIMAL || status == dual::status_t::ITERATION_LIMIT) {
         obj = compute_objective(child_problem, solution.x);
+      } else if (status == dual::status_t::CUTOFF) {
+        obj = child_settings.cut_off;
       } else {

Also applies to: 469-471

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/branch_and_bound/pseudo_costs.cpp` around lines 328 - 330, The code
treats dual::status_t::CUTOFF like a primal-feasible result and calls
compute_objective(child_problem, solution.x), which is unsafe; modify the status
handling in the dual_phase2* result processing so that only OPTIMAL and
ITERATION_LIMIT use compute_objective(child_problem, solution.x) to produce a
primal objective, while CUTOFF uses the solver-reported dual bound (e.g.,
child_settings.cut_off or another reported bound) to set obj; update the
branch-and-bound pseudo-cost update paths (the block around handling status in
pseudo_costs.cpp and the analogous handling in the second occurrence) to select
the bound source based on status values rather than always using solution.x.

297-308: ⚠️ Potential issue | 🟠 Major

Only derive the fallback cutoff from finite bounds.

When no incumbent exists, this fallback still multiplies objective coefficients by ±inf bounds. That can make child_settings.cut_off non-finite, which either disables pruning or turns the trial LP into an immediate bogus cutoff. If any contributing bound is non-finite, skip the derived cutoff and leave cut_off at +inf.

📉 Safer cutoff derivation
       if (std::isfinite(upper_bound)) {
         child_settings.cut_off = upper_bound + settings.dual_tol;
       } else {
-        child_settings.cut_off = 0;
+        f_t derived_cutoff = 0;
+        bool finite_cutoff = true;
         for (i_t i = 0; i < original_lp.num_cols; ++i) {
           if (original_lp.objective[i] < 0) {
-            child_settings.cut_off += original_lp.objective[i] * child_problem.upper[i];
+            if (!std::isfinite(child_problem.upper[i])) { finite_cutoff = false; break; }
+            derived_cutoff += original_lp.objective[i] * child_problem.upper[i];
           } else if (original_lp.objective[i] > 0) {
-            child_settings.cut_off += original_lp.objective[i] * child_problem.lower[i];
+            if (!std::isfinite(child_problem.lower[i])) { finite_cutoff = false; break; }
+            derived_cutoff += original_lp.objective[i] * child_problem.lower[i];
           }
         }
+        child_settings.cut_off = finite_cutoff ? derived_cutoff
+                                               : std::numeric_limits<f_t>::infinity();
       }

Also applies to: 420-431

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/branch_and_bound/pseudo_costs.cpp` around lines 297 - 308, The
fallback cutoff derivation currently multiplies objective coefficients by
child_problem.upper/lower without checking for infinities; change the logic in
the block handling !std::isfinite(upper_bound) (affecting child_settings.cut_off
computation) to first verify all contributing terms are finite before summing:
iterate i over original_lp.num_cols and if any term original_lp.objective[i] or
the selected bound (child_problem.upper[i] or child_problem.lower[i]) is
non-finite, skip deriving a numeric cutoff and set child_settings.cut_off to
+infinity (i.e., leave it as infinite); only when every contributing
multiplication is finite compute the sum and assign it to
child_settings.cut_off. Also apply the same finite-checking guard to the
analogous code region referenced in the comment (the other block at lines
~420-431) so no derived cutoff uses ±inf bounds.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cpp/src/branch_and_bound/pseudo_costs.cpp`:
- Around line 155-158: The CHECK_DELTA_OBJ validation block is using an
undefined variable delta_obj in the settings.log.printf call; replace that
reference with the correct variable delta_obj_down (the down-branch objective
delta computed immediately before this block) so the printf compares and logs
delta_obj_check against delta_obj_down; update the string/arguments in the
CHECK_DELTA_OBJ block (where settings.log.printf is called) to use
delta_obj_down.

---

Duplicate comments:
In `@cpp/src/branch_and_bound/pseudo_costs.cpp`:
- Around line 231-249: fractional contains indices regardless of basis status so
you must guard the dual-pivot path when basic_map[j] == -1: before calling
single_pivot_objective_change_estimate(…) for candidate j, check if basic_map[j]
is -1 and if so skip the pivot-based estimate, leave any root strong-branch
values as NaN (or mark the estimate invalid) and let the code fall back to the
existing non-pivot ranking path; apply the same guard to the other identical
call site that also forwards basic_map[j] into
single_pivot_objective_change_estimate.
- Around line 328-330: The code treats dual::status_t::CUTOFF like a
primal-feasible result and calls compute_objective(child_problem, solution.x),
which is unsafe; modify the status handling in the dual_phase2* result
processing so that only OPTIMAL and ITERATION_LIMIT use
compute_objective(child_problem, solution.x) to produce a primal objective,
while CUTOFF uses the solver-reported dual bound (e.g., child_settings.cut_off
or another reported bound) to set obj; update the branch-and-bound pseudo-cost
update paths (the block around handling status in pseudo_costs.cpp and the
analogous handling in the second occurrence) to select the bound source based on
status values rather than always using solution.x.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 2976ba07-38ba-4543-be57-2786b05b584d

📥 Commits

Reviewing files that changed from the base of the PR and between bb71cde and f535b49.

📒 Files selected for processing (2)
  • cpp/src/branch_and_bound/branch_and_bound.cpp
  • cpp/src/branch_and_bound/pseudo_costs.cpp

Comment on lines +155 to +158
if (std::abs(delta_obj_check - delta_obj) > 1e-6) {
settings.log.printf("Delta obj check %e. Delta obj %e. Step length %e.\n",
delta_obj_check,
delta_obj,

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

cd cpp/src/branch_and_bound && head -200 pseudo_costs.cpp | tail -70



🏁 Script executed:

wc -l cpp/src/branch_and_bound/pseudo_costs.cpp



🏁 Script executed:

sed -n '140,170p' cpp/src/branch_and_bound/pseudo_costs.cpp



Fix the undefined debug variable name in CHECK_DELTA_OBJ validation block.

At lines 155-158, the code references delta_obj which is not defined. The validation should compare against delta_obj_down, the down-branch objective delta calculated just before this block.

Fix
  if (std::abs(delta_obj_check - delta_obj) > 1e-6) {
    settings.log.printf("Delta obj check %e. Delta obj %e. Step length %e.\n",
                        delta_obj_check,
-                       delta_obj,
+                       delta_obj_down,
                        step_length);
  }

Signed-off-by: Nicolas Guidotti <224634272+nguidotti@users.noreply.github.com>
@nguidotti
Contributor Author

/ok to test e8dea1e


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
cpp/src/branch_and_bound/pseudo_costs.hpp (1)

481-489: Pass the scoring solution explicitly.

Dropping the explicit solution parameter means the API no longer guarantees that fractional and the values used to rank it come from the same LP snapshot. Pulling the solution implicitly from worker makes that contract harder to audit and easier to misuse; a const lp_solution_t<i_t, f_t>& argument would keep the dependency explicit.

As per coding guidelines: "Ensure variables and constraints are accessed from the correct problem context (original vs presolve vs folded vs postsolve); verify index mapping consistency across problem transformations".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/branch_and_bound/pseudo_costs.hpp` around lines 481 - 489, The
function reliable_variable_selection currently relies on pulling the LP solution
implicitly (via worker) while taking fractional; change its signature to accept
an explicit const lp_solution_t<i_t, f_t>& solution parameter (add it before
fractional or adjacent) so the scoring uses the exact solution snapshot used to
compute fractional, update any internal uses to reference this solution instead
of worker->... and update all callers (e.g., places that call
reliable_variable_selection) to pass the matching lp_solution_t instance; also
ensure variable/constraint index mappings are applied consistently between
solution and fractional according to the problem transformation APIs
(original/presolve/fold/postsolve).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cpp/src/branch_and_bound/pseudo_costs.hpp`:
- Around line 406-410: The flag rank_candidates_with_dual_pivot is enabled by
default but depends on the transpose matrix AT which is only given a dummy value
unless A.transpose(pc_.AT) is called externally; to fix, either set
rank_candidates_with_dual_pivot to false by default (disable dual-pivot ranking
until AT is valid) or make pseudo_costs_t::resize()/constructor explicitly
initialize/populate pc_.AT (and clear/reset it on reuse) so that AT is always
valid before any ranking; update all occurrences (including the similar flags at
lines referenced) and ensure any code that calls rank_candidates_with_dual_pivot
verifies AT is populated (or triggers A.transpose(pc_.AT)) before using it.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 4fb6ba11-c7b1-4bcf-8b9c-03988b8e25f2

📥 Commits

Reviewing files that changed from the base of the PR and between f535b49 and e8dea1e.

📒 Files selected for processing (1)
  • cpp/src/branch_and_bound/pseudo_costs.hpp

Signed-off-by: Nicolas Guidotti <224634272+nguidotti@users.noreply.github.com>
@nguidotti
Contributor Author

/ok to test 839d562

@nguidotti nguidotti changed the title [WIP] Improvements to reliability branching Improvements to reliability branching Mar 24, 2026
@nguidotti nguidotti added the improvement (Improves an existing functionality) label and removed the do not merge (Do not merge if this flag is set) label Mar 24, 2026
Signed-off-by: Nicolas Guidotti <224634272+nguidotti@users.noreply.github.com>
@nguidotti
Contributor Author

/ok to test b0cb041

@chris-maes chris-maes added the P2 label Mar 25, 2026
Signed-off-by: Nicolas Guidotti <224634272+nguidotti@users.noreply.github.com>
@nguidotti
Contributor Author

/ok to test 7b5d429

@nguidotti nguidotti requested review from Kh4ster and chris-maes and removed request for kaatish and rg20 March 30, 2026 09:35
Contributor

@Kh4ster Kh4ster left a comment


LGTM :) I would advise doing another run on MIPLIB to confirm the results, and also getting a review from @chris-maes before merging

Signed-off-by: Nicolas Guidotti <224634272+nguidotti@users.noreply.github.com>
@nguidotti
Contributor Author

/ok to test 9c9d019


Labels

improvement (Improves an existing functionality) · mip · non-breaking (Introduces a non-breaking change) · P2

4 participants