Improvements to reliability branching #979
nguidotti wants to merge 19 commits into NVIDIA:release/26.04
Conversation
- …the cutoff when no incumbent was found.
- …improve-reliable-branching # Conflicts: # cpp/src/branch_and_bound/pseudo_costs.cpp
- …change estimate via dual simplex single pivot (NVIDIA#963).
- …ange instead of the objective. Fixed candidate ranking in reliability branching.
/ok to test 94c3dc3
📝 Walkthrough
Added pivot-based single-pivot objective-change estimation and LP transpose storage; integrated these into strong-branching and reliability-branching flows; introduced dual-simplex phase2 helper templates; and widened solver integer parameter ranges for branching modes.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cpp/src/branch_and_bound/pseudo_costs.cpp`:
- Around line 293-295: dual_phase2() can return dual::status_t::CUTOFF when
child_settings.cut_off is set, but the current helper treats any non-success as
failure and leaves obj as NaN; change the failure handling so that when
dual_phase2 returns CUTOFF you set obj to the applied cutoff
(child_settings.cut_off, i.e., upper_bound + settings.dual_tol) and treat it as
a valid result for seeding pseudo-costs instead of the generic failure path;
apply the same change to the other occurrence handling lines covering the same
logic (the block that currently inspects dual::status_t results and assigns
obj).
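The status handling this comment asks for can be sketched as follows. This is a minimal stand-alone illustration, not the solver's actual code: `status_t`, `child_objective`, and `applied_cutoff` are hypothetical stand-ins for the `dual::status_t` / `child_settings.cut_off` machinery named in the comment.

```cpp
#include <cassert>
#include <cmath>
#include <limits>

// Hypothetical stand-in for dual::status_t from the review comment.
enum class status_t { OPTIMAL, ITERATION_LIMIT, CUTOFF, NUMERICAL };

// Pick the objective used to seed pseudo-costs. Per the suggested fix,
// CUTOFF is a valid outcome (the child LP hit the applied cutoff,
// i.e. upper_bound + dual_tol), not a generic failure.
double child_objective(status_t status, double primal_obj, double applied_cutoff)
{
  if (status == status_t::OPTIMAL || status == status_t::ITERATION_LIMIT) {
    return primal_obj;  // a primal solution exists; use its objective
  }
  if (status == status_t::CUTOFF) {
    return applied_cutoff;  // the cutoff that fired is a valid bound
  }
  return std::numeric_limits<double>::quiet_NaN();  // genuine failure path
}
```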
- Around line 898-906: Replace the std::cout/std::format usage in the block that
checks unreliable_list.size() > max_num_candidates with the existing logger: call
log.printf() and pass a formatted message including
strong_branching_lp_iter.load(), branch_and_bound_lp_iters,
unreliable_list.size(), num_tasks, and reliable_threshold; remove the
std::format/include dependency and ensure the message format matches other
log.printf() calls in this file so logging is consistent with the parallel
solver pattern.
- Around line 219-245: The code currently passes basic_map[j] directly into
single_pivot_objective_change_estimate for each j in fractional; add a defensive
check on basic_map[j] before that call (e.g., if basic_map[j] < 0 then skip the
dual-pivot estimate for this j or assert/fail fast), so only valid basic indices
are forwarded; update both occurrences where fractional is iterated (the loop
using basic_map and the second occurrence mentioned) to guard around the call to
single_pivot_objective_change_estimate and avoid pushing an invalid basic index
into structures like e_k.i.
In `@cpp/src/dual_simplex/phase2.cpp`:
- Around line 175-206: The CHECK_CHANGE_IN_REDUCED_COST debug block calls
phase2::compute_reduced_cost_update but references out-of-scope symbols (lp,
basic_list, nonbasic_list); to fix, thread the required context into
compute_delta_z by updating its signature (and all callers) to accept the LP and
basis lists (or pass appropriate local equivalents available in this translation
unit) and then forward those parameters into
phase2::compute_reduced_cost_update, or alternatively construct/obtain the
needed lp, basic_list and nonbasic_list inside compute_delta_z before the debug
call so the debug path compiles; ensure you update function names
compute_delta_z and the call site(s) consistently.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: eb905199-9239-4fa9-a7c5-55a6b8c381b6
📒 Files selected for processing (6)
- cpp/src/branch_and_bound/branch_and_bound.cpp
- cpp/src/branch_and_bound/pseudo_costs.cpp
- cpp/src/branch_and_bound/pseudo_costs.hpp
- cpp/src/dual_simplex/phase2.cpp
- cpp/src/dual_simplex/phase2.hpp
- cpp/src/math_optimization/solver_settings.cu
```cpp
#ifdef CHECK_CHANGE_IN_REDUCED_COST
  const i_t m = A_transpose.n;
  const i_t n = A_transpose.m;
  std::vector<f_t> delta_y_dense(m);
  delta_y.to_dense(delta_y_dense);
  std::vector<f_t> delta_z_check(n);
  std::vector<i_t> delta_z_mark_check(n, 0);
  std::vector<i_t> delta_z_indices_check;
  phase2::compute_reduced_cost_update(lp,
                                      basic_list,
                                      nonbasic_list,
                                      delta_y_dense,
                                      leaving_index,
                                      direction,
                                      delta_z_mark_check,
                                      delta_z_indices_check,
                                      delta_z_check,
                                      work_estimate);
  f_t error_check = 0.0;
  for (i_t k = 0; k < n; ++k) {
    const f_t diff = std::abs(delta_z[k] - delta_z_check[k]);
    if (diff > 1e-6) {
      printf("delta_z error %d transpose %e no transpose %e diff %e\n",
             k,
             delta_z[k],
             delta_z_check[k],
             diff);
    }
    error_check = std::max(error_check, diff);
  }
  if (error_check > 1e-6) { printf("delta_z error %e\n", error_check); }
#endif
```
CHECK_CHANGE_IN_REDUCED_COST path is currently uncompilable after refactor.
At Line 183, the debug path uses phase2::compute_reduced_cost_update(...) and references lp, basic_list, and nonbasic_list, but those symbols are not in scope in this function. Enabling the macro will fail compilation.
Suggested fix (thread required context into compute_delta_z)
```diff
-template <typename i_t, typename f_t>
-void compute_delta_z(const csc_matrix_t<i_t, f_t>& A_transpose,
+template <typename i_t, typename f_t>
+void compute_delta_z(const lp_problem_t<i_t, f_t>& lp,
+                     const std::vector<i_t>& basic_list,
+                     const std::vector<i_t>& nonbasic_list,
+                     const csc_matrix_t<i_t, f_t>& A_transpose,
                      const sparse_vector_t<i_t, f_t>& delta_y,
                      i_t leaving_index,
                      i_t direction,
@@
-  phase2::compute_reduced_cost_update(lp,
-                                      basic_list,
-                                      nonbasic_list,
-                                      delta_y_dense,
-                                      leaving_index,
-                                      direction,
-                                      delta_z_mark_check,
-                                      delta_z_indices_check,
-                                      delta_z_check,
-                                      work_estimate);
+  compute_reduced_cost_update(lp,
+                              basic_list,
+                              nonbasic_list,
+                              delta_y_dense,
+                              leaving_index,
+                              direction,
+                              delta_z_mark_check,
+                              delta_z_indices_check,
+                              delta_z_check,
+                              work_estimate);
@@
-  compute_delta_z(A_transpose,
+  compute_delta_z(lp,
+                  basic_list,
+                  nonbasic_list,
+                  A_transpose,
                   delta_y_sparse,
                   leaving_index,
                   direction,
                   nonbasic_mark,
                   delta_z_mark,
                   delta_z_indices,
                   delta_z,
                   phase2_work_estimate);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cpp/src/dual_simplex/phase2.cpp` around lines 175 - 206, The
CHECK_CHANGE_IN_REDUCED_COST debug block calls
phase2::compute_reduced_cost_update but references out-of-scope symbols (lp,
basic_list, nonbasic_list); to fix, thread the required context into
compute_delta_z by updating its signature (and all callers) to accept the LP and
basis lists (or pass appropriate local equivalents available in this translation
unit) and then forward those parameters into
phase2::compute_reduced_cost_update, or alternatively construct/obtain the
needed lp, basic_list and nonbasic_list inside compute_delta_z before the debug
call so the debug path compiles; ensure you update function names
compute_delta_z and the call site(s) consistently.
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cpp/src/branch_and_bound/pseudo_costs.cpp`:
- Around line 420-431: The cutoff sum in the else branch of pseudo_costs.cpp may
multiply objective coefficients by infinite bounds; update the computation in
the block that sets child_settings.cut_off to check
std::isfinite(child_problem.upper[i]) and std::isfinite(child_problem.lower[i])
before doing the multiplication with original_lp.objective[i]; skip terms with
non-finite bounds, and if a skipped term has a non-zero objective coefficient
treat the cutoff as unbounded (set child_settings.cut_off to +/-INFINITY based
on the sign of original_lp.objective[i]) or otherwise keep the finite
accumulated sum, ensuring no multiplication by non-finite values occurs (refer
to child_settings.cut_off, original_lp.objective, and
child_problem.upper/lower).
- Around line 140-161: The debug block wrongly accesses
delta_y.lp_solution.x[k]; delta_y is a sparse_vector_t with arrays .i and .x, so
replace delta_y.lp_solution.x[k] with delta_y.x[k] (keeping the loop over k <
delta_y.i.size()) so the index/value pairs are read correctly; leave the rest of
the CHECK_DELTA_OBJ computation (uses delta_z_indices, vstatus, delta_z,
variable_j, delta_obj_check, step_length) unchanged so the validity check
compiles and compares delta_obj_check to delta_obj as intended.
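The fix above hinges on `sparse_vector_t` storing parallel index/value arrays. A minimal sketch of that layout follows; the struct here is a simplified hypothetical, not cuOpt's actual definition, but it shows why `delta_y.x[k]` (not a nested `lp_solution.x`) pairs with `delta_y.i[k]`:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical minimal sparse vector with parallel arrays, as the review
// describes: .i holds nonzero indices, .x holds the matching values.
template <typename i_t, typename f_t>
struct sparse_vector_t {
  std::vector<i_t> i;  // nonzero indices
  std::vector<f_t> x;  // nonzero values; x[k] belongs to index i[k]
};

// Correct traversal: read v.x[k] alongside v.i[k] over k < v.i.size().
template <typename i_t, typename f_t>
f_t sparse_sum(const sparse_vector_t<i_t, f_t>& v)
{
  f_t sum = 0;
  for (std::size_t k = 0; k < v.i.size(); ++k) {
    sum += v.x[k];  // i.e. delta_y.x[k], not delta_y.lp_solution.x[k]
  }
  return sum;
}
```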
- Around line 121-131: The debug block references an undefined variable y
causing a compile error; change the code to use the solver's dual vector (e.g.
lp_solution.y) or declare a local alias before use. Specifically, in the
CHECK_DUAL_FEASIBILITY block update the call to matrix_transpose_vector_multiply
to pass lp_solution.y (or create vector<f_t> y = lp_solution.y) instead of the
undefined y so dual_residual, matrix_transpose_vector_multiply, vector_norm_inf
and settings.log.printf work correctly.
- Around line 296-308: The fallback cutoff calculation when upper_bound is not
finite can produce +/-inf by multiplying objective coefficients with infinite
variable bounds; update the block that sets child_settings.cut_off (the else
branch using original_lp.objective, child_problem.upper and child_problem.lower)
to ignore terms where the corresponding child_problem.upper or
child_problem.lower is not finite (or only compute the fallback when all
contributing bounds are finite), summing only finite products and leaving
child_settings.cut_off at a safe default (or signaling no cutoff) if any
required bound is infinite; ensure the logic preserves the existing behavior
when bounds are finite and still adds settings.dual_tol when applicable.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 7905b856-7959-4fe1-bae4-678afac404e8
📒 Files selected for processing (1)
cpp/src/branch_and_bound/pseudo_costs.cpp
/ok to test 703f30c
🧹 Nitpick comments (2)
cpp/src/branch_and_bound/branch_and_bound.cpp (2)
1992-1998: Consider documenting the magic value -2 for reliability branching mode.
The value -2 for settings_.reliability_branching enables dual-simplex objective estimation, but this isn't documented in the code. Other code paths check reliability_branching != 0 for general reliability branching (line 858), making the semantics of different values unclear. Consider adding a comment or using a named constant to clarify the different modes:
- 0: disabled
- -2: enabled with dual pivot estimation
- Other non-zero values: enabled without dual pivot estimation (?)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/branch_and_bound/branch_and_bound.cpp` around lines 1992 - 1998, The branch sets behavior when settings_.reliability_branching == -2 but that magic value is undocumented; replace the literal with a named constant or enum (e.g., RELIABILITY_BRANCHING_DUAL_PIVOT) and add a short comment describing the modes (0 = disabled, -2 = dual-pivot objective estimation, other non-zero = reliability branching without dual pivot). Update the check in this block that sets pc_.reliability_branching_settings.rank_candidates_with_dual_pivot and any other checks that compare settings_.reliability_branching (e.g., places using "reliability_branching != 0") to use the new named constant or enum variants so the semantics are clear throughout the codebase.
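One way to name the modes this comment describes, sketched with hypothetical constants and helper names (the actual parameter plumbing in solver_settings is not shown; only the 0 / -2 / non-zero semantics come from the review text):

```cpp
#include <cassert>

// Hypothetical named modes for settings_.reliability_branching. The -2
// sentinel and the "!= 0 means enabled" convention are from the review.
namespace reliability_mode {
constexpr int DISABLED   = 0;
constexpr int DUAL_PIVOT = -2;  // enabled with dual-pivot objective estimation
}  // namespace reliability_mode

inline bool reliability_branching_enabled(int mode)
{
  return mode != reliability_mode::DISABLED;
}

inline bool rank_candidates_with_dual_pivot(int mode)
{
  return mode == reliability_mode::DUAL_PIVOT;
}
```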
792-810: Debug output should use logger or be removed before merge.
This std::cout with std::format bypasses the existing logging infrastructure (settings_.log), which handles log prefixes, file redirection, and verbosity control. Given the PR is marked "[WIP]", this appears to be temporary diagnostic code. Consider:
- Removing this output before merging, or
- Converting it to use settings_.log.printf() if the output is intended to be permanent, or
- Wrapping it in a #ifdef debug macro if needed for development only

♻️ Suggested conversion to logger (if keeping)
```diff
-  std::cout << std::format(
-                "{}: user_obj={:.3g}, solver_obj={:.3g}, user_lower={:.3g}, "
-                "solver_lower={:.3g}, user_gap={:.3g}, "
-                "solver_gap={:.3g}, tol={:.3g}",
-                feasible_solution_symbol(thread_type),
-                user_upper,
-                upper,
-                user_lower,
-                lower,
-                std::abs(user_upper - user_lower),
-                std::abs(upper - lower),
-                settings_.absolute_mip_gap_tol)
-            << std::endl;
+  settings_.log.debug(
+    "%c: user_obj=%.3g, solver_obj=%.3g, user_lower=%.3g, "
+    "solver_lower=%.3g, user_gap=%.3g, solver_gap=%.3g, tol=%.3g\n",
+    feasible_solution_symbol(thread_type),
+    user_upper,
+    upper,
+    user_lower,
+    lower,
+    std::abs(user_upper - user_lower),
+    std::abs(upper - lower),
+    settings_.absolute_mip_gap_tol);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/branch_and_bound/branch_and_bound.cpp` around lines 792 - 810, The debug std::cout/std::format block should be removed or converted to use the project's logging API; replace the print in branch_and_bound.cpp (the block that reads upper_bound_, get_lower_bound(), compute_user_objective(original_lp_, ...), feasible_solution_symbol(...), settings_.absolute_mip_gap_tol) with a call to settings_.log.printf(...) (or remove entirely) so logs respect prefixes, redirection and verbosity; if this is for temporary debugging only, wrap it in a debug macro instead of using std::cout.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@cpp/src/branch_and_bound/branch_and_bound.cpp`:
- Around line 1992-1998: The branch sets behavior when
settings_.reliability_branching == -2 but that magic value is undocumented;
replace the literal with a named constant or enum (e.g.,
RELIABILITY_BRANCHING_DUAL_PIVOT) and add a short comment describing the modes
(0 = disabled, -2 = dual-pivot objective estimation, other non-zero =
reliability branching without dual pivot). Update the check in this block that
sets pc_.reliability_branching_settings.rank_candidates_with_dual_pivot and any
other checks that compare settings_.reliability_branching (e.g., places using
"reliability_branching != 0") to use the new named constant or enum variants so
the semantics are clear throughout the codebase.
- Around line 792-810: The debug std::cout/std::format block should be removed
or converted to use the project's logging API; replace the print in
branch_and_bound.cpp (the block that reads upper_bound_, get_lower_bound(),
compute_user_objective(original_lp_, ...), feasible_solution_symbol(...),
settings_.absolute_mip_gap_tol) with a call to settings_.log.printf(...) (or
remove entirely) so logs respect prefixes, redirection and verbosity; if this is
for temporary debugging only, wrap it in a debug macro instead of using
std::cout.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 91d703fa-65d3-4ee5-8b05-99dbef2b49dd
📒 Files selected for processing (2)
- cpp/src/branch_and_bound/branch_and_bound.cpp
- cpp/src/dual_simplex/phase2.cpp
🚧 Files skipped from review as they are similar to previous changes (1)
- cpp/src/dual_simplex/phase2.cpp
Signed-off-by: Nicolas Guidotti <224634272+nguidotti@users.noreply.github.com>
Actionable comments posted: 1
♻️ Duplicate comments (3)
cpp/src/branch_and_bound/pseudo_costs.cpp (3)
231-249: ⚠️ Potential issue | 🟠 Major — Guard the dual-pivot path for nonbasic fractional candidates.
fractional is built from integrality alone, not basis status, so j can legitimately be fractional while basic_map[j] == -1. These two call sites still forward that -1 into single_pivot_objective_change_estimate(), which gives the estimator an invalid basis row. Leave root strong-branch values as NaN in that case, and fall back to the existing non-pivot ranking path for unreliable candidates with no basic row.

🛡️ Example guard for the root seeding path
```diff
   for (i_t k = 0; k < fractional.size(); k++) {
     const i_t j = fractional[k];
     assert(j >= 0);
+    if (basic_map[j] < 0) {
+      pc.strong_branch_down[k] = std::numeric_limits<f_t>::quiet_NaN();
+      pc.strong_branch_up[k]   = std::numeric_limits<f_t>::quiet_NaN();
+      continue;
+    }
     objective_change_estimate_t<f_t> estimate =
       single_pivot_objective_change_estimate(lp,
                                              settings,
```
Also applies to: 922-940
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/branch_and_bound/pseudo_costs.cpp` around lines 231 - 249, fractional contains indices regardless of basis status so you must guard the dual-pivot path when basic_map[j] == -1: before calling single_pivot_objective_change_estimate(…) for candidate j, check if basic_map[j] is -1 and if so skip the pivot-based estimate, leave any root strong-branch values as NaN (or mark the estimate invalid) and let the code fall back to the existing non-pivot ranking path; apply the same guard to the other identical call site that also forwards basic_map[j] into single_pivot_objective_change_estimate.
328-330: ⚠️ Potential issue | 🟠 Major — Don't treat CUTOFF as a primal objective.
dual_phase2*() can return CUTOFF before it has a primal-feasible child solution. Using compute_objective(child_problem, solution.x) on that path can understate the branch penalty and poison the pseudo-cost update. Use child_settings.cut_off (or another solver-reported dual bound) for CUTOFF, and keep the primal-objective path for OPTIMAL/ITERATION_LIMIT only.

🎯 Bound-safe handling
```diff
-  } else if (status == dual::status_t::OPTIMAL || status == dual::status_t::ITERATION_LIMIT ||
-             status == dual::status_t::CUTOFF) {
+  } else if (status == dual::status_t::OPTIMAL || status == dual::status_t::ITERATION_LIMIT) {
     obj = compute_objective(child_problem, solution.x);
+  } else if (status == dual::status_t::CUTOFF) {
+    obj = child_settings.cut_off;
   } else {
```
Also applies to: 469-471
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/branch_and_bound/pseudo_costs.cpp` around lines 328 - 330, The code treats dual::status_t::CUTOFF like a primal-feasible result and calls compute_objective(child_problem, solution.x), which is unsafe; modify the status handling in the dual_phase2* result processing so that only OPTIMAL and ITERATION_LIMIT use compute_objective(child_problem, solution.x) to produce a primal objective, while CUTOFF uses the solver-reported dual bound (e.g., child_settings.cut_off or another reported bound) to set obj; update the branch-and-bound pseudo-cost update paths (the block around handling status in pseudo_costs.cpp and the analogous handling in the second occurrence) to select the bound source based on status values rather than always using solution.x.
297-308: ⚠️ Potential issue | 🟠 Major — Only derive the fallback cutoff from finite bounds.
When no incumbent exists, this fallback still multiplies objective coefficients by ±inf bounds. That can make child_settings.cut_off non-finite, which either disables pruning or turns the trial LP into an immediate bogus cutoff. If any contributing bound is non-finite, skip the derived cutoff and leave cut_off at +inf.

📉 Safer cutoff derivation
```diff
   if (std::isfinite(upper_bound)) {
     child_settings.cut_off = upper_bound + settings.dual_tol;
   } else {
-    child_settings.cut_off = 0;
+    f_t derived_cutoff = 0;
+    bool finite_cutoff = true;
     for (i_t i = 0; i < original_lp.num_cols; ++i) {
       if (original_lp.objective[i] < 0) {
-        child_settings.cut_off += original_lp.objective[i] * child_problem.upper[i];
+        if (!std::isfinite(child_problem.upper[i])) { finite_cutoff = false; break; }
+        derived_cutoff += original_lp.objective[i] * child_problem.upper[i];
       } else if (original_lp.objective[i] > 0) {
-        child_settings.cut_off += original_lp.objective[i] * child_problem.lower[i];
+        if (!std::isfinite(child_problem.lower[i])) { finite_cutoff = false; break; }
+        derived_cutoff += original_lp.objective[i] * child_problem.lower[i];
       }
     }
+    child_settings.cut_off = finite_cutoff ? derived_cutoff
+                                           : std::numeric_limits<f_t>::infinity();
   }
```
Also applies to: 420-431
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/branch_and_bound/pseudo_costs.cpp` around lines 297 - 308, The fallback cutoff derivation currently multiplies objective coefficients by child_problem.upper/lower without checking for infinities; change the logic in the block handling !std::isfinite(upper_bound) (affecting child_settings.cut_off computation) to first verify all contributing terms are finite before summing: iterate i over original_lp.num_cols and if any term original_lp.objective[i] or the selected bound (child_problem.upper[i] or child_problem.lower[i]) is non-finite, skip deriving a numeric cutoff and set child_settings.cut_off to +infinity (i.e., leave it as infinite); only when every contributing multiplication is finite compute the sum and assign it to child_settings.cut_off. Also apply the same finite-checking guard to the analogous code region referenced in the comment (the other block at lines ~420-431) so no derived cutoff uses ±inf bounds.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cpp/src/branch_and_bound/pseudo_costs.cpp`:
- Around line 155-158: The CHECK_DELTA_OBJ validation block is using an
undefined variable delta_obj in the settings.log.printf call; replace that
reference with the correct variable delta_obj_down (the down-branch objective
delta computed immediately before this block) so the printf compares and logs
delta_obj_check against delta_obj_down; update the string/arguments in the
CHECK_DELTA_OBJ block (where settings.log.printf is called) to use
delta_obj_down.
---
Duplicate comments:
In `@cpp/src/branch_and_bound/pseudo_costs.cpp`:
- Around line 231-249: fractional contains indices regardless of basis status so
you must guard the dual-pivot path when basic_map[j] == -1: before calling
single_pivot_objective_change_estimate(…) for candidate j, check if basic_map[j]
is -1 and if so skip the pivot-based estimate, leave any root strong-branch
values as NaN (or mark the estimate invalid) and let the code fall back to the
existing non-pivot ranking path; apply the same guard to the other identical
call site that also forwards basic_map[j] into
single_pivot_objective_change_estimate.
- Around line 328-330: The code treats dual::status_t::CUTOFF like a
primal-feasible result and calls compute_objective(child_problem, solution.x),
which is unsafe; modify the status handling in the dual_phase2* result
processing so that only OPTIMAL and ITERATION_LIMIT use
compute_objective(child_problem, solution.x) to produce a primal objective,
while CUTOFF uses the solver-reported dual bound (e.g., child_settings.cut_off
or another reported bound) to set obj; update the branch-and-bound pseudo-cost
update paths (the block around handling status in pseudo_costs.cpp and the
analogous handling in the second occurrence) to select the bound source based on
status values rather than always using solution.x.
- Around line 297-308: The fallback cutoff derivation currently multiplies
objective coefficients by child_problem.upper/lower without checking for
infinities; change the logic in the block handling !std::isfinite(upper_bound)
(affecting child_settings.cut_off computation) to first verify all contributing
terms are finite before summing: iterate i over original_lp.num_cols and if any
term original_lp.objective[i] or the selected bound (child_problem.upper[i] or
child_problem.lower[i]) is non-finite, skip deriving a numeric cutoff and set
child_settings.cut_off to +infinity (i.e., leave it as infinite); only when
every contributing multiplication is finite compute the sum and assign it to
child_settings.cut_off. Also apply the same finite-checking guard to the
analogous code region referenced in the comment (the other block at lines
~420-431) so no derived cutoff uses ±inf bounds.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 2976ba07-38ba-4543-be57-2786b05b584d
📒 Files selected for processing (2)
- cpp/src/branch_and_bound/branch_and_bound.cpp
- cpp/src/branch_and_bound/pseudo_costs.cpp
```cpp
if (std::abs(delta_obj_check - delta_obj) > 1e-6) {
  settings.log.printf("Delta obj check %e. Delta obj %e. Step length %e.\n",
                      delta_obj_check,
                      delta_obj,
```
🧩 Analysis chain
🏁 Script executed:
```shell
cd cpp/src/branch_and_bound && head -200 pseudo_costs.cpp | tail -70
```
Repository: NVIDIA/cuopt
Length of output: 2371
🏁 Script executed:
```shell
wc -l cpp/src/branch_and_bound/pseudo_costs.cpp
```
Repository: NVIDIA/cuopt
Length of output: 102
🏁 Script executed:
```shell
sed -n '140,170p' cpp/src/branch_and_bound/pseudo_costs.cpp
```
Repository: NVIDIA/cuopt
Length of output: 1154
Fix the undefined debug variable name in CHECK_DELTA_OBJ validation block.
At lines 155-158, the code references delta_obj which is not defined. The validation should compare against delta_obj_down, the down-branch objective delta calculated just before this block.
Fix
```diff
   if (std::abs(delta_obj_check - delta_obj) > 1e-6) {
     settings.log.printf("Delta obj check %e. Delta obj %e. Step length %e.\n",
                         delta_obj_check,
-                        delta_obj,
+                        delta_obj_down,
                         step_length);
   }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cpp/src/branch_and_bound/pseudo_costs.cpp` around lines 155 - 158, The
CHECK_DELTA_OBJ validation block is using an undefined variable delta_obj in the
settings.log.printf call; replace that reference with the correct variable
delta_obj_down (the down-branch objective delta computed immediately before this
block) so the printf compares and logs delta_obj_check against delta_obj_down;
update the string/arguments in the CHECK_DELTA_OBJ block (where
settings.log.printf is called) to use delta_obj_down.
Signed-off-by: Nicolas Guidotti <224634272+nguidotti@users.noreply.github.com>
/ok to test e8dea1e
Actionable comments posted: 1
🧹 Nitpick comments (1)
cpp/src/branch_and_bound/pseudo_costs.hpp (1)
481-489: Pass the scoring solution explicitly.
Dropping the explicit solution parameter means the API no longer guarantees that fractional and the values used to rank it come from the same LP snapshot. Pulling the solution implicitly from worker makes that contract harder to audit and easier to misuse; a const lp_solution_t<i_t, f_t>& argument would keep the dependency explicit.
As per coding guidelines: "Ensure variables and constraints are accessed from the correct problem context (original vs presolve vs folded vs postsolve); verify index mapping consistency across problem transformations".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/branch_and_bound/pseudo_costs.hpp` around lines 481 - 489, The function reliable_variable_selection currently relies on pulling the LP solution implicitly (via worker) while taking fractional; change its signature to accept an explicit const lp_solution_t<i_t, f_t>& solution parameter (add it before fractional or adjacent) so the scoring uses the exact solution snapshot used to compute fractional, update any internal uses to reference this solution instead of worker->... and update all callers (e.g., places that call reliable_variable_selection) to pass the matching lp_solution_t instance; also ensure variable/constraint index mappings are applied consistently between solution and fractional according to the problem transformation APIs (original/presolve/fold/postsolve).
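The suggested signature change can be sketched with simplified hypothetical types: by taking the solution explicitly (rather than reading it off a worker), the ranking provably uses the same snapshot that produced the fractional candidate list. The struct and function below are illustrations, not the cuOpt API.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical simplified stand-in for lp_solution_t: the primal values
// at the node's LP relaxation.
struct lp_solution_t {
  std::vector<double> x;
};

// Rank candidates by fractionality |x_j - round(x_j)|. Passing `solution`
// explicitly ties the scores to the snapshot that built `fractional`.
int most_fractional(const lp_solution_t& solution, const std::vector<int>& fractional)
{
  int best         = -1;
  double best_score = -1.0;
  for (int j : fractional) {
    const double frac = std::fabs(solution.x[j] - std::round(solution.x[j]));
    if (frac > best_score) {
      best_score = frac;
      best       = j;
    }
  }
  return best;
}
```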
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cpp/src/branch_and_bound/pseudo_costs.hpp`:
- Around line 406-410: The flag rank_candidates_with_dual_pivot is enabled by
default but depends on the transpose matrix AT which is only given a dummy value
unless A.transpose(pc_.AT) is called externally; to fix, either set
rank_candidates_with_dual_pivot to false by default (disable dual-pivot ranking
until AT is valid) or make pseudo_costs_t::resize()/constructor explicitly
initialize/populate pc_.AT (and clear/reset it on reuse) so that AT is always
valid before any ranking; update all occurrences (including the similar flags at
lines referenced) and ensure any code that calls rank_candidates_with_dual_pivot
verifies AT is populated (or triggers A.transpose(pc_.AT)) before using it.
---
Nitpick comments:
In `@cpp/src/branch_and_bound/pseudo_costs.hpp`:
- Around line 481-489: The function reliable_variable_selection currently relies
on pulling the LP solution implicitly (via worker) while taking fractional;
change its signature to accept an explicit const lp_solution_t<i_t, f_t>&
solution parameter (add it before fractional or adjacent) so the scoring uses
the exact solution snapshot used to compute fractional, update any internal uses
to reference this solution instead of worker->... and update all callers (e.g.,
places that call reliable_variable_selection) to pass the matching lp_solution_t
instance; also ensure variable/constraint index mappings are applied
consistently between solution and fractional according to the problem
transformation APIs (original/presolve/fold/postsolve).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 4fb6ba11-c7b1-4bcf-8b9c-03988b8e25f2
📒 Files selected for processing (1)
cpp/src/branch_and_bound/pseudo_costs.hpp
Signed-off-by: Nicolas Guidotti <224634272+nguidotti@users.noreply.github.com>
/ok to test 839d562
Signed-off-by: Nicolas Guidotti <224634272+nguidotti@users.noreply.github.com>
/ok to test b0cb041
Signed-off-by: Nicolas Guidotti <224634272+nguidotti@users.noreply.github.com>
/ok to test 7b5d429
Kh4ster left a comment:
LGTM :) I would advise doing another run on MIPLIB to confirm the results and also have a review from @chris-maes before merging
Signed-off-by: Nicolas Guidotti <224634272+nguidotti@users.noreply.github.com>
/ok to test 9c9d019
We can use a single dual simplex pivot to estimate the objective change after branching up or down on a variable (#963), which can later be used for ranking the unreliable variables in reliability branching. To minimize the cost of reliability branching, we only apply trial branching to the 100 best variables; other variables are not considered. Previously, the 100 unreliable variables were selected at random, which may not lead to the best branching decision. In this PR, we also apply a cutoff based on the objective coefficients when no incumbent is available.
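The selection scheme described above can be sketched as follows. This is a stand-alone illustration, not the PR's actual code: the scores stand in for the single-pivot objective-change estimates, and max_candidates stands in for the 100-variable limit.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Keep only the best-scoring unreliable variables for trial strong
// branching, instead of picking them at random. Each pair is
// (score, variable index); a higher score means a larger estimated
// objective change, i.e. a more promising branching candidate.
std::vector<int> select_trial_candidates(std::vector<std::pair<double, int>> scored,
                                         std::size_t max_candidates)
{
  // A partial sort is enough: only the top max_candidates entries matter.
  const std::size_t k = std::min(max_candidates, scored.size());
  std::partial_sort(scored.begin(), scored.begin() + k, scored.end(),
                    [](const auto& a, const auto& b) { return a.first > b.first; });
  std::vector<int> chosen;
  for (std::size_t i = 0; i < k; ++i) { chosen.push_back(scored[i].second); }
  return chosen;
}
```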
MIPLIB Benchmark on a GH200 system:
In other words, this PR improves the shifted geomean primal integral by ~15% and reduces the mean primal gap by ~6%.
Checklist