diff --git a/.cursor/rules/126-java-observability-logging.md b/.cursor/rules/126-java-observability-logging.md
index aa1d8ae6..79f86233 100644
--- a/.cursor/rules/126-java-observability-logging.md
+++ b/.cursor/rules/126-java-observability-logging.md
@@ -1,5 +1,6 @@
---
name: 126-java-observability-logging
+description: Use when you need to implement or improve Java logging and observability — including selecting SLF4J with Logback/Log4j2, applying proper log levels, parameterized logging, secure logging without sensitive data exposure, environment-specific configuration, log aggregation and monitoring, or validating logging through tests.
license: Apache-2.0
metadata:
author: Juan Antonio Breña Moral
diff --git a/.cursor/rules/151-java-performance-jmeter.md b/.cursor/rules/151-java-performance-jmeter.md
index 0850a689..a264115a 100644
--- a/.cursor/rules/151-java-performance-jmeter.md
+++ b/.cursor/rules/151-java-performance-jmeter.md
@@ -1,5 +1,6 @@
---
name: 151-java-performance-jmeter
+description: Use when you need to set up JMeter performance testing for a Java project — including creating the run-jmeter.sh script from the exact template, configuring load tests with loops, threads, and ramp-up, or running performance tests from the project root.
license: Apache-2.0
metadata:
author: Juan Antonio Breña Moral
diff --git a/.cursor/rules/161-java-profiling-detect.md b/.cursor/rules/161-java-profiling-detect.md
index d9d9c196..642b7fac 100644
--- a/.cursor/rules/161-java-profiling-detect.md
+++ b/.cursor/rules/161-java-profiling-detect.md
@@ -1,5 +1,6 @@
---
name: 161-java-profiling-detect
+description: Use when you need to set up Java application profiling to detect and measure performance issues — including automated async-profiler v4.0 setup, problem-driven profiling (CPU, memory, threading, GC, I/O), interactive profiling scripts, or collecting profiling data with flamegraphs and JFR recordings.
license: Apache-2.0
metadata:
author: Juan Antonio Breña Moral
diff --git a/.cursor/rules/162-java-profiling-analyze.md b/.cursor/rules/162-java-profiling-analyze.md
index ce8ee9a4..e65cb4f7 100644
--- a/.cursor/rules/162-java-profiling-analyze.md
+++ b/.cursor/rules/162-java-profiling-analyze.md
@@ -1,5 +1,6 @@
---
name: 162-java-profiling-analyze
+description: Use when you need to analyze Java profiling data collected during the detection phase — including interpreting flamegraphs, memory allocation patterns, CPU hotspots, threading issues, systematic problem categorization, evidence documentation, or prioritizing fixes using Impact/Effort scoring.
license: Apache-2.0
metadata:
author: Juan Antonio Breña Moral
diff --git a/.cursor/rules/163-java-profiling-refactor.md b/.cursor/rules/163-java-profiling-refactor.md
index 5835991a..d748ea7d 100644
--- a/.cursor/rules/163-java-profiling-refactor.md
+++ b/.cursor/rules/163-java-profiling-refactor.md
@@ -1,5 +1,6 @@
---
name: 163-java-profiling-refactor
+description: Use when you need to refactor Java code based on profiling analysis findings — including reviewing profiling-problem-analysis and profiling-solutions documents, identifying specific performance bottlenecks, and implementing targeted code changes to address them.
license: Apache-2.0
metadata:
author: Juan Antonio Breña Moral
diff --git a/.cursor/rules/164-java-profiling-verify.md b/.cursor/rules/164-java-profiling-verify.md
index 6714a438..899b80bc 100644
--- a/.cursor/rules/164-java-profiling-verify.md
+++ b/.cursor/rules/164-java-profiling-verify.md
@@ -1,5 +1,6 @@
---
name: 164-java-profiling-verify
+description: Use when you need to verify Java performance optimizations by comparing profiling results before and after refactoring — including baseline validation, post-refactoring report generation, quantitative before/after metrics comparison, side-by-side flamegraph analysis, or creating profiling-comparison-analysis and profiling-final-results documentation.
license: Apache-2.0
metadata:
author: Juan Antonio Breña Moral
diff --git a/README.md b/README.md
index 7a97d5d4..db389a55 100644
--- a/README.md
+++ b/README.md
@@ -8,7 +8,7 @@
## Goal
-The project provides a curated collection of `System prompts` & `Skills` for moden `SDLC` that help software engineers and pipelines in their daily work for Java Enterprise development.
+The project provides a curated collection of `System prompts` & `Skills` for modern `SDLC` that help software engineers and pipelines in their daily work for Java Enterprise development.
The project adds support for:
diff --git a/docs/2/index.html b/docs/2/index.html
index a45065ff..4f05f035 100644
--- a/docs/2/index.html
+++ b/docs/2/index.html
@@ -78,7 +78,7 @@
Cursor rules and Skills for Java
- A curated collection of System prompts and Skills for Java Enterprise development that help software engineers in their daily programming work & pipelines.
+ A curated collection of System prompts & Skills for modern SDLC that help software engineers and pipelines in their daily work for Java Enterprise development
@@ -93,7 +93,7 @@ Cursor rules and Skills for Java
Cursor rules and Skills for Java
- A curated collection of System prompts and Skills for Java Enterprise development that help software engineers in their daily programming work & pipelines.
+ A curated collection of System prompts & Skills for modern SDLC that help software engineers and pipelines in their daily work for Java Enterprise development
diff --git a/docs/feed.xml b/docs/feed.xml
index 314f3aff..5ea7d67a 100644
--- a/docs/feed.xml
+++ b/docs/feed.xml
@@ -2,7 +2,7 @@
https://jabrena.github.io/cursor-rules-java/
Cursor rules and Skills for Java
- 2026-03-09T08:27:24+0100
+ 2026-03-17T21:07:38+0100
diff --git a/docs/index.html b/docs/index.html
index 6a562ee9..df3dfc41 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -78,7 +78,7 @@
Cursor rules and Skills for Java
- A curated collection of System prompts and Skills for Java Enterprise development that help software engineers in their daily programming work & pipelines.
+ A curated collection of System prompts & Skills for modern SDLC that help software engineers and pipelines in their daily work for Java Enterprise development
@@ -93,7 +93,7 @@ Cursor rules and Skills for Java
Cursor rules and Skills for Java
- A curated collection of System prompts and Skills for Java Enterprise development that help software engineers in their daily programming work & pipelines.
+ A curated collection of System prompts & Skills for modern SDLC that help software engineers and pipelines in their daily work for Java Enterprise development
diff --git a/pom.xml b/pom.xml
index c0869305..fc185dd7 100644
--- a/pom.xml
+++ b/pom.xml
@@ -10,7 +10,7 @@
0.13.0-SNAPSHOT
pom
cursor-rules-java
- The project provides a curated collection of System prompts and Skills for moden
+ The project provides a curated collection of System prompts and Skills for modern
SDLC that help software engineers and pipelines in their daily work for Java Enterprise
development.
diff --git a/site-generator/jbake.properties b/site-generator/jbake.properties
index 658fa868..de5ac68d 100644
--- a/site-generator/jbake.properties
+++ b/site-generator/jbake.properties
@@ -4,7 +4,7 @@ server.contextPath=/cursor-rules-java
site.lang=en
site.title=Cursor rules and Skills for Java
-site.subtitle=A curated collection of System prompts and Skills for Java Enterprise development that help software engineers in their daily programming work & pipelines.
+site.subtitle=A curated collection of System prompts & Skills for modern SDLC that help software engineers and pipelines in their daily work for Java Enterprise development
site.avatar=images/avatar-icon.png
index.bigimg=images/jwst-image-rawpixel.jpg
diff --git a/skills-generator/src/main/resources/skill-inventory.json b/skills-generator/src/main/resources/skill-inventory.json
index 6b5aa1a1..d24ecec2 100644
--- a/skills-generator/src/main/resources/skill-inventory.json
+++ b/skills-generator/src/main/resources/skill-inventory.json
@@ -13,12 +13,19 @@
{"id": "123", "xml": true},
{"id": "124", "xml": true},
{"id": "125", "xml": true},
+ {"id": "126", "xml": true},
{"id": "128", "xml": true},
{"id": "131", "xml": true},
{"id": "132", "xml": true},
{"id": "141", "xml": true},
{"id": "142", "xml": true},
{"id": "143", "xml": true},
+ {"id": "144", "xml": true},
+ {"id": "151", "xml": true},
+ {"id": "161", "xml": true},
+ {"id": "162", "xml": true},
+ {"id": "163", "xml": true},
+ {"id": "164", "xml": true},
{"id": "170", "xml": true},
{"id": "171", "xml": true},
{"id": "172", "xml": true},
diff --git a/skills-generator/src/main/resources/skills/126-skill.xml b/skills-generator/src/main/resources/skills/126-skill.xml
new file mode 100644
index 00000000..9033c55d
--- /dev/null
+++ b/skills-generator/src/main/resources/skills/126-skill.xml
@@ -0,0 +1,40 @@
+
+
+
+ Juan Antonio Breña Moral
+ 0.13.0-SNAPSHOT
+ Apache-2.0
+
+
+ Java Logging Best Practices
+ Use when you need to implement or improve Java logging and observability — including selecting SLF4J with Logback/Log4j2, applying proper log levels (ERROR, WARN, INFO, DEBUG, TRACE), parameterized logging, secure logging without sensitive data exposure, environment-specific configuration, log aggregation and monitoring, or validating logging through tests.
+
+
+
+ Before applying any logging recommendations, ensure the project compiles. Compilation failure is a blocking condition. After applying improvements, run full verification.
+
+ **MANDATORY**: Run `./mvnw compile` or `mvn compile` before applying any change
+ **SAFETY**: If compilation fails, stop immediately — do not proceed until resolved
+ **VERIFY**: Run `./mvnw clean verify` or `mvn clean verify` after applying improvements
+ **BEFORE APPLYING**: Read the reference for detailed good/bad examples, constraints, and safeguards for each logging pattern
+
+
+
+
+
diff --git a/skills-generator/src/main/resources/skills/144-skill.xml b/skills-generator/src/main/resources/skills/144-skill.xml
new file mode 100644
index 00000000..77815361
--- /dev/null
+++ b/skills-generator/src/main/resources/skills/144-skill.xml
@@ -0,0 +1,40 @@
+
+
+
+ Juan Antonio Breña Moral
+ 0.13.0-SNAPSHOT
+ Apache-2.0
+
+
+ Java Data-Oriented Programming Best Practices
+ Use when you need to apply data-oriented programming best practices in Java — including separating code (behavior) from data structures using records, designing immutable data with pure transformation functions, keeping data flat and denormalized with ID-based references, starting with generic data structures converting to specific types when needed, ensuring data integrity through pure validation functions, and creating flexible generic data access layers.
+
+
+
+ Before applying any data-oriented programming recommendations, ensure the project compiles. Compilation failure is a blocking condition. After applying improvements, run full verification.
+
+ **MANDATORY**: Run `./mvnw compile` or `mvn compile` before applying any change
+ **SAFETY**: If compilation fails, stop immediately — do not proceed until resolved
+ **VERIFY**: Run `./mvnw clean verify` or `mvn clean verify` after applying improvements
+ **BEFORE APPLYING**: Read the reference for detailed good/bad examples, constraints, and safeguards for each data-oriented programming pattern
+
+
+
+
+
diff --git a/skills-generator/src/main/resources/skills/151-skill.xml b/skills-generator/src/main/resources/skills/151-skill.xml
new file mode 100644
index 00000000..404629d2
--- /dev/null
+++ b/skills-generator/src/main/resources/skills/151-skill.xml
@@ -0,0 +1,38 @@
+
+
+
+ Juan Antonio Breña Moral
+ 0.13.0-SNAPSHOT
+ Apache-2.0
+
+
+ Run performance tests based on JMeter
+ Use when you need to set up JMeter performance testing for a Java project — including creating the run-jmeter.sh script from the exact template, configuring load tests with loops, threads, and ramp-up, or running performance tests from the project root with custom or default settings.
+
+
+
+ JMeter must be installed and available in PATH. If not available, show a message and exit. Use only the exact template for the run-jmeter.sh script.
+
+ **PREREQUISITE**: Verify JMeter is installed and accessible via `jmeter --version` before creating the script
+ **CRITICAL**: Copy the run-jmeter.sh template exactly — do not modify, interpret, or enhance
+ **PERMISSION**: Make the script executable with `chmod +x run-jmeter.sh`
+ **BEFORE APPLYING**: Read the reference for the exact script template and usage instructions
+
+
+
+
+
diff --git a/skills-generator/src/main/resources/skills/161-skill.xml b/skills-generator/src/main/resources/skills/161-skill.xml
new file mode 100644
index 00000000..32e7fc85
--- /dev/null
+++ b/skills-generator/src/main/resources/skills/161-skill.xml
@@ -0,0 +1,39 @@
+
+
+
+ Juan Antonio Breña Moral
+ 0.13.0-SNAPSHOT
+ Apache-2.0
+
+
+ Java Profiling Workflow / Step 1 / Collect data to measure potential issues
+ Use when you need to set up Java application profiling to detect and measure performance issues — including automated async-profiler v4.0 setup, problem-driven profiling (CPU, memory, threading, GC, I/O), interactive profiling scripts, JFR integration with Java 25 (JEP 518, JEP 520), or collecting profiling data with flamegraphs and JFR recordings.
+
+
+
+ Copy bash scripts exactly from templates. Ensure JVM flags are applied for profiling compatibility. Verify Java processes are running before attaching profiler.
+
+ **CRITICAL**: Copy the bash script templates exactly — do not modify, interpret, or enhance
+ **SETUP**: Create profiler directory structure: profiler/scripts, profiler/results
+ **EXECUTABLE**: Make scripts executable with chmod +x
+ **BEFORE APPLYING**: Read the reference for exact script templates and setup instructions
+
+
+
+
+
diff --git a/skills-generator/src/main/resources/skills/162-skill.xml b/skills-generator/src/main/resources/skills/162-skill.xml
new file mode 100644
index 00000000..55e557af
--- /dev/null
+++ b/skills-generator/src/main/resources/skills/162-skill.xml
@@ -0,0 +1,38 @@
+
+
+
+ Juan Antonio Breña Moral
+ 0.13.0-SNAPSHOT
+ Apache-2.0
+
+
+ Java Profiling Workflow / Step 2 / Analyze profiling data
+ Use when you need to analyze Java profiling data collected during the detection phase — including interpreting flamegraphs, memory allocation patterns, CPU hotspots, threading issues, systematic problem categorization, evidence documentation with profiling-problem-analysis and profiling-solutions markdown files, or prioritizing fixes using Impact/Effort scoring.
+
+
+
+ Validate profiling results represent realistic load before analysis. Document assumptions and limitations. Cross-reference multiple files.
+
+ **VALIDATE**: Ensure profiling results represent realistic load scenarios before analysis
+ **DOCUMENT**: Record assumptions and limitations in analysis reports
+ **CROSS-REFERENCE**: Use multiple profiling files to validate findings
+ **BEFORE APPLYING**: Read the reference for problem analysis and solutions templates
+
+
+
+
+
diff --git a/skills-generator/src/main/resources/skills/163-skill.xml b/skills-generator/src/main/resources/skills/163-skill.xml
new file mode 100644
index 00000000..caaa8632
--- /dev/null
+++ b/skills-generator/src/main/resources/skills/163-skill.xml
@@ -0,0 +1,36 @@
+
+
+
+ Juan Antonio Breña Moral
+ 0.13.0-SNAPSHOT
+ Apache-2.0
+
+
+ Java Profiling Workflow / Step 3 / Refactor code to fix issues
+ Use when you need to refactor Java code based on profiling analysis findings — including reviewing docs/profiling-problem-analysis and docs/profiling-solutions, identifying specific performance bottlenecks, and implementing targeted code changes to address CPU, memory, or threading issues.
+
+
+
+ Verify that changes pass all tests before considering the refactoring complete.
+
+ **MANDATORY**: Run `./mvnw clean verify` or `mvn clean verify` after applying refactoring
+ **SAFETY**: If tests fail, fix issues before proceeding
+ **BEFORE APPLYING**: Read the analysis and solutions documents for specific recommendations
+
+
+
+
+
diff --git a/skills-generator/src/main/resources/skills/164-skill.xml b/skills-generator/src/main/resources/skills/164-skill.xml
new file mode 100644
index 00000000..1a59bc26
--- /dev/null
+++ b/skills-generator/src/main/resources/skills/164-skill.xml
@@ -0,0 +1,39 @@
+
+
+
+ Juan Antonio Breña Moral
+ 0.13.0-SNAPSHOT
+ Apache-2.0
+
+
+ Java Profiling Workflow / Step 4 / Verify results
+ Use when you need to verify Java performance optimizations by comparing profiling results before and after refactoring — including baseline validation, post-refactoring report generation, quantitative before/after metrics comparison, side-by-side flamegraph analysis, regression detection, or creating profiling-comparison-analysis and profiling-final-results documentation.
+
+
+
+ Use identical test conditions between baseline and post-refactoring. Verify both report sets are complete. Document test scenarios.
+
+ **CONSISTENCY**: Use identical test conditions and load patterns for baseline and post-refactoring
+ **VALIDATE**: Ensure both baseline and post-refactoring reports exist and are non-empty before comparison
+ **DOCUMENT**: Record test scenarios and load patterns for reproduction
+ **BEFORE APPLYING**: Read the reference for comparison templates and validation steps
+
+
+
+
+
diff --git a/skills/126-java-observability-logging/SKILL.md b/skills/126-java-observability-logging/SKILL.md
new file mode 100644
index 00000000..2b894d30
--- /dev/null
+++ b/skills/126-java-observability-logging/SKILL.md
@@ -0,0 +1,36 @@
+---
+name: 126-java-observability-logging
+description: Use when you need to implement or improve Java logging and observability — including selecting SLF4J with Logback/Log4j2, applying proper log levels (ERROR, WARN, INFO, DEBUG, TRACE), parameterized logging, secure logging without sensitive data exposure, environment-specific configuration, log aggregation and monitoring, or validating logging through tests. Part of the skills-for-java project
+license: Apache-2.0
+metadata:
+ author: Juan Antonio Breña Moral
+ version: 0.13.0-SNAPSHOT
+---
+# Java Logging Best Practices
+
+Implement effective Java logging following standardized frameworks, meaningful log levels, core practices (parameterized logging, exception handling, no sensitive data), flexible configuration, security-conscious logging, monitoring and alerting, and comprehensive logging validation through testing.
+
+**What is covered in this Skill?**
+
+- Standardized framework selection: SLF4J facade with Logback or Log4j2
+- Meaningful and consistent log levels: ERROR, WARN, INFO, DEBUG, TRACE
+- Core practices: parameterized logging, proper exception handling, avoiding sensitive data
+- Configuration: environment-specific (logback.xml, log4j2.xml), output formats, log rotation
+- Security: mask sensitive data, control log access, secure transmission, GDPR/HIPAA compliance
+- Log monitoring and alerting: centralized aggregation (ELK, Splunk, Loki), automated alerts
+- Logging validation through testing: assert log messages, verify formats, test levels, measure performance impact
+
+**Scope:** The reference is organized by examples (good/bad code patterns) for each core area. Apply recommendations based on applicable examples.
+
+## Constraints
+
+Before applying any logging recommendations, ensure the project compiles. Compilation failure is a blocking condition. After applying improvements, run full verification.
+
+- **MANDATORY**: Run `./mvnw compile` or `mvn compile` before applying any change
+- **SAFETY**: If compilation fails, stop immediately — do not proceed until resolved
+- **VERIFY**: Run `./mvnw clean verify` or `mvn clean verify` after applying improvements
+- **BEFORE APPLYING**: Read the reference for detailed good/bad examples, constraints, and safeguards for each logging pattern
+
+## Reference
+
+For detailed guidance, examples, and constraints, see [references/126-java-observability-logging.md](references/126-java-observability-logging.md).
diff --git a/skills/126-java-observability-logging/references/126-java-observability-logging.md b/skills/126-java-observability-logging/references/126-java-observability-logging.md
new file mode 100644
index 00000000..79f86233
--- /dev/null
+++ b/skills/126-java-observability-logging/references/126-java-observability-logging.md
@@ -0,0 +1,1426 @@
+---
+name: 126-java-observability-logging
+description: Use when you need to implement or improve Java logging and observability — including selecting SLF4J with Logback/Log4j2, applying proper log levels, parameterized logging, secure logging without sensitive data exposure, environment-specific configuration, log aggregation and monitoring, or validating logging through tests.
+license: Apache-2.0
+metadata:
+ author: Juan Antonio Breña Moral
+ version: 0.13.0-SNAPSHOT
+---
+# Java Logging Best Practices
+
+## Role
+
+You are a senior software engineer with extensive experience in Java software development.
+
+## Goal
+
+Effective Java logging involves selecting a standard framework (SLF4J with Logback/Log4j2), using appropriate log levels (ERROR, WARN, INFO, DEBUG, TRACE),
+and adhering to core practices like parameterized logging, proper exception handling, and avoiding sensitive data exposure.
+Configuration should be environment-specific with clear output formats.
+Security is paramount: mask sensitive data, control log access, and ensure secure transmission.
+Implement centralized log aggregation, monitoring, and alerting for proactive issue detection.
+Finally, logging behavior and its impact should be validated through comprehensive testing.
+
+### Implementing These Principles
+
+These guidelines are built upon the following core principles:
+
+1. **Standardized Framework Selection**: Utilize a widely accepted logging facade (preferably SLF4J) and a robust underlying implementation (Logback or Log4j2). This promotes consistency, flexibility, and access to advanced logging features.
+2. **Meaningful and Consistent Log Levels**: Employ logging levels (ERROR, WARN, INFO, DEBUG, TRACE) deliberately and consistently to categorize the severity and importance of messages. This allows for effective filtering, monitoring, and targeted issue diagnosis.
+3. **Adherence to Core Logging Practices**: Follow fundamental best practices such as using parameterized logging (avoiding string concatenation for performance and clarity), always logging exceptions with their stack traces, never logging sensitive data directly (PII, credentials).
+4. **Thoughtful and Flexible Configuration**: Manage logging configuration externally (e.g., `logback.xml`, `log4j2.xml`). Tailor configurations for different environments (dev, test, prod) with appropriate log levels for various packages, clear and informative output formats (including timestamps, levels, logger names, thread info), and robust log rotation and retention policies.
+5. **Security-Conscious Logging**: Prioritize security in all logging activities. Actively mask or filter sensitive information, control access to log files and log management systems, use secure protocols for transmitting logs, and ensure compliance with relevant data protection regulations (e.g., GDPR, HIPAA).
+6. **Proactive Log Monitoring and Alerting**: Implement centralized log aggregation systems (e.g., ELK Stack, Splunk, Grafana Loki). Establish automated alerts based on log patterns, error rates, or specific critical events to enable proactive issue detection and rapid response.
+7. **Comprehensive Logging Validation Through Testing**: Integrate logging into the testing strategy. Assert that critical log messages (especially errors and warnings) are generated as expected under specific conditions, verify log formats, test log level filtering, and assess any performance impact of logging.
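
The validation idea in principle 7 can be sketched with the JDK's built-in `java.util.logging`, so the snippet stays dependency-free (the reference's own examples use SLF4J; the `CapturingHandler` class and logger names here are illustrative, not part of any framework):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class LogCaptureDemo {

    // Minimal in-memory handler that records every published LogRecord,
    // so a test can assert on messages and levels after the fact.
    static class CapturingHandler extends Handler {
        final List<LogRecord> records = new ArrayList<>();
        @Override public void publish(LogRecord record) { records.add(record); }
        @Override public void flush() { }
        @Override public void close() { }
    }

    public static void main(String[] args) {
        Logger logger = Logger.getLogger("demo");
        logger.setUseParentHandlers(false);
        CapturingHandler handler = new CapturingHandler();
        logger.addHandler(handler);
        logger.setLevel(Level.WARNING); // INFO and below should be filtered out

        logger.info("routine event");     // below threshold, not captured
        logger.warning("disk space low"); // captured

        System.out.println(handler.records.size());
        System.out.println(handler.records.get(0).getMessage());
    }
}
```

The same pattern carries over to Logback via its `ListAppender`: attach a capturing sink, exercise the code under test, then assert on the recorded events.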
+
+Remember, good logging in Java is about operational excellence - making your application's behavior transparent and debuggable while maintaining security and performance.
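
As a concrete illustration of principle 5, a small masking helper (a hypothetical `LogMasker`, not part of SLF4J or Logback) keeps sensitive values out of log messages while preserving enough of the value to correlate records:

```java
public class LogMasker {

    // Replace all but the last four characters with '*', so the value can
    // still be correlated across log entries without exposing the secret.
    // Values of four characters or fewer are masked entirely.
    public static String maskCardNumber(String cardNumber) {
        if (cardNumber == null || cardNumber.length() <= 4) {
            return "****";
        }
        return "*".repeat(cardNumber.length() - 4)
                + cardNumber.substring(cardNumber.length() - 4);
    }

    public static void main(String[] args) {
        // Typical call site: logger.info("Payment received for card {}",
        //                                LogMasker.maskCardNumber(card));
        System.out.println(maskCardNumber("4111111111111111")); // ************1111
    }
}
```

Applying the mask at the call site (rather than in an appender) keeps the raw value from ever entering the logging pipeline.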
+
+## Constraints
+
+Before applying any recommendations, ensure the project is in a valid state by running Maven compilation. Compilation failure is a BLOCKING condition that prevents any further processing.
+
+- **MANDATORY**: Run `./mvnw compile` or `mvn compile` before applying any change
+- **PREREQUISITE**: Project must compile successfully and pass basic validation checks before any optimization
+- **CRITICAL SAFETY**: If compilation fails, IMMEDIATELY STOP and DO NOT CONTINUE with any recommendations
+- **BLOCKING CONDITION**: Compilation errors must be resolved by the user before proceeding with any logging improvements
+- **NO EXCEPTIONS**: Under no circumstances should logging recommendations be applied to a project that fails to compile
+
+## Examples
+
+### Table of contents
+
+- Example 1: Choose an Appropriate Logging Framework
+- Example 2: Understand and Use Logging Levels Correctly
+- Example 3: Adhere to Core Logging Practices
+- Example 4: Follow Configuration Best Practices
+- Example 5: Implement Secure Logging Practices
+- Example 6: Establish Effective Log Monitoring and Alerting
+- Example 7: Incorporate Logging in Testing
+
+### Example 1: Choose an Appropriate Logging Framework
+
+Title: Select a Standard Logging Facade and Implementation
+Description: Using a standard logging facade like SLF4J allows for flexibility in choosing and switching an underlying logging implementation. The primary recommendation is SLF4J with Logback for its robustness and feature-richness.
+
+**Good example:**
+
+```java
+// GOOD: Using SLF4J facade with proper logger declaration
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import java.util.Objects;
+
+public class UserService {
+ // Logger declared using SLF4J - static final per class
+ private static final Logger logger = LoggerFactory.getLogger(UserService.class);
+
+ public void performAction(String input) {
+ // SLF4J parameterized logging - efficient and readable
+ logger.info("Performing action with input: {}", input);
+
+ if (Objects.isNull(input) || input.isEmpty()) {
+ logger.warn("Input is null or empty, using default behavior");
+ input = "default";
+ }
+
+ try {
+ processInput(input);
+ logger.debug("Action completed successfully for input: {}", input);
+ } catch (ProcessingException e) {
+ logger.error("Failed to process input: {}", input, e);
+ throw e;
+ }
+ }
+
+ public void performComplexOperation(String userId, String operation) {
+ // Using MDC (Mapped Diagnostic Context) for contextual logging
+ try (var mdcCloseable = org.slf4j.MDC.putCloseable("userId", userId)) {
+ logger.info("Starting operation: {}", operation);
+
+ // All log statements in this block will include userId context
+ performInternalSteps(operation);
+
+ logger.info("Operation completed successfully");
+ } catch (Exception e) {
+ logger.error("Operation failed", e);
+ throw e;
+ }
+ }
+
+ private void processInput(String input) throws ProcessingException {
+ // Simulate processing that might fail
+ if ("error".equals(input)) {
+ throw new ProcessingException("Invalid input: " + input);
+ }
+ logger.trace("Processing step completed for: {}", input);
+ }
+
+ private void performInternalSteps(String operation) {
+ logger.debug("Executing internal step 1 for operation: {}", operation);
+ // ... business logic
+ logger.debug("Executing internal step 2 for operation: {}", operation);
+ // ... more business logic
+ }
+
+    // Unchecked, so the rethrow in performAction compiles without a throws clause
+    private static class ProcessingException extends RuntimeException {
+        public ProcessingException(String message) {
+            super(message);
+        }
+    }
+}
+```
+
+**Bad example:**
+
+```java
+// AVOID: Using System.out.println or direct logging implementation
+public class BadLoggingService {
+
+ public void performAction(String input) {
+ // BAD: Using System.out.println - no control, no levels, no formatting
+ System.out.println("Starting action with: " + input);
+
+ if (input == null || input.isEmpty()) {
+ // BAD: Using System.err - not integrated with logging framework
+ System.err.println("Warning: Input is null or empty!");
+ }
+
+ try {
+ processInput(input);
+ System.out.println("Action completed for: " + input);
+ } catch (Exception e) {
+ // BAD: Just printing stack trace to stderr
+ e.printStackTrace();
+ // No structured logging, no log levels, no context
+ }
+ }
+
+ // BAD: Directly using concrete logging implementation
+ private java.util.logging.Logger julLogger =
+ java.util.logging.Logger.getLogger(BadLoggingService.class.getName());
+
+ public void anotherMethod() {
+ // BAD: Tied to specific implementation, less flexible
+ julLogger.info("Using JUL directly");
+ }
+
+ // BAD: Multiple logging frameworks in same class
+ private org.apache.logging.log4j.Logger log4jLogger =
+ org.apache.logging.log4j.LogManager.getLogger(BadLoggingService.class);
+
+ public void confusedLogging() {
+ julLogger.info("Using JUL");
+ log4jLogger.info("Using Log4j");
+ System.out.println("Using System.out");
+ // Inconsistent and confusing!
+    }
+
+    private void processInput(String input) throws Exception { /* ... */ }
+}
+```
+
+### Example 2: Understand and Use Logging Levels Correctly
+
+Title: Apply Appropriate Logging Levels for Messages
+Description: Use logging levels consistently to categorize the severity and importance of log messages. ERROR for critical issues, WARN for potentially harmful situations, INFO for important business events, DEBUG for detailed information, and TRACE for fine-grained debugging.
+
+**Good example:**
+
+```java
+// GOOD: Proper usage of logging levels
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import java.util.Objects;
+
+public class OrderProcessor {
+ private static final Logger logger = LoggerFactory.getLogger(OrderProcessor.class);
+
+ public void processOrder(String orderId, String customerId) {
+ logger.trace("Entering processOrder with orderId: {}, customerId: {}", orderId, customerId);
+
+ // INFO: Important business events
+ logger.info("Processing order {} for customer {}", orderId, customerId);
+
+ try {
+ // Validate inputs
+ if (Objects.isNull(orderId) || orderId.trim().isEmpty()) {
+ // WARN: Potentially harmful situation that can be recovered
+ logger.warn("Order ID is null or empty for customer {}. Using generated ID.", customerId);
+ orderId = generateOrderId();
+ }
+
+ // DEBUG: Detailed information for development/troubleshooting
+ logger.debug("Validating order {} inventory", orderId);
+ validateInventory(orderId);
+
+ logger.debug("Calculating pricing for order {}", orderId);
+ calculatePricing(orderId);
+
+ logger.debug("Processing payment for order {}", orderId);
+ processPayment(orderId);
+
+ // INFO: Successful completion of important business operation
+ logger.info("Order {} processed successfully for customer {}", orderId, customerId);
+
+ } catch (InventoryException e) {
+ // WARN: Expected business exception that can be handled
+ logger.warn("Insufficient inventory for order {}. Will backorder.", orderId, e);
+ handleBackorder(orderId);
+
+ } catch (PaymentException e) {
+ // ERROR: Critical failure requiring immediate attention
+ logger.error("Payment processing failed for order {}. Customer: {}",
+ orderId, customerId, e);
+ throw new OrderProcessingException("Payment failed", e);
+
+ } catch (Exception e) {
+ // ERROR: Unexpected critical error
+ logger.error("Unexpected error processing order {} for customer {}",
+ orderId, customerId, e);
+ throw new OrderProcessingException("Unexpected error", e);
+ }
+
+ logger.trace("Exiting processOrder for orderId: {}", orderId);
+ }
+
+ public void performHealthCheck() {
+ logger.debug("Starting health check");
+
+ try {
+ checkDatabaseConnection();
+ checkExternalServices();
+
+ // INFO: System status information
+ logger.info("Health check passed - all systems operational");
+
+ } catch (HealthCheckException e) {
+ // ERROR: System health issue requiring immediate attention
+ logger.error("Health check failed - system may be degraded", e);
+ }
+ }
+
+ // Example methods
+ private String generateOrderId() { return "ORD-" + System.currentTimeMillis(); }
+ private void validateInventory(String orderId) throws InventoryException { /* ... */ }
+ private void calculatePricing(String orderId) { /* ... */ }
+ private void processPayment(String orderId) throws PaymentException { /* ... */ }
+ private void handleBackorder(String orderId) { /* ... */ }
+ private void checkDatabaseConnection() throws HealthCheckException { /* ... */ }
+ private void checkExternalServices() throws HealthCheckException { /* ... */ }
+
+ // Exception classes
+ private static class InventoryException extends Exception {
+ public InventoryException(String message) { super(message); }
+ }
+ private static class PaymentException extends Exception {
+ public PaymentException(String message) { super(message); }
+ }
+ private static class OrderProcessingException extends RuntimeException {
+ public OrderProcessingException(String message, Throwable cause) { super(message, cause); }
+ }
+ private static class HealthCheckException extends Exception {
+ public HealthCheckException(String message) { super(message); }
+ }
+}
+```
+
+**Bad example:**
+
+```java
+// AVOID: Misusing logging levels
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class BadLevelUsage {
+ private static final Logger logger = LoggerFactory.getLogger(BadLevelUsage.class);
+
+ public void processUser(String userId) {
+ // BAD: Using INFO for debug-level details
+ logger.info("Method entry: processUser with parameter: " + userId);
+ logger.info("Creating new StringBuilder object");
+ logger.info("Checking if userId is null");
+
+ if (userId == null) {
+ // BAD: Using ERROR for a normal business condition
+ logger.error("User ID is null!"); // This might be expected/handled
+ return;
+ }
+
+ // BAD: Using WARN for normal flow information
+ logger.warn("User ID is not null, proceeding with processing");
+
+ try {
+ String result = processUserData(userId);
+
+ // BAD: Using ERROR for successful operations
+ logger.error("Successfully processed user: " + userId + " with result: " + result);
+
+ } catch (Exception e) {
+ // BAD: Using INFO for critical errors
+ logger.info("An error occurred: " + e.getMessage());
+ // Should be ERROR or WARN depending on severity
+ }
+
+ // BAD: Overuse of TRACE for everything
+ logger.trace("About to return from method");
+ logger.trace("Setting return value to void");
+ logger.trace("Method execution completed");
+ // Too much noise - not useful
+ }
+
+ public void anotherBadExample(String data) {
+ // BAD: String concatenation instead of parameterized logging
+ logger.info("Processing data: " + data + " at time: " + System.currentTimeMillis());
+
+ // BAD: Using DEBUG for important business events
+ logger.debug("Payment of $1000 processed for customer ABC123");
+ // This should be INFO - it's an important business event
+
+ // BAD: Using same level for different severity issues
+ logger.warn("Configuration file not found, using defaults"); // This is OK for WARN
+ logger.warn("Database connection lost"); // This should be ERROR
+ logger.warn("User entered invalid email format"); // This might be INFO or DEBUG
+ }
+
+ private String processUserData(String userId) {
+ return "processed-" + userId;
+ }
+}
+```
+
+### Example 3: Adhere to Core Logging Practices
+
+Title: Implement Fundamental Best Practices
+Description: Follow core practices including using parameterized logging, proper exception handling, avoiding sensitive data exposure, and implementing performance considerations for logging operations.
+
+**Good example:**
+
+```java
+// GOOD: Core logging best practices
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.slf4j.MDC;
+import java.util.List;
+import java.util.Objects;
+
+public class SecureTransactionService {
+ private static final Logger logger = LoggerFactory.getLogger(SecureTransactionService.class);
+
+ public void processPayment(String userId, String amount, String cardNumber) {
+ // Set correlation ID for request tracing
+ String correlationId = generateCorrelationId();
+ MDC.put("correlationId", correlationId);
+
+ try {
+ logger.info("Processing payment for user: {}, amount: {}", userId, amount);
+
+ // GOOD: Mask sensitive information before logging
+ String maskedCard = maskCreditCard(cardNumber);
+ logger.debug("Processing payment with card: {}", maskedCard);
+
+ validatePaymentRequest(userId, amount, cardNumber);
+
+ if (logger.isDebugEnabled()) {
+ // GOOD: Guard clause for expensive operations
+ String debugInfo = buildComplexDebugInfo(userId, amount);
+ logger.debug("Payment validation details: {}", debugInfo);
+ }
+
+ processPaymentInternal(userId, amount, cardNumber);
+
+ logger.info("Payment processed successfully for user: {}, correlation: {}",
+ userId, correlationId);
+
+ } catch (ValidationException e) {
+ // GOOD: Log exception with context but don't expose sensitive data
+ logger.warn("Payment validation failed for user: {}, reason: {}, correlation: {}",
+ userId, e.getValidationError(), correlationId, e);
+ throw e;
+
+ } catch (PaymentProcessingException e) {
+ // GOOD: Log critical error with full context
+ logger.error("Payment processing failed for user: {}, amount: {}, correlation: {}",
+ userId, amount, correlationId, e);
+ throw e;
+
+ } catch (Exception e) {
+ // GOOD: Catch unexpected exceptions and log with context
+ logger.error("Unexpected error during payment processing for user: {}, correlation: {}",
+ userId, correlationId, e);
+ throw new PaymentProcessingException("Unexpected error", e);
+
+ } finally {
+ // GOOD: Clean up MDC to prevent memory leaks
+ MDC.remove("correlationId");
+ }
+ }
+
+ public void processLargeDataSet(List<String> dataItems) {
+ logger.info("Processing {} data items", dataItems.size());
+
+ for (int i = 0; i < dataItems.size(); i++) {
+ String item = dataItems.get(i);
+
+ try {
+ processDataItem(item);
+
+ // GOOD: Log progress periodically, not for every item
+ if (i % 1000 == 0) {
+ logger.debug("Processed {} of {} items", i, dataItems.size());
+ }
+
+ } catch (Exception e) {
+ // GOOD: Log error but continue processing
+ logger.warn("Failed to process item at index {}: {}", i, item, e);
+ }
+ }
+
+ logger.info("Completed processing {} data items", dataItems.size());
+ }
+
+ // Utility methods
+ private String maskCreditCard(String cardNumber) {
+ if (cardNumber == null || cardNumber.length() < 8) {
+ return "****";
+ }
+ return cardNumber.substring(0, 4) + "****" + cardNumber.substring(cardNumber.length() - 4);
+ }
+
+ private String generateCorrelationId() {
+ return "TXN-" + System.currentTimeMillis() + "-" + Thread.currentThread().getId();
+ }
+
+ private String buildComplexDebugInfo(String userId, String amount) {
+ // Simulate expensive debug information building
+ return String.format("User: %s, Amount: %s, Timestamp: %d",
+ userId, amount, System.currentTimeMillis());
+ }
+
+ private void validatePaymentRequest(String userId, String amount, String cardNumber)
+ throws ValidationException {
+ if (Objects.isNull(userId) || userId.trim().isEmpty()) {
+ throw new ValidationException("INVALID_USER_ID");
+ }
+ // More validation...
+ }
+
+ private void processPaymentInternal(String userId, String amount, String cardNumber)
+ throws PaymentProcessingException {
+ // Simulate payment processing
+ if ("fail".equals(userId)) {
+ throw new PaymentProcessingException("Payment gateway error");
+ }
+ }
+
+ private void processDataItem(String item) {
+ // Simulate data processing
+ if ("error".equals(item)) {
+ throw new RuntimeException("Processing failed for item: " + item);
+ }
+ }
+
+ // Exception classes
+ private static class ValidationException extends Exception {
+ private final String validationError;
+
+ public ValidationException(String validationError) {
+ super("Validation failed: " + validationError);
+ this.validationError = validationError;
+ }
+
+ public String getValidationError() {
+ return validationError;
+ }
+ }
+
+ private static class PaymentProcessingException extends RuntimeException {
+ public PaymentProcessingException(String message) {
+ super(message);
+ }
+
+ public PaymentProcessingException(String message, Throwable cause) {
+ super(message, cause);
+ }
+ }
+}
+```
+
+**Bad example:**
+
+```java
+// AVOID: Poor logging practices
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import java.util.List;
+
+public class PoorLoggingPractices {
+ private static final Logger logger = LoggerFactory.getLogger(PoorLoggingPractices.class);
+
+ public void processPayment(String userId, String amount, String cardNumber, String ssn) {
+ // BAD: String concatenation instead of parameterized logging
+ logger.info("Processing payment for user: " + userId + " amount: " + amount);
+
+ // BAD: Logging sensitive information directly
+ logger.debug("Credit card: " + cardNumber + ", SSN: " + ssn);
+
+ try {
+ validatePayment(userId, amount);
+
+ // BAD: Expensive operation without guard clause
+ logger.debug("Payment details: " + buildExpensiveDebugString(userId, amount, cardNumber));
+
+ processPaymentTransaction(userId, amount, cardNumber);
+
+ } catch (Exception e) {
+ // BAD: Swallowing exception without proper logging
+ logger.info("Payment failed: " + e.getMessage());
+ // Lost the stack trace and context!
+
+ // BAD: Logging sensitive data in error message
+ logger.error("Payment failed for card: " + cardNumber + " and SSN: " + ssn);
+ }
+ }
+
+ public void processLargeDataSet(List<String> items) {
+ // BAD: Logging every single item in large dataset
+ for (String item : items) {
+ logger.debug("Processing item: " + item); // Will flood logs
+ processItem(item);
+ logger.debug("Completed item: " + item); // Even more noise
+ }
+ }
+
+ public void handleUserLogin(String username, String password) {
+ // BAD: Logging passwords - NEVER do this!
+ logger.debug("User login attempt: username=" + username + ", password=" + password);
+
+ try {
+ authenticateUser(username, password);
+ // BAD: Inconsistent logging format
+ logger.info("User " + username + " logged in successfully");
+ } catch (AuthenticationException e) {
+ // BAD: Using wrong log level and exposing sensitive info
+ logger.error("Login failed for " + username + " with password " + password);
+ }
+ }
+
+ public void performDatabaseOperation(String query, String connectionString) {
+ // BAD: Logging database connection strings (may contain credentials)
+ logger.debug("Executing query: " + query + " on connection: " + connectionString);
+
+ try {
+ executeQuery(query);
+ } catch (SQLException e) {
+ // BAD: Not using parameterized logging with exception
+ logger.error("SQL error: " + e.getMessage() + " for query: " + query);
+ // Stack trace is lost!
+ }
+ }
+
+ // BAD: No try-with-resources or proper cleanup for MDC
+ public void badMDCUsage(String userId) {
+ org.slf4j.MDC.put("userId", userId);
+ logger.info("Processing for user");
+
+ // ... some processing ...
+
+ // BAD: Forgot to clear MDC - memory leak!
+ // MDC.clear() or MDC.remove("userId") is missing
+ }
+
+ // Helper methods with poor exception handling
+ private String buildExpensiveDebugString(String userId, String amount, String cardNumber) {
+ // Simulate expensive operation
+ StringBuilder sb = new StringBuilder();
+ for (int i = 0; i < 10000; i++) {
+ sb.append("Debug info for ").append(userId).append(" ");
+ }
+ return sb.toString();
+ }
+
+ private void validatePayment(String userId, String amount) throws ValidationException {
+ if (userId == null) throw new ValidationException("Invalid user");
+ }
+
+ private void processPaymentTransaction(String userId, String amount, String cardNumber) {
+ // Simulate processing
+ }
+
+ private void processItem(String item) {
+ // Simulate processing
+ }
+
+ private void authenticateUser(String username, String password) throws AuthenticationException {
+ if ("baduser".equals(username)) {
+ throw new AuthenticationException("Invalid credentials");
+ }
+ }
+
+ private void executeQuery(String query) throws SQLException {
+ if (query.contains("DROP")) {
+ throw new SQLException("Invalid query");
+ }
+ }
+
+ // Exception classes
+ private static class ValidationException extends Exception {
+ public ValidationException(String message) { super(message); }
+ }
+ private static class AuthenticationException extends Exception {
+ public AuthenticationException(String message) { super(message); }
+ }
+ private static class SQLException extends Exception {
+ public SQLException(String message) { super(message); }
+ }
+}
+```
+
+### Example 4: Follow Configuration Best Practices
+
+Title: Configure Your Logging Framework Thoughtfully
+Description: Proper configuration is key to effective logging. Use separate configurations per environment, implement different log levels for different packages, and include comprehensive output formats with timestamps, levels, logger names, and thread information.
+
+**Good example:**
+
+```java
+// Configuration shown through usage - actual config would be in logback.xml or log4j2.xml
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.slf4j.MDC;
+
+public class ConfiguredLoggerExample {
+ private static final Logger appLogger = LoggerFactory.getLogger(ConfiguredLoggerExample.class);
+ private static final Logger performanceLogger = LoggerFactory.getLogger("performance." + ConfiguredLoggerExample.class.getName());
+
+ public void performBusinessOperation(String operationId, String userId) {
+ // Using MDC for contextual information
+ MDC.put("operationId", operationId);
+ MDC.put("userId", userId);
+
+ try {
+ long startTime = System.currentTimeMillis();
+ appLogger.info("Starting business operation");
+
+ // Simulate work
+ Thread.sleep(100);
+
+ long duration = System.currentTimeMillis() - startTime;
+ performanceLogger.info("Operation completed in {} ms", duration);
+
+ appLogger.info("Business operation completed successfully");
+
+ } catch (InterruptedException e) {
+ appLogger.warn("Operation interrupted", e);
+ Thread.currentThread().interrupt();
+ } catch (Exception e) {
+ appLogger.error("Business operation failed", e);
+ throw new RuntimeException("Operation failed", e);
+ } finally {
+ // Always clear MDC
+ MDC.clear();
+ }
+ }
+
+ public void demonstrateLogLevels() {
+ // These will be filtered based on configuration
+ appLogger.trace("This is trace level - very detailed");
+ appLogger.debug("This is debug level - detailed for development");
+ appLogger.info("This is info level - important business events");
+ appLogger.warn("This is warn level - potentially harmful situations");
+ appLogger.error("This is error level - critical issues");
+ }
+
+ public static void main(String[] args) {
+ ConfiguredLoggerExample example = new ConfiguredLoggerExample();
+ example.performBusinessOperation("OP123", "user456");
+ example.demonstrateLogLevels();
+ }
+}
+```
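+
+The Java example above relies on an external configuration file. As a minimal `logback.xml` sketch showing the practices from the description (the appender name, pattern, and `LOG_LEVEL` variable are illustrative assumptions, not taken from an actual project):
+
+```xml
+<configuration>
+    <!-- Console output with timestamp, thread, level, logger name, and MDC context -->
+    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
+        <encoder>
+            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} %X{operationId} - %msg%n</pattern>
+        </encoder>
+    </appender>
+
+    <!-- Dedicated level for the "performance" logger hierarchy used above -->
+    <logger name="performance" level="INFO"/>
+
+    <!-- Quieter third-party packages -->
+    <logger name="org.hibernate" level="WARN"/>
+
+    <!-- Environment-specific root level via a variable with a default -->
+    <root level="${LOG_LEVEL:-INFO}">
+        <appender-ref ref="CONSOLE"/>
+    </root>
+</configuration>
+```
+
+Per-environment behavior then comes from setting `LOG_LEVEL` (e.g., `DEBUG` in development, `INFO` in production) rather than editing code.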
+
+**Bad example:**
+
+```java
+// AVOID: Poor configuration consequences
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class BadConfigConsequences {
+ private static final Logger logger = LoggerFactory.getLogger(BadConfigConsequences.class);
+
+ public void performOperation() {
+ // If configuration is bad (everything at TRACE), this floods logs
+ logger.trace("Entering method");
+ logger.trace("Creating variables");
+ logger.trace("About to check condition");
+
+ logger.info("User logged in.");
+ logger.error("Failed to connect to DB.", new RuntimeException("DB connection timeout"));
+
+ // With bad pattern configuration, output might be:
+ // User logged in.
+ // Failed to connect to DB.
+ // Problem: No timestamp, no level, no logger name, no thread info
+ }
+
+ public static void main(String[] args) {
+ BadConfigConsequences example = new BadConfigConsequences();
+ example.performOperation();
+ }
+}
+```
+
+### Example 5: Implement Secure Logging Practices
+
+Title: Ensure Logs Do Not Compromise Security
+Description: Actively mask or filter sensitive data, control access to log files, use secure transmission protocols, and comply with data protection regulations when logging information.
+
+**Good example:**
+
+```java
+// GOOD: Secure logging practices
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import java.util.regex.Pattern;
+import java.util.Objects;
+
+public class SecureLoggingService {
+ private static final Logger logger = LoggerFactory.getLogger(SecureLoggingService.class);
+
+ // Patterns for detecting sensitive data
+ private static final Pattern CREDIT_CARD_PATTERN = Pattern.compile("\\d{4}[\\s-]?\\d{4}[\\s-]?\\d{4}[\\s-]?\\d{4}");
+ private static final Pattern SSN_PATTERN = Pattern.compile("\\d{3}-\\d{2}-\\d{4}");
+ private static final Pattern EMAIL_PATTERN = Pattern.compile("[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}");
+
+ public void processUserRegistration(String username, String email, String ssn, String creditCard) {
+ // GOOD: Sanitize all user inputs before logging
+ String sanitizedUsername = sanitizeForLogging(username);
+ String maskedEmail = maskEmail(email);
+ String hashedSSN = hashSensitiveData(ssn);
+ String maskedCard = maskCreditCard(creditCard);
+
+ logger.info("Processing registration for user: {}, email: {}", sanitizedUsername, maskedEmail);
+ logger.debug("User data validation - SSN hash: {}, card mask: {}", hashedSSN, maskedCard);
+
+ try {
+ validateUserData(username, email, ssn, creditCard);
+ createUserAccount(username, email);
+
+ // GOOD: Log successful operation without sensitive data
+ logger.info("User registration completed successfully for: {}", sanitizedUsername);
+
+ // GOOD: Security event logging for audit trail
+ logger.info("SECURITY_EVENT: New user registration - username: {}, timestamp: {}",
+ sanitizedUsername, System.currentTimeMillis());
+
+ } catch (ValidationException e) {
+ // GOOD: Log validation failure without exposing sensitive validation details
+ logger.warn("User registration failed validation for user: {}, error code: {}",
+ sanitizedUsername, e.getErrorCode());
+
+ } catch (SecurityException e) {
+ // GOOD: Security incidents logged with careful information control
+ logger.error("SECURITY_ALERT: Registration security violation for user: {}, violation type: {}",
+ sanitizeForLogging(username), e.getViolationType());
+ // Don't log the actual violation details that might help attackers
+
+ } catch (Exception e) {
+ // GOOD: Generic error logging without sensitive context
+ logger.error("User registration failed for user: {} due to system error",
+ sanitizedUsername, e);
+ }
+ }
+
+ public void processPayment(String userId, PaymentRequest request) {
+ String correlationId = generateSecureCorrelationId();
+
+ try {
+ // GOOD: Log business operation with masked sensitive data
+ logger.info("Processing payment - user: {}, amount: {}, correlation: {}",
+ userId, request.getAmount(), correlationId);
+
+ // GOOD: Validate and log without exposing card details
+ validatePaymentMethod(request.getPaymentMethod());
+ logger.debug("Payment method validated for correlation: {}", correlationId);
+
+ ChargeResult result = processCharge(request);
+
+ // GOOD: Log success with minimal necessary information
+ logger.info("Payment processed successfully - correlation: {}, transaction: {}",
+ correlationId, result.getTransactionId());
+
+ // GOOD: Business intelligence logging (aggregated, non-sensitive)
+ logger.info("METRICS: Payment processed - amount: {}, merchant: {}, timestamp: {}",
+ request.getAmount(), request.getMerchantId(), System.currentTimeMillis());
+
+ } catch (FraudDetectedException e) {
+ // GOOD: Security event with controlled information
+ logger.error("SECURITY_ALERT: Fraud detected - correlation: {}, risk_score: {}, user: {}",
+ correlationId, e.getRiskScore(), userId);
+ // Don't log fraud detection details that could help bypass detection
+
+ } catch (PaymentException e) {
+ // GOOD: Business error with safe context
+ logger.error("Payment failed - correlation: {}, error_type: {}",
+ correlationId, e.getErrorType(), e);
+ }
+ }
+
+ // Secure data masking utilities
+ private String sanitizeForLogging(String input) {
+ if (input == null) return "[null]";
+
+ // Remove potentially dangerous characters for log injection protection
+ String sanitized = input.replaceAll("[\r\n\t]", "_")
+ .replaceAll("[<>\"'&]", "*");
+
+ // Limit length to prevent log flooding
+ if (sanitized.length() > 100) {
+ sanitized = sanitized.substring(0, 97) + "...";
+ }
+
+ return sanitized;
+ }
+
+ private String maskEmail(String email) {
+ if (email == null || !EMAIL_PATTERN.matcher(email).matches()) {
+ return "[INVALID_EMAIL]";
+ }
+
+ int atIndex = email.indexOf('@');
+ if (atIndex < 2) {
+ return "**@" + email.substring(atIndex + 1);
+ }
+
+ return email.substring(0, 2) + "***@" + email.substring(atIndex + 1);
+ }
+
+ private String maskCreditCard(String cardNumber) {
+ if (cardNumber == null || cardNumber.length() < 8) {
+ return "[INVALID_CARD]";
+ }
+
+ String digitsOnly = cardNumber.replaceAll("[^\\d]", "");
+ if (digitsOnly.length() < 8) {
+ return "[INVALID_CARD]";
+ }
+
+ return digitsOnly.substring(0, 4) + "****" +
+ digitsOnly.substring(digitsOnly.length() - 4);
+ }
+
+ private String hashSensitiveData(String data) {
+ if (data == null) return "[null]";
+
+ // Placeholder for audit correlation only: hashCode() is not a cryptographic hash.
+ // Use a real secure hash (e.g., SHA-256) in production.
+ return "HASH_" + Math.abs(data.hashCode());
+ }
+
+ private String generateSecureCorrelationId() {
+ return "CORR_" + System.currentTimeMillis() + "_" +
+ Thread.currentThread().getId();
+ }
+
+ // Mock classes and methods
+ private void validateUserData(String username, String email, String ssn, String creditCard)
+ throws ValidationException, SecurityException {
+ if (username == null || username.trim().isEmpty()) {
+ throw new ValidationException("INVALID_USERNAME");
+ }
+ }
+
+ private void createUserAccount(String username, String email) {
+ // Account creation logic
+ }
+
+ private void validatePaymentMethod(String paymentMethod) {
+ // Payment validation logic
+ }
+
+ private ChargeResult processCharge(PaymentRequest request) throws PaymentException, FraudDetectedException {
+ // Payment processing logic
+ return new ChargeResult("TXN_" + System.currentTimeMillis());
+ }
+
+ // Mock classes
+ private static class PaymentRequest {
+ private String amount = "100.00";
+ private String paymentMethod = "card";
+ private String merchantId = "MERCHANT_123";
+
+ public String getAmount() { return amount; }
+ public String getPaymentMethod() { return paymentMethod; }
+ public String getMerchantId() { return merchantId; }
+ }
+
+ private static class ChargeResult {
+ private final String transactionId;
+ public ChargeResult(String transactionId) { this.transactionId = transactionId; }
+ public String getTransactionId() { return transactionId; }
+ }
+
+ private static class ValidationException extends Exception {
+ private final String errorCode;
+ public ValidationException(String errorCode) {
+ super("Validation failed: " + errorCode);
+ this.errorCode = errorCode;
+ }
+ public String getErrorCode() { return errorCode; }
+ }
+
+ private static class SecurityException extends Exception {
+ private final String violationType;
+ public SecurityException(String violationType) {
+ super("Security violation: " + violationType);
+ this.violationType = violationType;
+ }
+ public String getViolationType() { return violationType; }
+ }
+
+ private static class FraudDetectedException extends Exception {
+ private final int riskScore;
+ public FraudDetectedException(int riskScore) {
+ super("Fraud detected with risk score: " + riskScore);
+ this.riskScore = riskScore;
+ }
+ public int getRiskScore() { return riskScore; }
+ }
+
+ private static class PaymentException extends Exception {
+ private final String errorType;
+ public PaymentException(String errorType) {
+ super("Payment error: " + errorType);
+ this.errorType = errorType;
+ }
+ public String getErrorType() { return errorType; }
+ }
+}
+```
+
+**Bad example:**
+
+```java
+// AVOID: Insecure logging practices
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class InsecureLoggingService {
+ private static final Logger logger = LoggerFactory.getLogger(InsecureLoggingService.class);
+
+ public void processUserRegistration(String username, String password, String email,
+ String ssn, String creditCard) {
+ // BAD: Logging passwords - NEVER do this!
+ logger.debug("User registration: username={}, password={}, email={}",
+ username, password, email);
+
+ // BAD: Logging full SSN and credit card numbers
+ logger.info("Processing registration for SSN: {} with credit card: {}", ssn, creditCard);
+
+ try {
+ validateUser(username, password, email, ssn, creditCard);
+ createAccount(username, password);
+
+ } catch (ValidationException e) {
+ // BAD: Exposing sensitive validation details
+ logger.error("Validation failed for user {} with password {} - details: {}",
+ username, password, e.getValidationDetails());
+
+ } catch (SecurityException e) {
+ // BAD: Logging detailed security information that could help attackers
+ logger.error("Security violation for user {}: attack vector: {}, payload: {}, " +
+ "system_path: {}, internal_error: {}",
+ username, e.getAttackVector(), e.getPayload(),
+ e.getSystemPath(), e.getInternalError());
+ }
+ }
+
+ public void processPayment(String userId, String cardNumber, String cvv, String amount) {
+ // BAD: Logging complete payment card information
+ logger.info("Processing payment: user={}, card={}, cvv={}, amount={}",
+ userId, cardNumber, cvv, amount);
+
+ try {
+ validateCard(cardNumber, cvv);
+
+ // BAD: Logging sensitive database connection info
+ logger.debug("Connecting to payment DB with connection string: {}",
+ getDatabaseConnectionString());
+
+ processTransaction(cardNumber, cvv, amount);
+
+ } catch (FraudException e) {
+ // BAD: Exposing fraud detection algorithms and thresholds
+ logger.error("Fraud detected for card {}: algorithm={}, threshold={}, " +
+ "risk_factors={}, detection_rules={}",
+ cardNumber, e.getAlgorithm(), e.getThreshold(),
+ e.getRiskFactors(), e.getDetectionRules());
+
+ } catch (PaymentException e) {
+ // BAD: Including full stack trace with sensitive system information
+ logger.error("Payment failed for card {} with full system details", cardNumber, e);
+ }
+ }
+
+ public void handleSystemError(String operation, Exception e) {
+ // BAD: Logging system internals that could help attackers
+ logger.error("System error in operation {}: java_version={}, system_properties={}, " +
+ "environment_variables={}, file_paths={}",
+ operation, System.getProperty("java.version"),
+ System.getProperties(), System.getenv(),
+ System.getProperty("user.dir"));
+
+ // BAD: Full stack trace might expose sensitive system information
+ e.printStackTrace(); // Goes to stderr, not controlled by logging config
+ }
+
+ public void logUserActivity(String userId, String sessionId, String ipAddress,
+ String userAgent, String requestData) {
+ // BAD: Logging PII and potentially sensitive request data
+ logger.info("User activity: user={}, session={}, ip={}, userAgent={}, request={}",
+ userId, sessionId, ipAddress, userAgent, requestData);
+
+ // BAD: No data retention consideration
+ // This could violate GDPR, CCPA, or other privacy regulations
+ }
+
+ public void authenticateUser(String username, String password, String loginToken) {
+ // BAD: Logging authentication credentials
+ logger.debug("Authentication attempt: username={}, password={}, token={}",
+ username, password, loginToken);
+
+ try {
+ boolean success = authenticate(username, password, loginToken);
+ if (!success) {
+ // BAD: Detailed failure information that could help brute force attacks
+ logger.warn("Login failed for {} - password_attempts={}, last_success={}, " +
+ "account_locked={}, failed_reasons={}",
+ username, getPasswordAttempts(username),
+ getLastSuccessfulLogin(username), isAccountLocked(username),
+ getFailureReasons(username));
+ }
+ } catch (Exception e) {
+ // BAD: Logging authentication errors with sensitive context
+ logger.error("Authentication system error for user {} with credentials: " +
+ "password={}, token={}", username, password, loginToken, e);
+ }
+ }
+
+ // Mock methods with security issues
+ private String getDatabaseConnectionString() {
+ return "jdbc:mysql://prod-db:3306/payments?user=admin&password=secret123";
+ }
+
+ private void validateUser(String username, String password, String email, String ssn, String creditCard)
+ throws ValidationException, SecurityException { /* ... */ }
+ private void createAccount(String username, String password) { /* ... */ }
+ private void validateCard(String cardNumber, String cvv) throws FraudException { /* ... */ }
+ private void processTransaction(String cardNumber, String cvv, String amount)
+ throws PaymentException { /* ... */ }
+ private boolean authenticate(String username, String password, String token) { return true; }
+ private int getPasswordAttempts(String username) { return 3; }
+ private long getLastSuccessfulLogin(String username) { return System.currentTimeMillis(); }
+ private boolean isAccountLocked(String username) { return false; }
+ private String getFailureReasons(String username) { return "invalid_password"; }
+
+ // Exception classes with sensitive information exposure
+ private static class ValidationException extends Exception {
+ private final String validationDetails;
+ public ValidationException(String details) {
+ this.validationDetails = details;
+ }
+ public String getValidationDetails() { return validationDetails; }
+ }
+
+ private static class SecurityException extends Exception {
+ private final String attackVector, payload, systemPath, internalError;
+ public SecurityException(String attackVector, String payload, String systemPath, String internalError) {
+ this.attackVector = attackVector;
+ this.payload = payload;
+ this.systemPath = systemPath;
+ this.internalError = internalError;
+ }
+ public String getAttackVector() { return attackVector; }
+ public String getPayload() { return payload; }
+ public String getSystemPath() { return systemPath; }
+ public String getInternalError() { return internalError; }
+ }
+
+ private static class FraudException extends Exception {
+ private final String algorithm, threshold, riskFactors, detectionRules;
+ public FraudException(String algorithm, String threshold, String riskFactors, String detectionRules) {
+ this.algorithm = algorithm;
+ this.threshold = threshold;
+ this.riskFactors = riskFactors;
+ this.detectionRules = detectionRules;
+ }
+ public String getAlgorithm() { return algorithm; }
+ public String getThreshold() { return threshold; }
+ public String getRiskFactors() { return riskFactors; }
+ public String getDetectionRules() { return detectionRules; }
+ }
+
+ private static class PaymentException extends Exception {
+ public PaymentException(String message) { super(message); }
+ }
+}
+```
+
+### Example 6: Establish Effective Log Monitoring and Alerting
+
+Title: Implement Log Aggregation, Monitoring, and Alerting
+Description: Set up centralized log aggregation, implement alerts based on log patterns, use structured logging for querying, and regularly analyze logs to identify trends and issues.
+
+**Good example:**
+
+```java
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.slf4j.MDC;
+import java.util.Objects;
+import java.util.UUID;
+
+public class MonitoredService {
+ private static final Logger logger = LoggerFactory.getLogger(MonitoredService.class);
+
+ public void handleApiRequest(String endpoint, String payload) {
+ String requestId = UUID.randomUUID().toString();
+ MDC.put("requestId", requestId);
+ MDC.put("endpoint", endpoint);
+
+ try {
+ logger.info("API request received");
+
+ if (Objects.isNull(payload) || payload.isEmpty()) {
+ // Structured, pattern-matchable marker an alerting rule can key on
+ logger.warn("ALERT_CANDIDATE: Missing or empty payload for endpoint: {}", endpoint);
+ return;
+ }
+
+ // ... process the request ...
+ logger.info("API request processed successfully");
+
+ } catch (Exception e) {
+ // ERROR entries feed aggregated dashboards and can trigger alerts
+ logger.error("API request failed", e);
+ } finally {
+ // Always clear MDC to avoid leaking context across threads
+ MDC.clear();
+ }
+ }
+
+ public void performHealthCheck() {
+ // Periodic, structured health signal for monitoring systems
+ logger.info("HEALTH_CHECK: status=UP, timestamp={}", System.currentTimeMillis());
+ }
+
+ public static void main(String[] args) {
+ MonitoredService service = new MonitoredService();
+
+ for (int i = 0; i < 20; i++) {
+ service.handleApiRequest("/critical_op", "some_data");
+ }
+
+ service.performHealthCheck();
+ }
+}
+```
+
+**Bad example:**
+
+```java
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import java.util.Objects;
+
+public class UnmonitoredService {
+ private static final Logger logger = LoggerFactory.getLogger(UnmonitoredService.class);
+
+ public void doWork() {
+ try {
+ logger.info("Work started.");
+ // ... some logic ...
+ if (Math.random() > 0.5) {
+ throw new Exception("Something went wrong randomly!");
+ }
+ logger.info("Work finished.");
+ } catch (Exception e) {
+ // Logs an error, but if logs are only on local disk and not monitored,
+ // this critical issue might go unnoticed for a long time.
+ logger.error("Error during work", e);
+ }
+ }
+
+ public static void main(String[] args) {
+ UnmonitoredService service = new UnmonitoredService();
+ for (int i = 0; i < 5; i++) {
+ service.doWork();
+ }
+ // Problem: Logs are likely just going to console or a local file.
+ // - No central aggregation: Difficult to search across instances or time.
+ // - No alerts: Critical errors might be missed until users report them.
+ // - No analysis: Trends or recurring non-fatal issues are hard to spot.
+ }
+}
+```
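A pattern-based alert of the kind described above can be sketched as a sliding-window counter over ERROR events. In practice this logic usually lives in the log aggregator (for example as an alert rule over aggregated logs), but the core idea is simple; the threshold and window values below are illustrative assumptions:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ErrorRateAlertSketch {
    private final Deque<Long> errorTimestamps = new ArrayDeque<>();
    private final int threshold;
    private final long windowMillis;

    ErrorRateAlertSketch(int threshold, long windowMillis) {
        this.threshold = threshold;
        this.windowMillis = windowMillis;
    }

    /** Record an ERROR log event; returns true when the alert should fire. */
    boolean onError(long nowMillis) {
        errorTimestamps.addLast(nowMillis);
        // Drop events that fell out of the sliding window
        while (!errorTimestamps.isEmpty()
                && nowMillis - errorTimestamps.peekFirst() > windowMillis) {
            errorTimestamps.removeFirst();
        }
        return errorTimestamps.size() >= threshold;
    }

    public static void main(String[] args) {
        // Fire when 5 or more ERRORs occur within one minute
        ErrorRateAlertSketch alert = new ErrorRateAlertSketch(5, 60_000);
        boolean fired = false;
        for (int i = 0; i < 5; i++) {
            fired = alert.onError(1_000L * i); // five errors, one second apart
        }
        System.out.println(fired ? "ALERT: error rate exceeded" : "ok");
    }
}
```

Centralized aggregation makes this kind of rule trivial to express; without it, the `UnmonitoredService` pattern above leaves such spikes invisible.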
+
+### Example 7: Incorporate Logging in Testing
+
+Title: Validate Logging Behavior and Impact During Testing
+Description: Ensure logging works as expected and doesn't negatively impact the application. Assert that specific log messages are generated, verify log formats, test different logging levels, and check performance impact.
+
+**Good example:**
+
+```java
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import java.util.Objects;
+
+// ---- System Under Test ----
+class ImportantService {
+ private static final Logger logger = LoggerFactory.getLogger(ImportantService.class);
+
+ public void processImportantData(String dataId) {
+ if (Objects.isNull(dataId)) {
+ logger.error("Data ID is null, cannot process.");
+ throw new IllegalArgumentException("Data ID cannot be null");
+ }
+ if (dataId.startsWith("invalid_")) {
+ logger.warn("Received potentially invalid data ID: {}", dataId);
+ // continue processing with caution
+ }
+ logger.info("Processing data for ID: {}", dataId);
+ // ... actual processing ...
+ }
+}
+
+// ---- Test Example (Conceptual - would use JUnit/TestNG) ----
+public class LoggingTestExample {
+
+ // This would be actual test code using testing frameworks
+ public void demonstrateLoggingTests() {
+ ImportantService service = new ImportantService();
+
+ // Test 1: Verify ERROR log for null input
+ try {
+ service.processImportantData(null);
+ // Should not reach here
+ } catch (IllegalArgumentException e) {
+ // In real test, would capture logs and assert:
+ // - ERROR level log contains "Data ID is null"
+ // - Only one ERROR log entry
+ System.out.println("Test 1 passed: ERROR log generated for null input");
+ }
+
+ // Test 2: Verify WARN log for invalid input
+ service.processImportantData("invalid_XYZ");
+ // In real test, would assert:
+ // - WARN level log contains "Received potentially invalid data ID: invalid_XYZ"
+ // - INFO level log contains "Processing data for ID: invalid_XYZ"
+ System.out.println("Test 2 passed: WARN and INFO logs generated for invalid input");
+
+ // Test 3: Verify INFO log for valid input
+ service.processImportantData("valid_123");
+ // In real test, would assert:
+ // - Only INFO level log contains "Processing data for ID: valid_123"
+ // - No WARN or ERROR logs
+ System.out.println("Test 3 passed: INFO log generated for valid input");
+ }
+
+ public void performanceTestExample() {
+ ImportantService service = new ImportantService();
+
+ // Performance test: measure logging overhead
+ long startTime = System.currentTimeMillis();
+
+ for (int i = 0; i < 1000; i++) {
+ service.processImportantData("test_" + i);
+ }
+
+ long duration = System.currentTimeMillis() - startTime;
+ System.out.println("Performance test: 1000 operations completed in " + duration + "ms");
+
+ // In real test, would assert that logging overhead is within acceptable limits
+ // e.g., assert duration < 1000; // Less than 1ms per operation including logging
+ }
+
+ public static void main(String[] args) {
+ LoggingTestExample test = new LoggingTestExample();
+ test.demonstrateLoggingTests();
+ test.performanceTestExample();
+
+ System.out.println("In real implementation, use JUnit/TestNG with LogCapture utilities");
+ System.out.println("Example dependencies: ch.qos.logback.classic.Logger with ListAppender");
+ }
+}
+```
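The capture technique the comments describe can be sketched with only the JDK, using a custom `java.util.logging` Handler as a stand-in for Logback's `ListAppender` (the logger name and message here are illustrative, not taken from the service above):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class LogCaptureSketch {
    public static void main(String[] args) {
        Logger logger = Logger.getLogger("demo.ImportantService");
        List<LogRecord> captured = new ArrayList<>();
        Handler capture = new Handler() { // in-memory appender, like Logback's ListAppender
            @Override public void publish(LogRecord record) { captured.add(record); }
            @Override public void flush() { }
            @Override public void close() { }
        };
        logger.addHandler(capture);
        logger.setUseParentHandlers(false); // keep console output quiet during the test

        // Simulate the service logging a suspicious input
        logger.warning("Received potentially invalid data ID: invalid_XYZ");

        // Assert on captured events instead of parsing console output
        boolean warned = captured.stream().anyMatch(r ->
                r.getLevel() == Level.WARNING
                && r.getMessage().contains("invalid_XYZ"));
        System.out.println(warned ? "WARN captured" : "WARN missing");
        logger.removeHandler(capture);
    }
}
```

With SLF4J/Logback, the same shape applies: attach a `ListAppender<ILoggingEvent>` to the class's `ch.qos.logback.classic.Logger` and assert on the captured list.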
+
+**Bad example:**
+
+```java
+// Code that is hard to test for logging behavior
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import java.util.Objects;
+
+public class UntestableLogging {
+ private static final Logger logger = LoggerFactory.getLogger(UntestableLogging.class);
+
+ public void complexOperation(String input) {
+ // ... many lines of code ...
+ if (input.equals("problem")) {
+ // Log message is deeply embedded, possibly conditional, hard to trigger specifically
+ // and verify without significant effort or refactoring.
+ logger.warn("A specific problem occurred with input: {}", input);
+ }
+ // ... more code ...
+ // No clear separation of concerns, making it hard to isolate and test logging.
+ }
+
+ public void methodWithSideEffects(String data) {
+ // Expensive logging operation always executed
+ logger.debug("Processing data: {}", buildExpensiveLogMessage(data));
+
+ // No way to test if logging is impacting performance
+ // No way to verify log content without running expensive operations
+ }
+
+ private String buildExpensiveLogMessage(String data) {
+ // Simulate expensive operation that shouldn't run in production
+ StringBuilder sb = new StringBuilder();
+ for (int i = 0; i < 10000; i++) {
+ sb.append("Debug info: ").append(data).append(" iteration: ").append(i);
+ }
+ return sb.toString();
+ }
+
+ public static void main(String[] args) {
+ UntestableLogging ul = new UntestableLogging();
+ ul.complexOperation("test");
+ ul.complexOperation("problem");
+ ul.methodWithSideEffects("example");
+
+ // Problem: Without proper test utilities or testable design,
+ // verifying that the WARN message is logged correctly (and only when expected)
+ // is difficult. Developers might skip testing logging, leading to unverified log output.
+ }
+}
+```
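The always-executed expensive message above has a standard fix: lazy evaluation, so the message is built only when the level is actually enabled. A JDK-only sketch using `java.util.logging`'s `Supplier` overload (SLF4J achieves the same effect with an `isDebugEnabled()` guard or its fluent API):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LazyLoggingSketch {
    private static final Logger logger = Logger.getLogger(LazyLoggingSketch.class.getName());

    static String buildExpensiveLogMessage(String data) {
        System.out.println("expensive build ran"); // side effect proves evaluation
        return "Debug info: " + data;
    }

    public static void main(String[] args) {
        logger.setLevel(Level.INFO); // FINE (debug) disabled, as in production

        // The Supplier is invoked only if FINE is enabled, so the
        // expensive message build is skipped entirely at INFO level
        logger.fine(() -> buildExpensiveLogMessage("example"));

        System.out.println("done");
    }
}
```

Because the supplier is never invoked here, the 10,000-iteration string build from the bad example would cost nothing in production.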
+
+## Output Format
+
+- **ANALYZE** Java code to identify specific logging issues and categorize them by impact (CRITICAL, SECURITY, PERFORMANCE, MAINTAINABILITY, OBSERVABILITY) and logging area (framework selection, log levels, security, configuration, performance)
+- **CATEGORIZE** logging improvements found: Framework Issues (inconsistent frameworks vs standardized SLF4J, outdated implementations vs modern logging), Log Level Problems (inappropriate levels vs proper level usage, missing contextual information vs structured logging), Security Issues (sensitive data exposure vs secure logging practices, plaintext credentials vs masked data), Configuration Problems (hardcoded settings vs environment-specific configuration, missing structured formats vs JSON/structured output), and Performance Issues (synchronous logging vs asynchronous patterns, excessive logging overhead vs optimized performance, string concatenation vs parameterized messages)
+- **APPLY** logging best practices directly by implementing the most appropriate improvements for each identified issue: Standardize on SLF4J facade with appropriate implementation (Logback/Log4j2), implement proper log level usage with contextual information, secure sensitive data through masking and filtering, configure environment-specific settings with external configuration files, optimize performance through asynchronous logging and parameterized messages, and establish structured logging with JSON output for better parsing and analysis
+- **IMPLEMENT** comprehensive logging refactoring using proven patterns: Migrate to SLF4J facade with consistent logger declarations, establish proper log level hierarchy (ERROR for failures, WARN for degraded service, INFO for business events, DEBUG for troubleshooting), implement secure logging practices with data masking and filtering, configure environment-specific logging with external properties, optimize performance through async appenders and lazy evaluation, and integrate structured logging with correlation IDs and contextual metadata
+- **REFACTOR** code systematically following the logging improvement roadmap: First standardize logging framework usage through SLF4J adoption, then optimize log level usage and add proper contextual information, implement security measures for sensitive data protection, configure environment-specific logging settings with external configuration, optimize logging performance through asynchronous patterns and parameterized messages, and establish comprehensive monitoring and alerting integration
+- **EXPLAIN** the applied logging improvements and their benefits: Framework standardization benefits through SLF4J facade adoption and consistent logger usage, observability enhancements via proper log levels and structured output, security improvements through sensitive data masking and secure logging practices, performance optimizations from asynchronous logging and parameterized messages, and operational benefits from environment-specific configuration and monitoring integration
+- **VALIDATE** that all applied logging refactoring compiles successfully, maintains existing functionality, eliminates security vulnerabilities, follows logging best practices, and achieves the intended observability and performance improvements through comprehensive testing and verification
+
+## Safeguards
+
+- **BLOCKING SAFETY CHECK**: ALWAYS run `./mvnw compile` before ANY logging recommendations
+- **CRITICAL VALIDATION**: Execute `./mvnw clean verify` to ensure all tests pass after logging changes
+- **SECURITY VERIFICATION**: Validate that no sensitive data (passwords, PII, tokens) is logged directly
+- **PERFORMANCE MONITORING**: Ensure logging configurations don't introduce significant performance overhead
+- **CONFIGURATION VALIDATION**: Verify logging configuration files are syntactically correct and environment-appropriate
+- **ROLLBACK READINESS**: Ensure all logging changes can be easily reverted without system disruption
+- **INCREMENTAL SAFETY**: Apply logging improvements incrementally, validating after each modification
+- **LOG LEVEL VERIFICATION**: Confirm log levels are appropriate for production environments (avoid DEBUG/TRACE in prod)
+- **DEPENDENCY COMPATIBILITY**: Verify logging framework dependencies don't conflict with existing project dependencies
+- **OPERATIONAL CONTINUITY**: Ensure logging changes don't break existing monitoring, alerting, or log aggregation systems
\ No newline at end of file
diff --git a/skills/144-java-data-oriented-programming/SKILL.md b/skills/144-java-data-oriented-programming/SKILL.md
index 1f4e4c70..e583803b 100644
--- a/skills/144-java-data-oriented-programming/SKILL.md
+++ b/skills/144-java-data-oriented-programming/SKILL.md
@@ -1,21 +1,35 @@
---
name: 144-java-data-oriented-programming
description: Use when you need to apply data-oriented programming best practices in Java — including separating code (behavior) from data structures using records, designing immutable data with pure transformation functions, keeping data flat and denormalized with ID-based references, starting with generic data structures converting to specific types when needed, ensuring data integrity through pure validation functions, and creating flexible generic data access layers. Part of the skills-for-java project
+license: Apache-2.0
metadata:
author: Juan Antonio Breña Moral
- version: 0.12.0-SNAPSHOT
+ version: 0.13.0-SNAPSHOT
---
# Java Data-Oriented Programming Best Practices
-Identify and apply data-oriented programming best practices in Java to improve code clarity, maintainability, and predictability by strictly separating data structures from behavior and ensuring all data transformations are explicit, pure, and traceable.
+Apply data-oriented programming in Java: separate data from behavior with records, use immutable data structures, pure functions for transformations, flat denormalized structures with ID references, generic-to-specific type conversion when needed, pure validation functions, and flexible generic data access layers. All transformations should be explicit, traceable, and composed of clear pure functional steps.
-**Core areas:** Records for immutable data carriers over mutable POJOs, data-behavior separation with pure static utility classes holding operations, pure functions for data transformation that depend only on inputs and produce no side effects, flat denormalized data structures with ID-based references over deep nesting, generic `Map` representations for dynamic schemas converted to specific types when needed, `Optional` and custom result types in validation functions instead of throwing exceptions, generic `DataStore` interfaces for flexible data access layers, composable `Function` pipelines using `andThen` for explicit and traceable multi-step transformations.
+**What is covered in this Skill?**
-**Prerequisites:** Run `./mvnw compile` before applying any changes. If compilation fails, **stop immediately** — do not proceed until the project compiles successfully.
+- Separation of concerns: data structures (records, POJOs) vs behavior (utility classes, services)
+- Immutability: records, final fields, transformations produce new instances
+- Pure data transformations: functions depending only on inputs, no side effects
+- Flat and denormalized data: ID references instead of deep nesting
+- Generic until specific: Map<String,Object> for dynamic data, convert to records when processing
+- Data integrity: pure validation functions returning validation results
+- Flexible generic data access: generic CRUD interfaces, interchangeable implementations
-**Multi-step scope:** Step 1 validates the project compiles. Step 2 categorizes data-oriented programming opportunities by impact (CRITICAL, MAINTAINABILITY, PERFORMANCE, CLARITY) and area (data-behavior separation, immutability violations, impure functions, nested data structures, generic vs specific types, validation approaches). Step 3 separates data from behavior by extracting Records as pure data carriers and moving operations to static utility classes. Step 4 establishes immutability throughout the codebase using Records, `List.copyOf()`, and `Collections.unmodifiableList()`. Step 5 extracts pure static functions for all data processing and transformation operations to eliminate side effects. Step 6 flattens complex nested object hierarchies using ID references and lookup patterns in flat `Map`-based stores. Step 7 implements generic and flexible data types starting with `Map` for dynamic schemas, converting to specific record types via explicit converter classes with validation. Step 8 establishes validation boundaries using pure functions that return `Optional` error messages or result types instead of throwing exceptions for business validation. Step 9 designs flexible generic `DataStore` interfaces to decouple access logic from storage implementations. Step 10 creates composable data transformation pipelines using `Function.andThen()` with explicit tracing at each step. Step 11 runs `./mvnw clean verify` to confirm all tests pass after changes.
+**Scope:** The reference is organized by examples (good/bad code patterns) for each core area. Apply recommendations based on applicable examples.
-**Before applying changes:** Read the reference for detailed good/bad examples, constraints, and safeguards for each data-oriented programming pattern.
+## Constraints
+
+Before applying any data-oriented programming recommendations, ensure the project compiles. Compilation failure is a blocking condition. After applying improvements, run full verification.
+
+- **MANDATORY**: Run `./mvnw compile` or `mvn compile` before applying any change
+- **SAFETY**: If compilation fails, stop immediately — do not proceed until resolved
+- **VERIFY**: Run `./mvnw clean verify` or `mvn clean verify` after applying improvements
+- **BEFORE APPLYING**: Read the reference for detailed good/bad examples, constraints, and safeguards for each data-oriented programming pattern
## Reference
diff --git a/skills/144-java-data-oriented-programming/references/144-java-data-oriented-programming.md b/skills/144-java-data-oriented-programming/references/144-java-data-oriented-programming.md
index d5e88964..60a8bef3 100644
--- a/skills/144-java-data-oriented-programming/references/144-java-data-oriented-programming.md
+++ b/skills/144-java-data-oriented-programming/references/144-java-data-oriented-programming.md
@@ -4,7 +4,7 @@ description: Use when you need to apply data-oriented programming best practices
license: Apache-2.0
metadata:
author: Juan Antonio Breña Moral
- version: 0.12.0-SNAPSHOT
+ version: 0.13.0-SNAPSHOT
---
# Java Data-Oriented Programming Best Practices
diff --git a/skills/151-java-performance-jmeter/SKILL.md b/skills/151-java-performance-jmeter/SKILL.md
new file mode 100644
index 00000000..c0c8d119
--- /dev/null
+++ b/skills/151-java-performance-jmeter/SKILL.md
@@ -0,0 +1,34 @@
+---
+name: 151-java-performance-jmeter
+description: Use when you need to set up JMeter performance testing for a Java project — including creating the run-jmeter.sh script from the exact template, configuring load tests with loops, threads, and ramp-up, or running performance tests from the project root with custom or default settings. Part of the skills-for-java project
+license: Apache-2.0
+metadata:
+ author: Juan Antonio Breña Moral
+ version: 0.13.0-SNAPSHOT
+---
+# Run performance tests based on JMeter
+
+Provide a complete JMeter performance testing solution by creating the run-jmeter.sh script from the exact template, making it executable, and configuring the project structure for load testing. Supports custom loops, threads, ramp-up, and environment variable overrides.
+
+**What is covered in this Skill?**
+
+- Create run-jmeter.sh in project root from the exact template (no modifications)
+- Project structure: src/test/resources/jmeter/load-test.jmx, target/ for results
+- Script options: -l (loops), -t (threads), -r (ramp-up), -g (GUI), -h (help)
+- Environment variables: JMETER_LOOPS, JMETER_THREADS, JMETER_RAMP_UP
+- Verify JMeter is installed and available before proceeding
+
+**Scope:** Copy the script template verbatim. Do not modify, interpret, or enhance the template content.
+
+## Constraints
+
+JMeter must be installed and available in PATH. If not available, show a message and exit. Use only the exact template for the run-jmeter.sh script.
+
+- **PREREQUISITE**: Verify JMeter is installed and accessible via `jmeter --version` before creating the script
+- **CRITICAL**: Copy the run-jmeter.sh template exactly — do not modify, interpret, or enhance
+- **PERMISSION**: Make the script executable with `chmod +x run-jmeter.sh`
+- **BEFORE APPLYING**: Read the reference for the exact script template and usage instructions
+
+## Reference
+
+For detailed guidance, examples, and constraints, see [references/151-java-performance-jmeter.md](references/151-java-performance-jmeter.md).
diff --git a/skills/151-java-performance-jmeter/references/151-java-performance-jmeter.md b/skills/151-java-performance-jmeter/references/151-java-performance-jmeter.md
new file mode 100644
index 00000000..a264115a
--- /dev/null
+++ b/skills/151-java-performance-jmeter/references/151-java-performance-jmeter.md
@@ -0,0 +1,379 @@
+---
+name: 151-java-performance-jmeter
+description: Use when you need to set up JMeter performance testing for a Java project — including creating the run-jmeter.sh script from the exact template, configuring load tests with loops, threads, and ramp-up, or running performance tests from the project root.
+license: Apache-2.0
+metadata:
+ author: Juan Antonio Breña Moral
+ version: 0.13.0-SNAPSHOT
+---
+# Run performance tests based on JMeter
+
+## Role
+
+You are a senior software engineer with extensive experience in Java software development and performance testing.
+
+## Goal
+
+When a user requests JMeter performance testing setup, follow the steps below to provide a complete performance testing solution.
+
+## Constraints
+
+The following prerequisites must be met before proceeding with JMeter performance testing setup:
+
+- Verify that JMeter is installed and available in PATH before proceeding
+- If JMeter is not available, show a message and exit
+
+## Steps
+
+### Step 1: Create Script File
+
+When a user requests JMeter performance testing setup:
+
+1. **Create the file** as `run-jmeter.sh` in the project root
+2. **Make it executable**: `chmod +x run-jmeter.sh`
+3. **Verify permissions**: Ensure the script has execute permissions
+
+**Required Project Structure:**
+```
+project-root/
+├── run-jmeter.sh                     # The performance script
+├── jmeter.log                        # Execution log (created on run)
+├── src/test/resources/jmeter/
+│   └── load-test.jmx                 # JMeter test plan
+└── target/                           # Generated reports (auto-created)
+    ├── jmeter-results.jtl            # Raw results
+    └── jmeter-report/                # HTML dashboard
+        └── index.html                # Main report
+```
+
+### Step 2: Copy Template Content
+
+Create the entire script with the following content:
+
+```bash
+#!/bin/bash
+
+# JMeter Load Test Script
+# This script runs JMeter tests and generates HTML reports
+
+set -e
+
+# Default configuration (can be overridden via environment variables)
+JMETER_LOOPS=${JMETER_LOOPS:-1000}
+JMETER_THREADS=${JMETER_THREADS:-1}
+JMETER_RAMP_UP=${JMETER_RAMP_UP:-1}
+GUI_MODE=false
+
+# Project paths
+PROJECT_DIR=$(pwd)
+TEST_PLAN="$PROJECT_DIR/src/test/resources/jmeter/load-test.jmx"
+RESULTS_FILE="$PROJECT_DIR/target/jmeter-results.jtl"
+REPORT_DIR="$PROJECT_DIR/target/jmeter-report"
+LOG_FILE="$PROJECT_DIR/jmeter.log"
+
+# Colors for output
+RED='\033[0;31m'
+GREEN='\033[0;32m'
+YELLOW='\033[1;33m'
+BLUE='\033[0;34m'
+NC='\033[0m' # No Color
+
+# Function to print colored output
+print_info() {
+ echo -e "${BLUE}[INFO]${NC} $1"
+}
+
+print_success() {
+ echo -e "${GREEN}[SUCCESS]${NC} $1"
+}
+
+print_warning() {
+ echo -e "${YELLOW}[WARNING]${NC} $1"
+}
+
+print_error() {
+ echo -e "${RED}[ERROR]${NC} $1"
+}
+
+# Function to show usage
+show_usage() {
+ echo "JMeter Load Test Script"
+ echo "======================="
+ echo ""
+ echo "DESCRIPTION:"
+ echo " This script executes JMeter load tests in non-GUI mode and generates"
+ echo " comprehensive HTML reports. It provides flexible configuration options"
+ echo " and automatically opens the results in your browser."
+ echo ""
+ echo "USAGE:"
+ echo " $0 [OPTIONS]"
+ echo ""
+ echo "OPTIONS:"
+ echo " -l, --loops LOOPS Number of loops per thread (default: $JMETER_LOOPS)"
+ echo " -t, --threads THREADS Number of concurrent threads (default: $JMETER_THREADS)"
+ echo " -r, --ramp-up SECONDS Ramp-up period in seconds (default: $JMETER_RAMP_UP)"
+ echo " -g, --gui Open JMeter GUI instead of running in non-GUI mode"
+ echo " -h, --help Show this detailed help message"
+ echo ""
+ echo "ENVIRONMENT VARIABLES:"
+ echo " JMETER_LOOPS Override default number of loops"
+ echo " JMETER_THREADS Override default number of threads"
+ echo " JMETER_RAMP_UP Override default ramp-up period"
+ echo ""
+ echo "EXAMPLES:"
+ echo " $0 # Run with defaults (1000 loops, 1 thread, 1s ramp-up)"
+ echo " $0 -l 500 -t 10 -r 30 # 500 loops, 10 threads, 30s ramp-up"
+ echo " $0 --loops 100 --threads 5 # Long form options"
+ echo " $0 -g # Open JMeter GUI with test plan loaded"
+ echo " JMETER_THREADS=5 $0 # Using environment variables"
+ echo ""
+ echo "OUTPUT FILES:"
+ echo " target/jmeter-results.jtl # Raw test results (JTL format)"
+ echo " target/jmeter-report/index.html # HTML dashboard report"
+ echo " jmeter.log # JMeter execution log"
+ echo ""
+ echo "REQUIREMENTS:"
+ echo " - JMeter must be installed and available in PATH"
+ echo " - Test plan must exist at: src/test/resources/jmeter/load-test.jmx"
+ echo ""
+ echo "For more information about JMeter, visit: https://jmeter.apache.org/"
+}
+
+# Parse command line arguments
+while [[ $# -gt 0 ]]; do
+ case $1 in
+ -l|--loops)
+ JMETER_LOOPS="$2"
+ shift 2
+ ;;
+ -t|--threads)
+ JMETER_THREADS="$2"
+ shift 2
+ ;;
+ -r|--ramp-up)
+ JMETER_RAMP_UP="$2"
+ shift 2
+ ;;
+ -g|--gui)
+ GUI_MODE=true
+ shift
+ ;;
+ -h|--help)
+ show_usage
+ exit 0
+ ;;
+ *)
+ print_error "Unknown option: $1"
+ show_usage
+ exit 1
+ ;;
+ esac
+done
+
+# Function to check if JMeter is installed
+check_jmeter() {
+ if ! command -v jmeter &> /dev/null; then
+ print_error "JMeter is not installed or not in PATH"
+ print_info "Please install JMeter and ensure it's in your PATH"
+ print_info "Download from: https://jmeter.apache.org/download_jmeter.cgi"
+ exit 1
+ fi
+
+ JMETER_VERSION=$(jmeter -v 2>&1 | head -n 1 | grep -o 'Version [0-9.]*' | cut -d' ' -f2)
+ print_info "Found JMeter version: $JMETER_VERSION"
+}
+
+# Function to check if test plan exists
+check_test_plan() {
+ if [[ ! -f "$TEST_PLAN" ]]; then
+ print_error "JMeter test plan not found: $TEST_PLAN"
+ exit 1
+ fi
+ print_info "Using test plan: $TEST_PLAN"
+}
+
+# Function to create target directory
+create_target_dir() {
+ mkdir -p "$(dirname "$RESULTS_FILE")"
+ mkdir -p "$REPORT_DIR"
+ print_info "Created target directories"
+}
+
+# Function to clean previous results
+clean_previous_results() {
+ if [[ -f "$RESULTS_FILE" ]]; then
+ rm "$RESULTS_FILE"
+ print_info "Cleaned previous results file"
+ fi
+
+ if [[ -d "$REPORT_DIR" ]]; then
+ rm -rf "$REPORT_DIR"
+ print_info "Cleaned previous report directory"
+ fi
+}
+
+# Function to run JMeter test in non-GUI mode
+run_jmeter_test() {
+ print_info "Starting JMeter test..."
+ print_info "Configuration:"
+ print_info " - Loops: $JMETER_LOOPS"
+ print_info " - Threads: $JMETER_THREADS"
+ print_info " - Ramp-up: $JMETER_RAMP_UP seconds"
+
+ # Run JMeter in non-GUI mode
+ jmeter -n \
+ -t "$TEST_PLAN" \
+ -l "$RESULTS_FILE" \
+ -e \
+ -o "$REPORT_DIR" \
+ -Jloops="$JMETER_LOOPS" \
+ -Jthreads="$JMETER_THREADS" \
+ -Jrampup="$JMETER_RAMP_UP" \
+ -j "$LOG_FILE"
+
+ if [[ $? -eq 0 ]]; then
+ print_success "JMeter test completed successfully"
+ else
+ print_error "JMeter test failed"
+ exit 1
+ fi
+}
+
+# Function to run JMeter in GUI mode
+run_jmeter_gui() {
+ print_info "Opening JMeter GUI..."
+ print_info "Test plan will be loaded: $TEST_PLAN"
+ print_info "You can configure and run tests manually in the GUI"
+
+ # Run JMeter in GUI mode with test plan loaded
+ jmeter -t "$TEST_PLAN" \
+ -Jloops="$JMETER_LOOPS" \
+ -Jthreads="$JMETER_THREADS" \
+ -Jrampup="$JMETER_RAMP_UP"
+
+ print_success "JMeter GUI session completed"
+}
+
+# Function to show results
+show_results() {
+ print_success "Test Results:"
+ print_info "Results file: $RESULTS_FILE"
+ print_info "HTML report: $REPORT_DIR/index.html"
+ print_info "Log file: $LOG_FILE"
+
+ if [[ -f "$RESULTS_FILE" ]]; then
+ TOTAL_SAMPLES=$(tail -n +2 "$RESULTS_FILE" | wc -l)
+ print_info "Total samples: $TOTAL_SAMPLES"
+ fi
+
+ # Try to open the HTML report
+ if command -v open &> /dev/null; then
+ print_info "Opening HTML report in browser..."
+ open "$REPORT_DIR/index.html"
+ elif command -v xdg-open &> /dev/null; then
+ print_info "Opening HTML report in browser..."
+ xdg-open "$REPORT_DIR/index.html"
+ else
+ print_info "To view the HTML report, open: file://$REPORT_DIR/index.html"
+ fi
+}
+
+# Main execution
+main() {
+ print_info "JMeter Load Test Script"
+ print_info "======================="
+
+ check_jmeter
+ check_test_plan
+
+ if [[ "$GUI_MODE" == "true" ]]; then
+ # GUI mode - just open JMeter with the test plan
+ run_jmeter_gui
+ print_success "JMeter GUI session completed successfully!"
+ else
+ # Non-GUI mode - run automated test and generate report
+ create_target_dir
+ clean_previous_results
+ run_jmeter_test
+ show_results
+ print_success "JMeter load test completed successfully!"
+ fi
+}
+
+# Execute main function
+main "$@"
+```
+
+#### Step Constraints
+
+- **DO NOT** create your own version of the performance script
+- **DO NOT** modify the template logic, structure, or features
+- **DO NOT** add enhancements or interpretations
+- **ONLY** use the exact template provided
+- **ONLY** copy the script verbatim without any changes
+
+### Step 3: Provide Usage Information
+
+**Basic Usage:**
+```bash
+# Run with default settings (1000 loops, 1 thread, 1s ramp-up)
+./run-jmeter.sh
+
+# Custom configuration
+./run-jmeter.sh -l 500 -t 10 -r 30
+
+# Open JMeter GUI
+./run-jmeter.sh -g
+```
+
+**Available Options:**
+- `-l, --loops LOOPS`: Number of loops per thread (default: 1000)
+- `-t, --threads THREADS`: Number of concurrent threads (default: 1)
+- `-r, --ramp-up SECONDS`: Ramp-up period in seconds (default: 1)
+- `-g, --gui`: Open JMeter GUI instead of running in non-GUI mode
+- `-h, --help`: Show detailed help message
+
+**Environment Variables:**
+```bash
+# Override defaults using environment variables
+JMETER_LOOPS=500 JMETER_THREADS=5 ./run-jmeter.sh
+
+# Or export them
+export JMETER_LOOPS=1000
+export JMETER_THREADS=10
+export JMETER_RAMP_UP=30
+./run-jmeter.sh
+```
+
+**Example Scenarios:**
+```bash
+# Light load test
+./run-jmeter.sh -l 100 -t 1 -r 1
+
+# Medium load test
+./run-jmeter.sh -l 500 -t 5 -r 10
+
+# Heavy load test
+./run-jmeter.sh -l 1000 -t 20 -r 60
+
+# Quick GUI setup
+./run-jmeter.sh -g
+```
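For the script's `-Jloops`, `-Jthreads`, and `-Jrampup` properties to take effect, the Thread Group in `load-test.jmx` must read them with JMeter's `__P` property function. A minimal sketch of the relevant fields (element attributes abbreviated; your plan's actual structure may differ):

```xml
<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Load Test">
  <stringProp name="ThreadGroup.num_threads">${__P(threads,1)}</stringProp>
  <stringProp name="ThreadGroup.ramp_time">${__P(rampup,1)}</stringProp>
  <elementProp name="ThreadGroup.main_controller" elementType="LoopController">
    <stringProp name="LoopController.loops">${__P(loops,1000)}</stringProp>
  </elementProp>
</ThreadGroup>
```

The second argument to `__P` is the default used when no `-J` flag or environment variable override is supplied.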
+
+## Output Format
+
+- Copy the performance script template exactly as specified
+- Create `run-jmeter.sh` file in the project root
+- Make the script executable with `chmod +x run-jmeter.sh`
+- **VERIFY**: Final script contains ONLY what appears in the template
+
+## Safeguards
+
+- **PREREQUISITE CHECK**: Verify JMeter is installed and accessible via `jmeter --version` before creating the script
+- **SCRIPT CREATION SAFETY**: Ensure `run-jmeter.sh` script is created with exactly the template content without modifications
+- **PERMISSION VALIDATION**: Verify the script is executable with `chmod +x run-jmeter.sh` and confirm permissions with `ls -la run-jmeter.sh`
+- **FUNCTIONALITY TEST**: Verify the script works correctly with a basic usage test: `./run-jmeter.sh -h`
+- **PROJECT STRUCTURE VERIFICATION**: Confirm the expected directory structure exists (`src/test/resources/jmeter/`) or will be created by the script
+- **TEMPLATE INTEGRITY**: Ensure the copied template content matches exactly what is specified in this reference without any alterations
+- **DEPENDENCY VALIDATION**: Verify that all required directories and paths referenced in the script will be accessible and writable
+- **EXECUTION SAFETY**: Test script execution in a safe environment before providing it to the user for production use
\ No newline at end of file
diff --git a/skills/161-java-profiling-detect/SKILL.md b/skills/161-java-profiling-detect/SKILL.md
new file mode 100644
index 00000000..e734fa25
--- /dev/null
+++ b/skills/161-java-profiling-detect/SKILL.md
@@ -0,0 +1,35 @@
+---
+name: 161-java-profiling-detect
+description: Use when you need to set up Java application profiling to detect and measure performance issues — including automated async-profiler v4.0 setup, problem-driven profiling (CPU, memory, threading, GC, I/O), interactive profiling scripts, JFR integration with Java 25 (JEP 518, JEP 520), or collecting profiling data with flamegraphs and JFR recordings. Part of the skills-for-java project
+license: Apache-2.0
+metadata:
+ author: Juan Antonio Breña Moral
+ version: 0.13.0-SNAPSHOT
+---
+# Java Profiling Workflow / Step 1 / Collect data to measure potential issues
+
+Set up the Java profiling detection phase: automated environment setup with async-profiler v4.0, problem-driven interactive profiling scripts, and comprehensive data collection for CPU hotspots, memory leaks, lock contention, GC issues, and I/O bottlenecks. Uses JEP 518 (Cooperative Sampling) and JEP 520 (Method Timing) for reduced overhead.
+
+**What is covered in this Skill?**
+
+- Run application with profiling JVM flags (run-java-process-for-profiling.sh)
+- Interactive profiling script (profiler/scripts/profile-java-process.sh) — copy exact template
+- Directory structure: profiler/scripts/, profiler/results/, profiler/current/
+- Automated OS/architecture detection and async-profiler download
+- CPU, memory, lock, GC, I/O profiling modes
+- Flamegraph and JFR output with timestamped results
+
+**Scope:** Use the exact bash script templates without modification or interpretation.
+
+## Constraints
+
+Copy bash scripts exactly from templates. Ensure JVM flags are applied for profiling compatibility. Verify Java processes are running before attaching profiler.
+
+- **CRITICAL**: Copy the bash script templates exactly — do not modify, interpret, or enhance
+- **SETUP**: Create profiler directory structure: profiler/scripts, profiler/results
+- **EXECUTABLE**: Make scripts executable with chmod +x
+- **BEFORE APPLYING**: Read the reference for exact script templates and setup instructions
+
+## Reference
+
+For detailed guidance, examples, and constraints, see [references/161-java-profiling-detect.md](references/161-java-profiling-detect.md).
diff --git a/skills/161-java-profiling-detect/references/161-java-profiling-detect.md b/skills/161-java-profiling-detect/references/161-java-profiling-detect.md
new file mode 100644
index 00000000..642b7fac
--- /dev/null
+++ b/skills/161-java-profiling-detect/references/161-java-profiling-detect.md
@@ -0,0 +1,2095 @@
+---
+name: 161-java-profiling-detect
+description: Use when you need to set up Java application profiling to detect and measure performance issues — including automated async-profiler v4.0 setup, problem-driven profiling (CPU, memory, threading, GC, I/O), interactive profiling scripts, or collecting profiling data with flamegraphs and JFR recordings.
+license: Apache-2.0
+metadata:
+ author: Juan Antonio Breña Moral
+ version: 0.13.0-SNAPSHOT
+---
+# Java Profiling Workflow / Step 1 / Collect data to measure potential issues
+
+## Role
+
+You are a senior software engineer with extensive experience in Java software development.
+
+## Goal
+
+This cursor rule provides a comprehensive Java application profiling framework designed to detect and measure performance issues systematically.
+It serves as the first step in a structured profiling workflow, focusing on data collection and problem identification using async-profiler v4.0.
+
+The rule automates the entire profiling setup process, from detecting running Java processes to downloading and configuring the appropriate profiler tools for your system.
+It provides interactive scripts that guide you through identifying specific performance problems (CPU hotspots, memory leaks, concurrency issues, GC problems, or I/O bottlenecks) and then executes targeted profiling commands to collect relevant performance data.
+
+Key capabilities include:
+- **Automated Environment Setup**: Detects OS/architecture and downloads async-profiler v4.0 automatically
+- **Problem-Driven Profiling**: Guides users through identifying specific performance issues before profiling
+- **Interactive Workflow**: Provides menu-driven interface for selecting appropriate profiling strategies
+- **Comprehensive Data Collection**: Supports CPU profiling, memory allocation tracking, lock contention analysis, GC monitoring, and I/O bottleneck detection
+- **Modern Tooling**: Leverages async-profiler v4.0 features including interactive heatmaps, native memory leak detection, and enhanced JFR conversion
+- **Enhanced JFR Integration (Java 25)**: Utilizes JEP 518 (JFR Cooperative Sampling) and JEP 520 (JFR Method Timing & Tracing) for improved profiling accuracy and reduced overhead
+- **Advanced Sampling**: Benefits from cooperative sampling techniques that minimize profiling impact while maintaining measurement precision
+- **Organized Results**: Maintains clean directory structure with timestamped results for easy analysis and comparison
+
+The rule ensures consistent, repeatable profiling procedures while providing the flexibility to target specific performance concerns based on your application's behavior and suspected issues.
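+
+As a quick orientation before the templates below, the same kind of JFR data can also be collected manually with `jcmd`, which ships with the JDK. A minimal sketch, assuming a running Java process whose PID you substitute for the placeholder value (the JEP 518/520 improvements apply automatically when the target JVM is Java 25):
+
+```bash
+#!/bin/sh
+# Sketch: start a time-boxed JFR recording against a running JVM with jcmd.
+# PID=12345 is a placeholder, not a real process id.
+PID="${1:-12345}"
+CMD="jcmd $PID JFR.start name=detect duration=120s settings=profile filename=profiler/results/detect.jfr"
+echo "$CMD"
+# Only attach when jcmd is available and the process actually exists:
+if command -v jcmd >/dev/null 2>&1 && kill -0 "$PID" 2>/dev/null; then
+  $CMD || true
+fi
+```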
+
+The profiling setup uses a clean folder structure with everything contained in the profiler directory:
+
+```text
+your-project/
+├── run-java-process-for-profiling.sh # ← Step 1: Run main application with profiling JVM flags
+└── profiler/                            # ← All profiling-related files
+    ├── scripts/                         # ← Profiling scripts and tools
+    │   └── profile-java-process.sh      # ← Step 2: Interactive profiling script
+    ├── results/                         # ← Generated profiling output
+    │   ├── *.html                       # ← Flamegraph files
+    │   └── *.jfr                        # ← JFR recording files
+    ├── current/                         # ← Symlink to current profiler version
+    └── async-profiler-*/                # ← Downloaded profiler binaries
+```
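+
+Once downloaded, the interactive script drives async-profiler through its `asprof` launcher (the launcher name used since async-profiler 3.0; the `profiler/current/bin/asprof` path assumes the layout above). A hedged sketch of the kind of raw command it wraps:
+
+```bash
+#!/bin/sh
+# Sketch of the raw async-profiler command the interactive script wraps.
+# PID=12345 is a placeholder for the target Java process id.
+PID="${1:-12345}"
+TS=$(date +%Y%m%d-%H%M%S)
+OUT="profiler/results/cpu-flamegraph-$TS.html"
+CMD="profiler/current/bin/asprof -e cpu -d 30 -f $OUT $PID"
+echo "$CMD"
+# Only run when the profiler has actually been downloaded:
+if [ -x profiler/current/bin/asprof ]; then
+  $CMD || true
+fi
+```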
+
+## Steps
+
+### Step 1: Setup Application Runner Script
+
+**IMPORTANT**: Use the exact bash script from the template without any modification or interpretation.
+
+```bash
+#!/bin/bash
+
+# Java Application Runner with Async-Profiler Support (Java 21-25 Enhanced)
+# This script runs Spring Boot or Quarkus applications with JVM flags optimized for async-profiler
+# Enhanced for Java 21-25 with virtual threads, modern GC, and improved profiling support
+
+set -e
+
+# Default values
+PROFILE_MODE="cpu"
+APP_JAR=""
+APP_CLASS="info.jab.ms.MainApplication"
+HEAP_SIZE="512m"
+PROFILE_NAME="default"
+FRAMEWORK="auto"
+ENABLE_GC_LOG="false"
+ENABLE_VIRTUAL_THREADS="false"
+JAVA_VERSION=""
+GC_ALGORITHM="auto"
+ENABLE_PREVIEW_FEATURES="false"
+
+# Colors for output
+RED='\033[0;31m'
+GREEN='\033[0;32m'
+YELLOW='\033[1;33m'
+BLUE='\033[0;34m'
+NC='\033[0m' # No Color
+
+# Function to print colored output
+log_info() {
+ echo -e "${BLUE}[INFO]${NC} $1"
+}
+
+log_warn() {
+ echo -e "${YELLOW}[WARN]${NC} $1"
+}
+
+log_error() {
+ echo -e "${RED}[ERROR]${NC} $1"
+}
+
+log_success() {
+ echo -e "${GREEN}[SUCCESS]${NC} $1"
+}
+
+# Function to show usage
+show_usage() {
+ cat << EOF
+Usage: $0 [OPTIONS]
+
+Run Spring Boot or Quarkus application with async-profiler support
+
+OPTIONS:
+ -m, --mode MODE Profiling mode: cpu, alloc, wall, lock (default: cpu)
+ -f, --framework FRAMEWORK Framework: springboot, quarkus, auto (default: auto)
+ -j, --jar PATH Path to application JAR file
+ -c, --class CLASS Main class to run (default: info.jab.ms.MainApplication)
+ -h, --heap SIZE Heap size (default: 512m)
+ -p, --profile PROFILE Profile to activate (default: default)
+ For Spring Boot: sets spring.profiles.active
+ For Quarkus: sets quarkus.profile
+ --gc-log Enable GC logging (disabled by default)
+ --virtual-threads Enable virtual threads (Java 21+, disabled by default)
+ --gc GC_TYPE GC algorithm: auto, g1, zgc, parallel, serial (default: auto)
+ --preview Enable preview features for newer Java versions
+ --help Show this help message
+
+FRAMEWORK DETECTION:
+ auto Automatically detect framework from pom.xml or JAR naming
+ springboot Force Spring Boot mode
+ quarkus Force Quarkus mode
+
+EXAMPLES:
+ # Auto-detect framework and run with CPU profiling
+ $0 -m cpu
+
+ # Force Spring Boot with memory allocation profiling
+ $0 -f springboot -m alloc
+
+ # Force Quarkus with custom heap size and profile
+ $0 -f quarkus -h 1g -p dev
+
+ # Run Spring Boot with virtual threads (Java 21+)
+ $0 -f springboot --virtual-threads
+
+ # Run with ZGC for low-latency applications (Java 21+)
+ $0 -f springboot --gc zgc -h 2g
+
+ # Run with preview features enabled (Java 21+)
+ $0 -f springboot --preview
+
+ # Run with wall-clock profiling and virtual threads
+ $0 -m wall --virtual-threads
+
+ # Run with comprehensive GC logging
+ $0 -m cpu --gc-log
+
+ # Run with modern GC, virtual threads, and comprehensive logging
+ $0 -m alloc --gc-log --virtual-threads --gc g1
+
+EOF
+}
+
+# Parse command line arguments
+while [[ $# -gt 0 ]]; do
+ case $1 in
+ -m|--mode)
+ PROFILE_MODE="$2"
+ shift 2
+ ;;
+ -f|--framework)
+ FRAMEWORK="$2"
+ shift 2
+ ;;
+ -j|--jar)
+ APP_JAR="$2"
+ shift 2
+ ;;
+ -c|--class)
+ APP_CLASS="$2"
+ shift 2
+ ;;
+ -h|--heap)
+ HEAP_SIZE="$2"
+ shift 2
+ ;;
+ -p|--profile)
+ PROFILE_NAME="$2"
+ shift 2
+ ;;
+ --gc-log)
+ ENABLE_GC_LOG="true"
+ shift
+ ;;
+ --virtual-threads)
+ ENABLE_VIRTUAL_THREADS="true"
+ shift
+ ;;
+ --gc)
+ GC_ALGORITHM="$2"
+ shift 2
+ ;;
+ --preview)
+ ENABLE_PREVIEW_FEATURES="true"
+ shift
+ ;;
+ --help)
+ show_usage
+ exit 0
+ ;;
+ *)
+ log_error "Unknown option: $1"
+ show_usage
+ exit 1
+ ;;
+ esac
+done
+
+# Validate profiling mode
+case $PROFILE_MODE in
+ cpu|alloc|wall|lock)
+ ;;
+ *)
+ log_error "Invalid profiling mode: $PROFILE_MODE. Use: cpu, alloc, wall, lock"
+ exit 1
+ ;;
+esac
+
+# Validate framework
+case $FRAMEWORK in
+ auto|springboot|quarkus)
+ ;;
+ *)
+ log_error "Invalid framework: $FRAMEWORK. Use: auto, springboot, quarkus"
+ exit 1
+ ;;
+esac
+
+# Validate GC algorithm
+case $GC_ALGORITHM in
+ auto|g1|zgc|parallel|serial)
+ ;;
+ *)
+ log_error "Invalid GC algorithm: $GC_ALGORITHM. Use: auto, g1, zgc, parallel, serial"
+ exit 1
+ ;;
+esac
+
+# Function to detect framework
+detect_framework() {
+ if [[ "$FRAMEWORK" != "auto" ]]; then
+ return 0
+ fi
+
+ # Check pom.xml for framework indicators
+ if [[ -f "pom.xml" ]]; then
+ if grep -q "spring-boot-starter" pom.xml; then
+ FRAMEWORK="springboot"
+ log_info "Detected Spring Boot from pom.xml"
+ elif grep -q "quarkus-universe-bom\|quarkus-bom" pom.xml; then
+ FRAMEWORK="quarkus"
+ log_info "Detected Quarkus from pom.xml"
+ fi
+ fi
+
+ # Check for JAR files if framework still not detected
+ if [[ "$FRAMEWORK" == "auto" ]]; then
+ if [[ -n "$APP_JAR" ]]; then
+ if [[ "$APP_JAR" == *"spring-boot"* ]]; then
+ FRAMEWORK="springboot"
+ log_info "Detected Spring Boot from JAR name"
+ elif [[ "$APP_JAR" == *"quarkus"* ]] || [[ "$APP_JAR" == *"runner"* ]]; then
+ FRAMEWORK="quarkus"
+ log_info "Detected Quarkus from JAR name"
+ fi
+ else
+ # Check for JAR files in target directory
+ if ls target/*-runner.jar >/dev/null 2>&1; then
+ FRAMEWORK="quarkus"
+ log_info "Detected Quarkus from runner JAR"
+ elif ls target/*spring-boot*.jar >/dev/null 2>&1; then
+ FRAMEWORK="springboot"
+ log_info "Detected Spring Boot from JAR files"
+ fi
+ fi
+ fi
+
+ # Default to Spring Boot if still not detected
+ if [[ "$FRAMEWORK" == "auto" ]]; then
+ FRAMEWORK="springboot"
+ log_warn "Could not detect framework, defaulting to Spring Boot"
+ fi
+}
+
+# Function to set framework-specific JAR if not provided
+set_default_jar() {
+ if [[ -z "$APP_JAR" ]]; then
+ case $FRAMEWORK in
+ springboot)
+ # Look for Spring Boot JAR
+ if ls target/*spring-boot*.jar >/dev/null 2>&1; then
+ APP_JAR=$(ls target/*spring-boot*.jar | head -1)
+ else
+ # Fallback to common naming pattern
+ PROJECT_NAME=$(basename "$(pwd)")
+ APP_JAR="target/${PROJECT_NAME}-1.0-SNAPSHOT.jar"
+ fi
+ ;;
+ quarkus)
+ # Look for Quarkus runner JAR
+ if ls target/*-runner.jar >/dev/null 2>&1; then
+ APP_JAR=$(ls target/*-runner.jar | head -1)
+ else
+ # Fallback to common naming pattern
+ PROJECT_NAME=$(basename "$(pwd)")
+ APP_JAR="target/${PROJECT_NAME}-1.0-SNAPSHOT-runner.jar"
+ fi
+ ;;
+ esac
+ log_info "Using default JAR: $APP_JAR"
+ fi
+}
+
+# Function to detect Java version
+detect_java_version() {
+ if command -v java >/dev/null 2>&1; then
+ JAVA_VERSION=$(java -version 2>&1 | head -n 1 | cut -d'"' -f2 | cut -d'.' -f1)
+ if [[ "$JAVA_VERSION" == "1" ]]; then
+ # Handle Java 8 version format (1.8.x)
+ JAVA_VERSION=$(java -version 2>&1 | head -n 1 | cut -d'"' -f2 | cut -d'.' -f2)
+ fi
+ log_info "Detected Java version: $JAVA_VERSION"
+
+ # Validate virtual threads support
+ if [[ "$ENABLE_VIRTUAL_THREADS" == "true" ]] && [[ "$JAVA_VERSION" -lt 21 ]]; then
+ log_warn "Virtual threads require Java 21+. Current version: $JAVA_VERSION. Disabling virtual threads."
+ ENABLE_VIRTUAL_THREADS="false"
+ fi
+
+ # Validate preview features
+ if [[ "$ENABLE_PREVIEW_FEATURES" == "true" ]] && [[ "$JAVA_VERSION" -lt 21 ]]; then
+ log_warn "Preview features flag requires Java 21+. Current version: $JAVA_VERSION. Disabling preview features."
+ ENABLE_PREVIEW_FEATURES="false"
+ fi
+ else
+ log_error "Java not found in PATH"
+ exit 1
+ fi
+}
+
+# Function to select optimal GC algorithm based on Java version
+select_gc_algorithm() {
+ if [[ "$GC_ALGORITHM" == "auto" ]]; then
+ if [[ "$JAVA_VERSION" -ge 21 ]]; then
+ # For Java 21+, prefer G1GC for balanced performance
+ GC_ALGORITHM="g1"
+ log_info "Auto-selected G1GC for Java $JAVA_VERSION"
+ elif [[ "$JAVA_VERSION" -ge 17 ]]; then
+ GC_ALGORITHM="g1"
+ log_info "Auto-selected G1GC for Java $JAVA_VERSION"
+ else
+ GC_ALGORITHM="parallel"
+ log_info "Auto-selected Parallel GC for Java $JAVA_VERSION"
+ fi
+ fi
+}
+
+# Detect framework, Java version, and set defaults
+detect_framework
+detect_java_version
+select_gc_algorithm
+set_default_jar
+
+# Generate timestamp for output files
+TIMESTAMP=$(date +"%Y%m%d-%H%M%S")
+
+log_info "Starting $FRAMEWORK application"
+log_info "Profile mode: $PROFILE_MODE"
+log_info "Framework: $FRAMEWORK"
+log_info "Profile: $PROFILE_NAME"
+log_info "Java version: $JAVA_VERSION"
+log_info "GC algorithm: $GC_ALGORITHM"
+
+# JVM flags for optimal profiling with async-profiler (Java 21-25 enhanced)
+JVM_FLAGS=(
+ # Memory settings
+ "-Xms$HEAP_SIZE"
+ "-Xmx$HEAP_SIZE"
+
+ # Profiling optimization flags
+ "-XX:+UnlockDiagnosticVMOptions"
+ "-XX:+DebugNonSafepoints"
+ "-XX:+PreserveFramePointer"
+
+ # Security settings for profiler attachment
+ "-Djdk.attach.allowAttachSelf=true"
+)
+
+# Add GC-specific flags
+case $GC_ALGORITHM in
+ g1)
+ JVM_FLAGS+=(
+ "-XX:+UseG1GC"
+ "-XX:MaxGCPauseMillis=200"
+ "-XX:G1HeapRegionSize=16m"
+ )
+ if [[ "$JAVA_VERSION" -ge 21 ]]; then
+ JVM_FLAGS+=(
+ "-XX:G1MixedGCCountTarget=8"
+ "-XX:G1HeapWastePercent=10"
+ )
+ fi
+ ;;
+ zgc)
+ if [[ "$JAVA_VERSION" -ge 21 ]]; then
+ JVM_FLAGS+=(
+ "-XX:+UseZGC"
+ )
+ elif [[ "$JAVA_VERSION" -ge 17 ]]; then
+ JVM_FLAGS+=(
+ "-XX:+UseZGC"
+ "-XX:+UnlockExperimentalVMOptions"
+ )
+ else
+ log_warn "ZGC requires Java 17+. Falling back to G1GC"
+ JVM_FLAGS+=(
+ "-XX:+UseG1GC"
+ "-XX:MaxGCPauseMillis=200"
+ )
+ fi
+ ;;
+ parallel)
+ JVM_FLAGS+=(
+ "-XX:+UseParallelGC"
+ )
+ ;;
+ serial)
+ JVM_FLAGS+=(
+ "-XX:+UseSerialGC"
+ )
+ ;;
+esac
+
+# Add Java version-specific optimizations
+if [[ "$JAVA_VERSION" -ge 21 ]]; then
+ JVM_FLAGS+=(
+ # Enhanced performance flags for Java 21+
+ "-XX:+UseStringDeduplication"
+ )
+fi
+
+# Function to get CPU core count (cross-platform)
+get_cpu_cores() {
+ if command -v nproc >/dev/null 2>&1; then
+ # Linux
+ nproc
+ elif [[ -f /proc/cpuinfo ]]; then
+ # Linux fallback
+ grep -c ^processor /proc/cpuinfo
+ elif command -v sysctl >/dev/null 2>&1; then
+ # macOS
+ sysctl -n hw.ncpu
+ else
+ # Default fallback
+ echo "4"
+ fi
+}
+
+# Add virtual threads support (Java 21+)
+if [[ "$ENABLE_VIRTUAL_THREADS" == "true" ]]; then
+ CPU_CORES=$(get_cpu_cores)
+ JVM_FLAGS+=(
+ "-Djdk.virtualThreadScheduler.parallelism=$CPU_CORES"
+ "-Djdk.virtualThreadScheduler.maxPoolSize=$((CPU_CORES * 256))"
+ )
+ log_info "Virtual threads enabled with optimized scheduler settings (CPU cores: $CPU_CORES)"
+fi
+
+# Add preview features if enabled
+if [[ "$ENABLE_PREVIEW_FEATURES" == "true" ]]; then
+ JVM_FLAGS+=(
+ "--enable-preview"
+ )
+ log_info "Preview features enabled"
+fi
+
+# Add GC logging if enabled (enhanced for Java 21-25)
+if [[ "$ENABLE_GC_LOG" == "true" ]]; then
+ if [[ "$JAVA_VERSION" -ge 21 ]]; then
+ JVM_FLAGS+=(
+ "-Xlog:gc*,heap*,ergo*:gc-${TIMESTAMP}.log:time,tags,level"
+ "-Xlog:gc+heap=info"
+ )
+ elif [[ "$JAVA_VERSION" -ge 11 ]]; then
+ JVM_FLAGS+=(
+ "-Xlog:gc*:gc-${TIMESTAMP}.log:time,tags"
+ )
+ else
+ # Legacy GC logging for Java 8
+ JVM_FLAGS+=(
+ "-XX:+PrintGC"
+ "-XX:+PrintGCDetails"
+ "-XX:+PrintGCTimeStamps"
+ "-Xloggc:gc-${TIMESTAMP}.log"
+ )
+ fi
+ log_info "GC logging enabled: gc-${TIMESTAMP}.log"
+fi
+
+# Framework-specific arguments (enhanced for Java 21-25)
+get_app_args() {
+ case $FRAMEWORK in
+ springboot)
+ APP_ARGS=(
+ "--spring.profiles.active=$PROFILE_NAME"
+ "--logging.level.info.jab.ms=DEBUG"
+ "--server.port=8080"
+ )
+
+ # Add virtual threads support for Spring Boot 3.2+ with Java 21+
+ if [[ "$ENABLE_VIRTUAL_THREADS" == "true" ]] && [[ "$JAVA_VERSION" -ge 21 ]]; then
+ APP_ARGS+=(
+ "--spring.threads.virtual.enabled=true"
+ )
+ log_info "Spring Boot virtual threads enabled"
+ fi
+ ;;
+ quarkus)
+ APP_ARGS=(
+ "-Dquarkus.profile=$PROFILE_NAME"
+ "-Dquarkus.log.category.\"info.jab.ms\".level=DEBUG"
+ "-Dquarkus.http.port=8080"
+ )
+
+ # Add virtual threads support for Quarkus with Java 21+
+ if [[ "$ENABLE_VIRTUAL_THREADS" == "true" ]] && [[ "$JAVA_VERSION" -ge 21 ]]; then
+ APP_ARGS+=(
+ "-Dquarkus.virtual-threads.enabled=true"
+ )
+ log_info "Quarkus virtual threads enabled"
+ fi
+ ;;
+ esac
+}
+
+# Get framework-specific arguments
+get_app_args
+
+# Function to run with Maven
+run_with_maven() {
+ log_info "Running with Maven..."
+ export JAVA_TOOL_OPTIONS="${JVM_FLAGS[*]}"
+
+ case $FRAMEWORK in
+ springboot)
+ # Start Spring Boot application
+ mvn spring-boot:run \
+ -Dspring-boot.run.profiles="$PROFILE_NAME" \
+ -Dspring-boot.run.arguments="$(IFS=' '; echo "${APP_ARGS[*]}")"
+ ;;
+ quarkus)
+ # Start Quarkus application in dev mode
+ mvn quarkus:dev \
+ -Dquarkus.args="$(IFS=' '; echo "${APP_ARGS[*]}")"
+ ;;
+ esac
+}
+
+# Function to run with JAR
+run_with_jar() {
+ if [[ ! -f "$APP_JAR" ]]; then
+ log_error "JAR file not found: $APP_JAR"
+ log_info "Building the application first..."
+ case $FRAMEWORK in
+ springboot)
+ mvn clean package -DskipTests
+ ;;
+ quarkus)
+ mvn clean package -DskipTests
+ ;;
+ esac
+ fi
+
+ if [[ ! -f "$APP_JAR" ]]; then
+ log_error "Failed to build or find JAR file: $APP_JAR"
+ exit 1
+ fi
+
+ log_info "Running JAR: $APP_JAR"
+
+ # Start the application
+ java "${JVM_FLAGS[@]}" -jar "$APP_JAR" "${APP_ARGS[@]}"
+}
+
+# Function to run with class
+run_with_class() {
+ log_info "Running main class: $APP_CLASS"
+
+ # Start the application
+ java "${JVM_FLAGS[@]}" -cp "target/classes:$(mvn dependency:build-classpath -Dmdep.outputFile=/dev/stdout -q)" "$APP_CLASS" "${APP_ARGS[@]}"
+}
+
+# Main execution
+log_info "JVM Flags: ${JVM_FLAGS[*]}"
+
+# Determine how to run the application
+if [[ -f "pom.xml" ]] && command -v mvn >/dev/null 2>&1; then
+ if [[ -f "$APP_JAR" ]]; then
+ run_with_jar
+ else
+ run_with_maven
+ fi
+elif [[ -f "$APP_JAR" ]]; then
+ run_with_jar
+else
+ run_with_class
+fi
+
+log_success "Application completed!"
+
+# Enhanced Java 21-25 Features Summary:
+# - Automatic Java version detection with compatibility warnings
+# - Virtual threads support (Java 21+) with optimized scheduler settings
+# - Modern GC algorithms (G1GC, ZGC) with version-specific optimizations
+# - Enhanced GC logging with detailed heap and ergonomics information
+# - Preview features support for experimental Java features
+# - Framework-specific virtual threads integration (Spring Boot 3.2+, Quarkus)
+# - Backward compatibility maintained for Java 8+ versions
+
+```
+
+**Script Location:**
+```
+your-project/
+└── run-java-process-for-profiling.sh # ← Step 1: Run main application with the right JVM flags for profiling
+```
+
+**Setup Instructions:**
+1. Copy the **EXACT** bash script content from the template above
+2. Save it as `run-java-process-for-profiling.sh` in your project root
+3. Make it executable: `chmod +x run-java-process-for-profiling.sh`
+4. **NO MODIFICATIONS** to the script content are needed or allowed
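+
+The setup steps above, expressed as commands run from the project root (the `touch` stands in for saving the template so the sketch is self-contained):
+
+```bash
+#!/bin/sh
+set -e
+# One-time setup from the project root; the directories match the layout above.
+[ -f run-java-process-for-profiling.sh ] || touch run-java-process-for-profiling.sh
+mkdir -p profiler/scripts profiler/results
+chmod +x run-java-process-for-profiling.sh
+```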
+
+**Purpose:**
+- Configures JVM with profiling-friendly flags
+- Ensures proper async-profiler compatibility
+- Starts your application ready for profiling
+
+**Usage:**
+```bash
+# Start your application with profiling-ready JVM settings
+./run-java-process-for-profiling.sh
+```
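+
+To confirm the profiling-friendly flags took effect on the running process, the JDK's own tools can be used. A hedged sketch that is a no-op when no JDK or Java process is present (`jps` and `jcmd` both ship with the JDK):
+
+```bash
+#!/bin/sh
+# Flags to look for; these are the profiling-relevant flags set by the runner script.
+WANTED='DebugNonSafepoints|PreserveFramePointer'
+if command -v jps >/dev/null 2>&1; then
+  # Pick the first running Java process reported by jps
+  PID=$(jps -l | grep -v 'Jps$' | awk 'NR==1 {print $1}')
+  if [ -n "$PID" ]; then
+    jcmd "$PID" VM.flags | tr ' ' '\n' | grep -E "$WANTED" || true
+  fi
+fi
+```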
+
+#### Step Constraints
+
+**CRITICAL INSTRUCTIONS FOR AI ASSISTANTS:**
+
+- **COPY THE BASH SCRIPT EXACTLY** from the template file
+- **DO NOT MODIFY, INTERPRET, OR ENHANCE** the script content
+- **DO NOT ADD NEW FEATURES** or change the logic
+- **USE THE SCRIPT VERBATIM** - every line, comment, and function exactly as provided
+- The script is already complete and tested; no improvements are needed
+
+### Step 2: Setup Interactive Profiling Script
+
+**IMPORTANT**: Use the exact bash script from the template without any modification or interpretation.
+
+```bash
+#!/bin/bash
+
+# java-profile.sh - Automated Java profiling script
+
+set -e
+
+# Colors for output
+RED='\033[0;31m'
+GREEN='\033[0;32m'
+YELLOW='\033[1;33m'
+BLUE='\033[0;34m'
+NC='\033[0m' # No Color
+
+echo -e "${GREEN}Java Application Profiler v4.4 - Organized by Tool Categories${NC}"
+echo "================================================="
+
+# Step 0: Problem Identification
+echo -e "${YELLOW}Step 0: Problem Identification${NC}"
+echo "-----"
+echo "What specific performance problem are you trying to solve?"
+echo ""
+echo -e "${BLUE}Problem Categories:${NC}"
+echo " 1) Performance Bottlenecks (CPU hotspots, inefficient algorithms, unnecessary allocations, string operations)"
+echo " 2) Memory-Related Problems (memory leaks, heap usage, object retention, off-heap issues)"
+echo " 3) Concurrency/Threading Issues (lock contention, thread pool issues, deadlocks, context switching)"
+echo " 4) Garbage Collection Problems (GC pressure, long pauses, generational issues)"
+echo " 5) I/O and Network Bottlenecks (blocking operations, connection leaks, serialization issues)"
+echo " 0) Not sure / General performance analysis"
+echo ""
+
+while true; do
+ read -p "Select the problem category you're investigating (0-5): " PROBLEM_SELECTION
+
+ if [[ "$PROBLEM_SELECTION" =~ ^[0-5]$ ]]; then
+ case $PROBLEM_SELECTION in
+ 1) PROBLEM_CATEGORY="Performance Bottlenecks"; SUGGESTED_PROFILE="CPU" ;;
+ 2) PROBLEM_CATEGORY="Memory-Related Problems"; SUGGESTED_PROFILE="Memory" ;;
+ 3) PROBLEM_CATEGORY="Concurrency/Threading Issues"; SUGGESTED_PROFILE="Lock/Threading" ;;
+ 4) PROBLEM_CATEGORY="Garbage Collection Problems"; SUGGESTED_PROFILE="GC Analysis" ;;
+ 5) PROBLEM_CATEGORY="I/O and Network Bottlenecks"; SUGGESTED_PROFILE="I/O Analysis" ;;
+ 0) PROBLEM_CATEGORY="General Analysis"; SUGGESTED_PROFILE="CPU" ;;
+ esac
+
+ echo -e "${GREEN}Selected problem category: $PROBLEM_CATEGORY${NC}"
+ echo -e "${GREEN}Suggested profiling approach: $SUGGESTED_PROFILE${NC}"
+ echo ""
+ break
+ else
+ echo -e "${RED}Invalid selection. Please choose a number between 0 and 5.${NC}"
+ fi
+done
+
+# Step 1: List Java processes and handle selection
+echo -e "${YELLOW}Step 1: Available Java Processes${NC}"
+echo "-----"
+
+# Get list of Java processes
+JAVA_PROCESSES=$(jps -l | grep -v "Jps$")
+
+if [ -z "$JAVA_PROCESSES" ]; then
+ echo -e "${RED}No Java processes found running on this system.${NC}"
+ echo "Please start your Spring Boot application first:"
+ echo " cd examples/spring-boot-memory-leak-demo"
+ echo " ./mvnw spring-boot:run"
+ echo ""
+ echo "Then try running this profiler again."
+ exit 1
+fi
+
+# Count the number of processes
+PROCESS_COUNT=$(echo "$JAVA_PROCESSES" | wc -l | xargs)
+
+if [ "$PROCESS_COUNT" -eq 1 ]; then
+ # Only one process found, auto-select it
+ PID=$(echo "$JAVA_PROCESSES" | cut -d' ' -f1)
+ PROCESS_NAME=$(echo "$JAVA_PROCESSES" | cut -d' ' -f2-)
+ echo -e "${GREEN}Found single Java process:${NC}"
+ echo " PID: $PID"
+ echo " Name: $PROCESS_NAME"
+ echo ""
+ read -p "Do you want to profile this process? (y/N): " CONFIRM
+ if [[ ! "$CONFIRM" =~ ^[Yy]$ ]]; then
+ echo "Profiling cancelled by user."
+ exit 0
+ fi
+else
+ # Multiple processes found, provide selection menu
+ echo -e "${GREEN}Found $PROCESS_COUNT Java processes:${NC}"
+ echo ""
+
+ # Create numbered list
+ i=1
+ declare -a PIDS
+ declare -a NAMES
+
+ while IFS= read -r line; do
+ pid=$(echo "$line" | cut -d' ' -f1)
+ name=$(echo "$line" | cut -d' ' -f2-)
+ PIDS[$i]=$pid
+ NAMES[$i]=$name
+ echo " $i) PID: $pid - $name"
+ ((i++))
+ done <<< "$JAVA_PROCESSES"
+
+ echo ""
+ echo "0) Manual PID entry"
+ echo ""
+
+ # Get user selection
+ while true; do
+ read -p "Select a process to profile (0-$PROCESS_COUNT): " SELECTION
+
+ if [[ "$SELECTION" =~ ^[0-9]+$ ]]; then
+ if [ "$SELECTION" -eq 0 ]; then
+ # Manual PID entry
+ read -p "Enter the PID of the Java process to profile: " PID
+ break
+ elif [ "$SELECTION" -ge 1 ] && [ "$SELECTION" -le "$PROCESS_COUNT" ]; then
+ # Valid selection
+ PID=${PIDS[$SELECTION]}
+ PROCESS_NAME=${NAMES[$SELECTION]}
+ echo -e "${GREEN}Selected process:${NC}"
+ echo " PID: $PID"
+ echo " Name: $PROCESS_NAME"
+ break
+ else
+ echo -e "${RED}Invalid selection. Please choose a number between 0 and $PROCESS_COUNT.${NC}"
+ fi
+ else
+ echo -e "${RED}Invalid input. Please enter a number.${NC}"
+ fi
+ done
+fi
+
+# Validate PID
+if ! kill -0 "$PID" 2>/dev/null; then
+ echo -e "${RED}Error: Process $PID does not exist or is not accessible${NC}"
+ exit 1
+fi
+
+# Double-check if it's a Java process (for manual entries)
+if ! jps | grep -q "^$PID "; then
+ echo -e "${RED}Error: Process $PID is not a Java process${NC}"
+ exit 1
+fi
+
+echo -e "${GREEN}Ready to profile process $PID${NC}"
+
+# Step 2: Determine profiler directory (default to current script's parent directory)
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+PROFILER_DIR="$(dirname "$SCRIPT_DIR")"
+
+echo -e "${YELLOW}Step 2: Profiler Directory Setup${NC}"
+echo "-----"
+echo -e "${GREEN}Using profiler directory: $PROFILER_DIR${NC}"
+
+# Step 3: Detect OS and download profiler
+echo -e "${YELLOW}Step 3: Setting up async-profiler${NC}"
+echo "-----"
+
+detect_os_arch() {
+ OS=$(uname -s | tr '[:upper:]' '[:lower:]')
+ ARCH=$(uname -m)
+
+ case "$OS" in
+ linux*)
+ case "$ARCH" in
+ x86_64) PLATFORM="linux-x64" ;;
+ aarch64|arm64) PLATFORM="linux-arm64" ;;
+ *) echo -e "${RED}Unsupported architecture: $ARCH${NC}"; exit 1 ;;
+ esac
+ ;;
+ darwin*)
+ case "$ARCH" in
+ x86_64|arm64) PLATFORM="macos" ;;
+ *) echo -e "${RED}Unsupported architecture: $ARCH${NC}"; exit 1 ;;
+ esac
+ ;;
+ *)
+ echo -e "${RED}Unsupported operating system: $OS${NC}"
+ exit 1
+ ;;
+ esac
+
+ echo -e "${GREEN}Detected platform: $PLATFORM${NC}"
+}
+
+download_profiler() {
+ local platform=$1
+ local profiler_dir=$2
+ local version="4.1"
+
+ # Since v4.0, macOS releases ship as .zip; Linux still uses .tar.gz
+ if [[ "$platform" == "macos" ]]; then
+ local filename="async-profiler-$version-$platform.zip"
+ local extract_cmd="unzip -q"
+ else
+ local filename="async-profiler-$version-$platform.tar.gz"
+ local extract_cmd="tar -xzf"
+ fi
+
+ local url="https://github.com/async-profiler/async-profiler/releases/download/v$version/$filename"
+
+ if [ ! -d "$profiler_dir/current" ]; then
+ echo "Downloading async-profiler..."
+ echo "URL: $url"
+ mkdir -p "$profiler_dir"
+ cd "$profiler_dir"
+
+ # Remove any failed downloads
+ rm -f async-profiler-*.tar.gz async-profiler-*.zip 2>/dev/null
+
+ if command -v curl >/dev/null 2>&1; then
+ curl -L -o "$filename" "$url"
+ elif command -v wget >/dev/null 2>&1; then
+ wget -O "$filename" "$url"
+ else
+ echo -e "${RED}Error: Neither curl nor wget is available${NC}"
+ exit 1
+ fi
+
+ # Check if download was successful
+ if [[ ! -f "$filename" ]]; then
+ echo -e "${RED}Download failed: File $filename not found${NC}"
+ exit 1
+ fi
+
+ # Check file size (a valid archive should be well over 100KB)
+ local file_size
+ if [[ "$OSTYPE" == "darwin"* ]]; then
+ file_size=$(stat -f%z "$filename" 2>/dev/null || echo "0")
+ else
+ file_size=$(stat -c%s "$filename" 2>/dev/null || echo "0")
+ fi
+
+ if [[ "$file_size" -lt 100000 ]]; then # Less than 100KB
+ echo -e "${RED}Download failed: File is too small ($file_size bytes). This usually means a redirect or error page was downloaded.${NC}"
+ echo -e "${YELLOW}Contents of downloaded file:${NC}"
+ head -10 "$filename" 2>/dev/null || echo "Cannot read file"
+ rm -f "$filename"
+ exit 1
+ fi
+
+ # Extract the archive
+ echo "Extracting $filename..."
+ $extract_cmd "$filename"
+
+ if [[ $? -ne 0 ]]; then
+ echo -e "${RED}Failed to extract $filename${NC}"
+ rm -f "$filename"
+ exit 1
+ fi
+
+ # Create symbolic link
+ ln -sf "async-profiler-$version-$platform" current
+
+ # Clean up archive
+ rm -f "$filename"
+
+ cd - > /dev/null
+ echo -e "${GREEN}Async-profiler downloaded successfully to $profiler_dir${NC}"
+ else
+ echo -e "${GREEN}Async-profiler already available at $profiler_dir${NC}"
+ fi
+}
+
+detect_os_arch
+download_profiler "$PLATFORM" "$PROFILER_DIR"
+
+# Create results directory if it doesn't exist (inside profiler directory)
+RESULTS_DIR="$PROFILER_DIR/results"
+mkdir -p "$RESULTS_DIR"
+
+# Function to check platform capabilities
+check_platform_capabilities() {
+ echo -e "${BLUE}Platform Capabilities Check:${NC}"
+ echo "-----"
+
+ if [[ "$OSTYPE" == "darwin"* ]]; then
+ echo -e "${YELLOW}Platform: macOS${NC}"
+ echo "✅ CPU profiling (limited to user space)"
+ echo "✅ Memory allocation profiling"
+ echo "✅ Native memory profiling (NEW: Full support on macOS in v4.1)"
+ echo "✅ Lock contention profiling"
+ echo "✅ Wall clock profiling"
+ echo "✅ JFR format with OpenTelemetry support"
+ echo "✅ JDK 25 compatibility"
+ echo "❌ Hardware performance counters (Linux only)"
+ echo ""
+ echo -e "${BLUE}Note: macOS profiling is limited to user-space code only${NC}"
+ else
+ echo -e "${YELLOW}Platform: Linux${NC}"
+ echo "✅ Full CPU profiling (user + kernel space)"
+ echo "✅ Memory allocation profiling"
+ echo "✅ Native memory profiling (enhanced with jemalloc support)"
+ echo "✅ Lock contention profiling"
+ echo "✅ Wall clock profiling"
+ echo "✅ Hardware performance counters"
+ echo "✅ JFR format with OpenTelemetry support"
+ echo "✅ JDK 25 compatibility"
+
+ # Check if running as root or with proper permissions
+ if [[ $EUID -eq 0 ]]; then
+ echo "✅ Running with elevated privileges"
+ else
+ echo -e "${YELLOW}⚠️ For optimal profiling, consider running with elevated privileges${NC}"
+ fi
+ fi
+ echo ""
+}
+
+# Check platform capabilities
+check_platform_capabilities
+
+# Function to show profiling menu
+show_profiling_menu() {
+ echo ""
+ echo -e "${BLUE}===========================================${NC}"
+ echo -e "${YELLOW}Profiling Options for PID: $PID${NC}"
+ echo -e "${YELLOW}Process: $PROCESS_NAME${NC}"
+ echo -e "${YELLOW}Problem Category: $PROBLEM_CATEGORY${NC}"
+ echo -e "${YELLOW}Suggested Approach: $SUGGESTED_PROFILE${NC}"
+ echo -e "${BLUE}===========================================${NC}"
+
+ # Show recommended options first based on problem category
+ case $SUGGESTED_PROFILE in
+ "Memory")
+ echo -e "${GREEN}*** RECOMMENDED FOR MEMORY LEAK DETECTION ***${NC}"
+ echo "2. Memory Allocation Profiling (30s) [Async-Profiler] ⭐"
+ echo "8. Memory Leak Detection (5min) [Async-Profiler] ⭐"
+ echo "5. Native Memory Profiling (30s) [Async-Profiler] ⭐"
+ echo "12. All Events Profiling (30s) [Async-Profiler] ⭐"
+ echo ""
+ echo -e "${BLUE}Other Options:${NC}"
+ ;;
+ "CPU")
+ echo -e "${GREEN}*** RECOMMENDED FOR YOUR PROBLEM ***${NC}"
+ echo "1. CPU Profiling (30s) [Async-Profiler] ⭐"
+ echo "7. Custom Duration CPU Profiling [Async-Profiler] ⭐"
+ echo "12. All Events Profiling (30s) [Async-Profiler] ⭐"
+ echo ""
+ echo -e "${BLUE}Other Options:${NC}"
+ ;;
+ "Lock/Threading")
+ echo -e "${GREEN}*** RECOMMENDED FOR YOUR PROBLEM ***${NC}"
+ echo "3. Lock Contention Profiling (30s) [Async-Profiler] ⭐"
+ echo "4. Wall Clock Profiling (30s) [Async-Profiler] ⭐"
+ echo ""
+ echo -e "${BLUE}Other Options:${NC}"
+ ;;
+ "GC Analysis")
+ echo -e "${GREEN}*** RECOMMENDED FOR YOUR PROBLEM ***${NC}"
+ echo "19. Garbage Collection Log (custom duration) [jcmd/jstat] ⭐"
+ echo "11. Interactive Heatmap (60s) [Async-Profiler + JFRconv] ⭐"
+ echo "2. Memory Allocation Profiling (30s) [Async-Profiler] ⭐"
+ echo ""
+ echo -e "${BLUE}Other Options:${NC}"
+ ;;
+ "I/O Analysis")
+ echo -e "${GREEN}*** RECOMMENDED FOR YOUR PROBLEM ***${NC}"
+ echo "4. Wall Clock Profiling (30s) [Async-Profiler] ⭐"
+ echo "11. Interactive Heatmap (60s) [Async-Profiler + JFRconv] ⭐"
+ echo ""
+ echo -e "${BLUE}Other Options:${NC}"
+ ;;
+ esac
+
+ echo ""
+ echo -e "${GREEN}=== ASYNC-PROFILER OPTIONS ===${NC}"
+ echo "1. CPU Profiling (30s) [Async-Profiler]"
+ echo "2. Memory Allocation Profiling (30s) [Async-Profiler]"
+ echo "3. Lock Contention Profiling (30s) [Async-Profiler]"
+ echo "4. Wall Clock Profiling (30s) [Async-Profiler]"
+ echo "5. Native Memory Profiling (30s) [Async-Profiler] - NEW: Full macOS support in v4.1"
+ echo "6. Inverted Flame Graph (30s) [Async-Profiler] - NEW in v4.0"
+ echo "7. Custom Duration CPU Profiling [Async-Profiler]"
+ echo "8. Memory Leak Detection (5min) [Async-Profiler] - Enhanced in v4.1"
+ echo "9. Complete Memory Analysis Workflow [Async-Profiler]"
+ echo "10. JFR Recording (custom duration) [Async-Profiler]"
+ echo "11. Interactive Heatmap (60s) [Async-Profiler + JFRconv] - NEW in v4.0"
+ echo "12. All Events Profiling (30s) [Async-Profiler] - NEW in v4.1"
+ echo "13. OpenTelemetry OTLP Export [Async-Profiler] - NEW in v4.1"
+ echo ""
+ echo -e "${BLUE}=== JCMD OPTIONS ===${NC}"
+ echo "14. Enhanced JFR Memory Profiling (Java 21+) [jcmd] - NEW"
+ echo "15. Java 25 CPU-Time Profiling (Linux only) [jcmd] - NEW"
+ echo "16. Java 25 Method Tracing [jcmd] - NEW"
+ echo "17. Advanced JFR with Custom Events [jcmd] - NEW"
+ echo "18. JFR Memory Leak Analysis with TLAB tracking [jcmd] - NEW"
+ echo "19. Garbage Collection Log (custom duration) [jcmd/jstat]"
+ echo ""
+ echo -e "${YELLOW}=== SYSTEM TOOLS ===${NC}"
+ echo "20. Thread Dump (instant snapshot) [jstack/Async-Profiler]"
+ echo "21. View recent results [Built-in File Explorer]"
+ echo "22. Kill current process [kill command] - NEW"
+ echo "0. Exit profiler"
+ echo -e "${BLUE}===========================================${NC}"
+}
+
+# Function to handle profiling errors
+handle_profiling_error() {
+ local exit_code=$1
+ local profiling_type=$2
+
+ if [[ $exit_code -ne 0 ]]; then
+ echo -e "${RED}Profiling failed with exit code: $exit_code${NC}"
+ echo -e "${YELLOW}Common solutions:${NC}"
+
+ if [[ "$OSTYPE" == "darwin"* ]]; then
+ echo "• On macOS, some profiling modes have limitations"
+ echo "• Try using --all-user flag for CPU profiling"
+ echo "• Use allocation profiling instead of native memory profiling"
+ else
+ echo "• Check if you have sufficient permissions:"
+ echo " sudo sysctl kernel.perf_event_paranoid=1"
+ echo " sudo sysctl kernel.kptr_restrict=0"
+ echo "• Try running with sudo for full system profiling"
+ echo "• Use --all-user flag to profile only user-space code"
+ fi
+
+ echo -e "${BLUE}Alternative: Try option 2 (Memory Allocation Profiling) which works on all platforms${NC}"
+ return 1
+ fi
+ return 0
+}
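+
+# Optional helper (sketch, not part of the original flow): pre-check the Linux
+# perf_event_paranoid setting that most commonly blocks CPU profiling. The
+# /proc path below is the standard location; the argument exists only so the
+# check can be exercised against a test file.
+check_perf_permissions() {
+ local paranoid_file="${1:-/proc/sys/kernel/perf_event_paranoid}"
+ if [[ -r "$paranoid_file" ]]; then
+ local level
+ level=$(cat "$paranoid_file")
+ if [[ "$level" -gt 1 ]]; then
+ echo -e "${YELLOW}perf_event_paranoid=$level may block kernel profiling; consider --all-user${NC}"
+ return 1
+ fi
+ fi
+ return 0
+}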
+
+# Function to detect Java version
+detect_java_version() {
+ local pid=$1
+ local java_version
+
+ # Try to get Java version from the running process
+ if command -v jcmd >/dev/null 2>&1; then
+ java_version=$(jcmd "$pid" VM.version 2>/dev/null | grep -E "Java|OpenJDK" | head -1)
+ if [[ "$java_version" =~ ([0-9]+) ]]; then
+ echo "${BASH_REMATCH[1]}"
+ return 0
+ fi
+ fi
+
+ # Fallback: try to detect from process command line
+ if command -v ps >/dev/null 2>&1; then
+ local java_cmd=$(ps -p "$pid" -o args= 2>/dev/null | head -1)
+ if [[ "$java_cmd" =~ java.*/([0-9]+) ]]; then
+ echo "${BASH_REMATCH[1]}"
+ return 0
+ fi
+ fi
+
+ # Default assumption for modern systems
+ echo "17"
+}
+
+# Function to execute profiling
+execute_profiling() {
+ local option=$1
+ local java_version=$(detect_java_version "$PID")
+
+ case $option in
+ 1)
+ echo -e "${GREEN}Starting CPU profiling for 30 seconds...${NC}"
+ echo -e "${BLUE}Enhanced in v4.1: Records which CPU core each sample was taken on${NC}"
+ # On macOS, add --all-user flag to avoid kernel profiling issues
+ if [[ "$OSTYPE" == "darwin"* ]]; then
+ "$PROFILER_DIR/current/bin/asprof" -d 30 --all-user -f "$RESULTS_DIR/cpu-flamegraph-$(date +%Y%m%d-%H%M%S).html" "$PID"
+ handle_profiling_error $? "CPU (macOS)"
+ else
+ "$PROFILER_DIR/current/bin/asprof" -d 30 -f "$RESULTS_DIR/cpu-flamegraph-$(date +%Y%m%d-%H%M%S).html" "$PID"
+ handle_profiling_error $? "CPU (Linux)"
+ fi
+ ;;
+ 2)
+ echo -e "${GREEN}Starting memory allocation profiling for 30 seconds...${NC}"
+ "$PROFILER_DIR/current/bin/asprof" -e alloc -d 30 -f "$RESULTS_DIR/allocation-flamegraph-$(date +%Y%m%d-%H%M%S).html" "$PID"
+ ;;
+ 3)
+ echo -e "${GREEN}Starting lock contention profiling for 30 seconds...${NC}"
+ "$PROFILER_DIR/current/bin/asprof" -e lock -d 30 -f "$RESULTS_DIR/lock-flamegraph-$(date +%Y%m%d-%H%M%S).html" "$PID"
+ ;;
+ 4)
+ echo -e "${GREEN}Starting wall clock profiling for 30 seconds...${NC}"
+ "$PROFILER_DIR/current/bin/asprof" -e wall -d 30 -f "$RESULTS_DIR/wall-flamegraph-$(date +%Y%m%d-%H%M%S).html" "$PID"
+ ;;
+ 5)
+ echo -e "${GREEN}Starting native memory profiling for 30 seconds...${NC}"
+ if [[ "$OSTYPE" == "darwin"* ]]; then
+ echo -e "${BLUE}Using native memory profiling with full macOS support (v4.1)...${NC}"
+ else
+ echo -e "${BLUE}Using native memory profiling with jemalloc support...${NC}"
+ fi
+ "$PROFILER_DIR/current/bin/asprof" -e nativemem -d 30 -f "$RESULTS_DIR/native-memory-$(date +%Y%m%d-%H%M%S).html" "$PID"
+ ;;
+ 6)
+ echo -e "${GREEN}Starting inverted flame graph profiling for 30 seconds...${NC}"
+ "$PROFILER_DIR/current/bin/asprof" -d 30 --inverted -f "$RESULTS_DIR/inverted-flamegraph-$(date +%Y%m%d-%H%M%S).html" "$PID"
+ ;;
+ 7)
+ read -p "Enter duration in seconds: " DURATION
+ echo -e "${GREEN}Starting CPU profiling for $DURATION seconds...${NC}"
+ # On macOS, add --all-user flag to avoid kernel profiling issues
+ if [[ "$OSTYPE" == "darwin"* ]]; then
+ "$PROFILER_DIR/current/bin/asprof" -d "$DURATION" --all-user -f "$RESULTS_DIR/cpu-flamegraph-$(date +%Y%m%d-%H%M%S).html" "$PID"
+ else
+ "$PROFILER_DIR/current/bin/asprof" -d "$DURATION" -f "$RESULTS_DIR/cpu-flamegraph-$(date +%Y%m%d-%H%M%S).html" "$PID"
+ fi
+ ;;
+ 8)
+ echo -e "${GREEN}Starting Memory Leak Detection (5 minutes)...${NC}"
+ echo -e "${YELLOW}This will run for 5 minutes to capture memory leak patterns${NC}"
+ echo -e "${BLUE}Using enhanced leak detection: focusing on allocation patterns over time${NC}"
+ "$PROFILER_DIR/current/bin/asprof" -e alloc -d 300 --alloc 1m -f "$RESULTS_DIR/memory-leak-$(date +%Y%m%d-%H%M%S).html" "$PID"
+ ;;
+ 9)
+ echo -e "${GREEN}Starting Complete Memory Analysis Workflow...${NC}"
+ TIMESTAMP=$(date +%Y%m%d-%H%M%S)
+
+ echo -e "${BLUE}Step 1: Quick memory allocation baseline (30s)...${NC}"
+ "$PROFILER_DIR/current/bin/asprof" -e alloc -d 30 -f "$RESULTS_DIR/memory-baseline-$TIMESTAMP.html" "$PID"
+
+ echo -e "${BLUE}Step 2: Heap profiling (60s)...${NC}"
+ "$PROFILER_DIR/current/bin/asprof" -e alloc -d 60 -f "$RESULTS_DIR/heap-analysis-$TIMESTAMP.html" "$PID"
+
+ echo -e "${BLUE}Step 3: Memory leak detection (5min) - Long-term allocation analysis...${NC}"
+ "$PROFILER_DIR/current/bin/asprof" -e alloc -d 300 --alloc 1m -f "$RESULTS_DIR/memory-leak-complete-$TIMESTAMP.html" "$PID"
+
+ echo -e "${GREEN}Complete memory analysis finished! Check these files:${NC}"
+ echo "- memory-baseline-$TIMESTAMP.html (30s baseline)"
+ echo "- heap-analysis-$TIMESTAMP.html (60s detailed heap)"
+ echo "- memory-leak-complete-$TIMESTAMP.html (5min leak detection)"
+ ;;
+ 10)
+ echo -e "${GREEN}Creating JFR Recording...${NC}"
+ read -p "Enter duration in seconds (default: 60): " JFR_DURATION
+ JFR_DURATION=${JFR_DURATION:-60}
+
+ if ! [[ "$JFR_DURATION" =~ ^[0-9]+$ ]] || [ "$JFR_DURATION" -lt 1 ]; then
+ echo -e "${RED}Invalid duration. Using default 60 seconds.${NC}"
+ JFR_DURATION=60
+ fi
+
+ TIMESTAMP=$(date +%Y%m%d-%H%M%S)
+ JFR_FILE="$RESULTS_DIR/recording-${JFR_DURATION}s-$TIMESTAMP.jfr"
+
+ echo -e "${BLUE}Starting JFR recording for $JFR_DURATION seconds...${NC}"
+ "$PROFILER_DIR/current/bin/asprof" -d "$JFR_DURATION" -o jfr -f "$JFR_FILE" "$PID"
+
+ if [ $? -eq 0 ]; then
+ echo -e "${GREEN}JFR recording completed: $JFR_FILE${NC}"
+ echo -e "${YELLOW}You can analyze this JFR file with:${NC}"
+ echo " - JProfiler"
+ echo " - VisualVM"
+ echo " - Mission Control"
+ echo " - jfrconv (included with async-profiler)"
+ echo " - NEW in v4.1: Convert to OTLP format for OpenTelemetry integration"
+ else
+ echo -e "${RED}JFR recording failed${NC}"
+ fi
+ ;;
+ 11)
+ echo -e "${GREEN}Starting interactive heatmap profiling for 60 seconds...${NC}"
+ TIMESTAMP=$(date +%Y%m%d-%H%M%S)
+ JFR_FILE="$RESULTS_DIR/profile-$TIMESTAMP.jfr"
+ HEATMAP_FILE="$RESULTS_DIR/heatmap-cpu-$TIMESTAMP.html"
+
+ echo -e "${BLUE}Step 1: Generating JFR recording...${NC}"
+ "$PROFILER_DIR/current/bin/asprof" -d 60 -o jfr -f "$JFR_FILE" "$PID"
+
+ echo -e "${BLUE}Step 2: Converting JFR to heatmap...${NC}"
+ # jfrconv is now a shell script in v4.1
+ bash "$PROFILER_DIR/current/bin/jfrconv" --cpu -o heatmap "$JFR_FILE" "$HEATMAP_FILE"
+ echo -e "${GREEN}Heatmap generated: $HEATMAP_FILE${NC}"
+ ;;
+ 12)
+ echo -e "${GREEN}Starting All Events Profiling for 30 seconds...${NC}"
+ echo -e "${BLUE}This will collect all possible events simultaneously (CPU, alloc, lock, wall)${NC}"
+ TIMESTAMP=$(date +%Y%m%d-%H%M%S)
+ JFR_FILE="$RESULTS_DIR/all-events-$TIMESTAMP.jfr"
+ HTML_FILE="$RESULTS_DIR/all-events-$TIMESTAMP.html"
+
+ echo -e "${BLUE}Step 1: Recording all events to JFR format...${NC}"
+ "$PROFILER_DIR/current/bin/asprof" --all -d 30 -o jfr -f "$JFR_FILE" "$PID"
+
+ if [ $? -eq 0 ] && [ -f "$JFR_FILE" ]; then
+ echo -e "${BLUE}Step 2: Converting JFR to HTML flame graph...${NC}"
+ # Use the converter tool to create flame graph from JFR
+ if [ -f "$PROFILER_DIR/current/bin/converter.jar" ]; then
+ java -jar "$PROFILER_DIR/current/bin/converter.jar" jfr2flame "$JFR_FILE" "$HTML_FILE"
+ elif [ -f "$PROFILER_DIR/current/bin/jfrconv" ]; then
+ bash "$PROFILER_DIR/current/bin/jfrconv" -o html "$JFR_FILE" "$HTML_FILE"
+ else
+ echo -e "${YELLOW}JFR converter not found, skipping HTML conversion${NC}"
+ echo -e "${BLUE}You can analyze the JFR file with JProfiler, VisualVM, or Mission Control${NC}"
+ fi
+
+ echo -e "${GREEN}All events profiling completed!${NC}"
+ echo -e "${YELLOW}Generated files:${NC}"
+ echo " - JFR recording: $JFR_FILE"
+ if [ -f "$HTML_FILE" ]; then
+ echo " - HTML flame graph: $HTML_FILE"
+ else
+ echo " - HTML conversion skipped (use JFR analysis tools instead)"
+ fi
+ else
+ echo -e "${RED}All events profiling failed${NC}"
+ fi
+ ;;
+ 13)
+ echo -e "${GREEN}Starting OpenTelemetry OTLP Export...${NC}"
+ read -p "Enter duration in seconds (default: 60): " OTLP_DURATION
+ OTLP_DURATION=${OTLP_DURATION:-60}
+
+ if ! [[ "$OTLP_DURATION" =~ ^[0-9]+$ ]] || [ "$OTLP_DURATION" -lt 1 ]; then
+ echo -e "${RED}Invalid duration. Using default 60 seconds.${NC}"
+ OTLP_DURATION=60
+ fi
+
+ TIMESTAMP=$(date +%Y%m%d-%H%M%S)
+ OTLP_FILE="$RESULTS_DIR/otlp-export-$TIMESTAMP.json"
+
+ echo -e "${BLUE}Generating OTLP format for OpenTelemetry integration...${NC}"
+ "$PROFILER_DIR/current/bin/asprof" -d "$OTLP_DURATION" -o otlp -f "$OTLP_FILE" "$PID"
+
+ if [ $? -eq 0 ]; then
+ echo -e "${GREEN}OTLP export completed: $OTLP_FILE${NC}"
+ echo -e "${YELLOW}This file can be imported into OpenTelemetry-compatible systems${NC}"
+ echo -e "${BLUE}File size: $(wc -c < "$OTLP_FILE" 2>/dev/null || echo "unknown") bytes${NC}"
+ else
+ echo -e "${RED}OTLP export failed${NC}"
+ fi
+ ;;
+ 14)
+ echo -e "${GREEN}Starting Enhanced JFR Memory Profiling (Java 21+)...${NC}"
+ if [ "$java_version" -lt 21 ]; then
+ echo -e "${YELLOW}Warning: This feature requires Java 21+. Current version: $java_version${NC}"
+ echo -e "${BLUE}Falling back to standard memory profiling...${NC}"
+ "$PROFILER_DIR/current/bin/asprof" -e alloc -d 60 -f "$RESULTS_DIR/memory-fallback-$(date +%Y%m%d-%H%M%S).html" "$PID"
+ return
+ fi
+
+ read -p "Enter duration in seconds (default: 120): " MEMORY_DURATION
+ MEMORY_DURATION=${MEMORY_DURATION:-120}
+
+ TIMESTAMP=$(date +%Y%m%d-%H%M%S)
+ JFR_FILE="$RESULTS_DIR/enhanced-memory-$TIMESTAMP.jfr"
+
+ echo -e "${BLUE}Starting enhanced JFR memory profiling with modern events...${NC}"
+
+ # Use jcmd for more precise control over JFR events
+ echo -e "${BLUE}Step 1: Starting JFR recording with memory-specific events...${NC}"
+ jcmd "$PID" JFR.start name=memory-analysis duration="${MEMORY_DURATION}s" \
+ settings=profile \
+ "jdk.ObjectAllocationInNewTLAB#enabled=true" \
+ "jdk.ObjectAllocationOutsideTLAB#enabled=true" \
+ "jdk.ObjectAllocationSample#enabled=true" \
+ "jdk.GCHeapSummary#enabled=true" \
+ "jdk.GCConfiguration#enabled=true" \
+ "jdk.YoungGenerationConfiguration#enabled=true" \
+ "jdk.OldGenerationConfiguration#enabled=true" \
+ "jdk.GCCollectorG1GC#enabled=true" \
+ "jdk.G1HeapRegionInformation#enabled=true" \
+ filename="$JFR_FILE" 2>/dev/null
+
+ if [ $? -eq 0 ]; then
+ echo -e "${GREEN}JFR memory recording started successfully${NC}"
+ echo "Recording for $MEMORY_DURATION seconds..."
+
+ # Show progress, then sleep any remainder so the full duration is covered
+ for i in $(seq 1 $((MEMORY_DURATION/10))); do
+ echo -n "."
+ sleep 10
+ done
+ sleep $((MEMORY_DURATION % 10))
+ echo ""
+
+ # Stop recording
+ jcmd "$PID" JFR.stop name=memory-analysis 2>/dev/null
+ echo -e "${GREEN}Enhanced JFR memory profiling completed: $JFR_FILE${NC}"
+ else
+ echo -e "${YELLOW}jcmd failed, falling back to async-profiler...${NC}"
+ "$PROFILER_DIR/current/bin/asprof" -e alloc -d "$MEMORY_DURATION" -f "$RESULTS_DIR/memory-enhanced-fallback-$TIMESTAMP.html" "$PID"
+ fi
+ ;;
+ 15)
+ echo -e "${GREEN}Starting Java 25 CPU-Time Profiling (Experimental)...${NC}"
+ if [ "$java_version" -lt 25 ]; then
+ echo -e "${YELLOW}Warning: This feature requires Java 25+. Current version: $java_version${NC}"
+ echo -e "${BLUE}Falling back to standard CPU profiling...${NC}"
+ "$PROFILER_DIR/current/bin/asprof" -d 60 -f "$RESULTS_DIR/cpu-fallback-$(date +%Y%m%d-%H%M%S).html" "$PID"
+ return
+ fi
+
+ if [[ "$OSTYPE" != "linux"* ]]; then
+ echo -e "${RED}Error: Java 25 CPU-Time profiling is only available on Linux${NC}"
+ echo -e "${BLUE}Falling back to standard CPU profiling...${NC}"
+ "$PROFILER_DIR/current/bin/asprof" -d 60 -f "$RESULTS_DIR/cpu-fallback-$(date +%Y%m%d-%H%M%S).html" "$PID"
+ return
+ fi
+
+ read -p "Enter duration in seconds (default: 60): " CPU_DURATION
+ CPU_DURATION=${CPU_DURATION:-60}
+
+ TIMESTAMP=$(date +%Y%m%d-%H%M%S)
+ JFR_FILE="$RESULTS_DIR/cpu-time-java25-$TIMESTAMP.jfr"
+
+ echo -e "${BLUE}Starting Java 25 CPU-Time sampling (includes native code)...${NC}"
+
+ # Use jcmd to enable the experimental CPU-time sampling
+ jcmd "$PID" JFR.start name=cpu-time-analysis duration="${CPU_DURATION}s" \
+ "jdk.CPUTimeSample#enabled=true" \
+ "jdk.ExecutionSample#enabled=true" \
+ "jdk.NativeMethodSample#enabled=true" \
+ "jdk.ThreadCPULoad#enabled=true" \
+ filename="$JFR_FILE" 2>/dev/null
+
+ if [ $? -eq 0 ]; then
+ echo -e "${GREEN}Java 25 CPU-time recording started successfully${NC}"
+ echo "Recording CPU time (including native code) for $CPU_DURATION seconds..."
+
+ # Show progress, then sleep any remainder so the full duration is covered
+ for i in $(seq 1 $((CPU_DURATION/5))); do
+ echo -n "."
+ sleep 5
+ done
+ sleep $((CPU_DURATION % 5))
+ echo ""
+
+ # Stop recording
+ jcmd "$PID" JFR.stop name=cpu-time-analysis 2>/dev/null
+ echo -e "${GREEN}Java 25 CPU-time profiling completed: $JFR_FILE${NC}"
+ echo -e "${BLUE}This recording includes native code execution time${NC}"
+ else
+ echo -e "${YELLOW}Java 25 CPU-time profiling failed, falling back to async-profiler...${NC}"
+ "$PROFILER_DIR/current/bin/asprof" -d "$CPU_DURATION" -f "$RESULTS_DIR/cpu-time-fallback-$TIMESTAMP.html" "$PID"
+ fi
+ ;;
+ 16)
+ echo -e "${GREEN}Starting Java 25 Method Tracing...${NC}"
+ if [ "$java_version" -lt 25 ]; then
+ echo -e "${YELLOW}Warning: This feature requires Java 25+. Current version: $java_version${NC}"
+ echo -e "${BLUE}Falling back to standard profiling...${NC}"
+ "$PROFILER_DIR/current/bin/asprof" -d 60 -f "$RESULTS_DIR/method-fallback-$(date +%Y%m%d-%H%M%S).html" "$PID"
+ return
+ fi
+
+ read -p "Enter duration in seconds (default: 30): " METHOD_DURATION
+ METHOD_DURATION=${METHOD_DURATION:-30}
+
+ echo -e "${YELLOW}Enter method pattern to trace (e.g., 'com.example.*' or '*' for all):${NC}"
+ read -p "Method pattern: " METHOD_PATTERN
+ METHOD_PATTERN=${METHOD_PATTERN:-"*"}
+
+ TIMESTAMP=$(date +%Y%m%d-%H%M%S)
+ JFR_FILE="$RESULTS_DIR/method-trace-java25-$TIMESTAMP.jfr"
+
+ echo -e "${BLUE}Starting Java 25 method tracing for pattern: $METHOD_PATTERN${NC}"
+
+ # Use jcmd to enable method tracing events. Each event setting must be
+ # passed as its own event#setting=value argument. Note: METHOD_PATTERN is
+ # informational only here; the events below sample methods globally.
+ jcmd "$PID" JFR.start name=method-trace-analysis duration="${METHOD_DURATION}s" \
+ "jdk.MethodSample#enabled=true" \
+ "jdk.MethodEntry#enabled=true" \
+ "jdk.MethodEntry#threshold=1ms" \
+ "jdk.MethodExit#enabled=true" \
+ "jdk.MethodExit#threshold=1ms" \
+ "jdk.ExecutionSample#enabled=true" \
+ filename="$JFR_FILE" 2>/dev/null
+
+ if [ $? -eq 0 ]; then
+ echo -e "${GREEN}Java 25 method tracing started successfully${NC}"
+ echo "Tracing methods matching '$METHOD_PATTERN' for $METHOD_DURATION seconds..."
+
+ # Show progress
+ for i in $(seq 1 $METHOD_DURATION); do
+ echo -n "."
+ sleep 1
+ done
+ echo ""
+
+ # Stop recording
+ jcmd "$PID" JFR.stop name=method-trace-analysis 2>/dev/null
+ echo -e "${GREEN}Java 25 method tracing completed: $JFR_FILE${NC}"
+ echo -e "${BLUE}This recording includes detailed method entry/exit timing${NC}"
+ else
+ echo -e "${YELLOW}Java 25 method tracing failed, falling back to async-profiler...${NC}"
+ "$PROFILER_DIR/current/bin/asprof" -d "$METHOD_DURATION" -f "$RESULTS_DIR/method-trace-fallback-$TIMESTAMP.html" "$PID"
+ fi
+ ;;
+ 17)
+ echo -e "${GREEN}Starting Advanced JFR with Custom Events...${NC}"
+
+ read -p "Enter duration in seconds (default: 120): " CUSTOM_DURATION
+ CUSTOM_DURATION=${CUSTOM_DURATION:-120}
+
+ read -p "Enter max recording size (e.g., 100MB, default: 500MB): " MAX_SIZE
+ MAX_SIZE=${MAX_SIZE:-"500MB"}
+
+ read -p "Enter max age for events (e.g., 5m, 1h, default: 10m): " MAX_AGE
+ MAX_AGE=${MAX_AGE:-"10m"}
+
+ TIMESTAMP=$(date +%Y%m%d-%H%M%S)
+ JFR_FILE="$RESULTS_DIR/advanced-custom-$TIMESTAMP.jfr"
+
+ echo -e "${BLUE}Starting advanced JFR recording with custom configuration...${NC}"
+ echo -e "${BLUE}Max size: $MAX_SIZE, Max age: $MAX_AGE${NC}"
+
+ # Create a comprehensive JFR recording; each event setting is passed as
+ # its own event#setting=value argument
+ jcmd "$PID" JFR.start name=advanced-analysis duration="${CUSTOM_DURATION}s" \
+ disk=true maxsize="$MAX_SIZE" maxage="$MAX_AGE" \
+ "jdk.ObjectAllocationInNewTLAB#enabled=true" \
+ "jdk.ObjectAllocationOutsideTLAB#enabled=true" \
+ "jdk.ObjectAllocationSample#enabled=true" \
+ "jdk.ObjectAllocationSample#throttle=1000/s" \
+ "jdk.GCHeapSummary#enabled=true" \
+ "jdk.GCConfiguration#enabled=true" \
+ "jdk.ExecutionSample#enabled=true" \
+ "jdk.ThreadCPULoad#enabled=true" \
+ "jdk.ThreadContextSwitchRate#enabled=true" \
+ "jdk.JavaMonitorEnter#enabled=true" \
+ "jdk.JavaMonitorEnter#threshold=10ms" \
+ "jdk.JavaMonitorWait#enabled=true" \
+ "jdk.JavaMonitorWait#threshold=10ms" \
+ "jdk.ThreadPark#enabled=true" \
+ "jdk.ThreadPark#threshold=10ms" \
+ "jdk.SocketRead#enabled=true" \
+ "jdk.SocketRead#threshold=10ms" \
+ "jdk.SocketWrite#enabled=true" \
+ "jdk.SocketWrite#threshold=10ms" \
+ "jdk.FileRead#enabled=true" \
+ "jdk.FileRead#threshold=10ms" \
+ "jdk.FileWrite#enabled=true" \
+ "jdk.FileWrite#threshold=10ms" \
+ filename="$JFR_FILE" 2>/dev/null
+
+ if [ $? -eq 0 ]; then
+ echo -e "${GREEN}Advanced JFR recording started successfully${NC}"
+ echo "Recording comprehensive events for $CUSTOM_DURATION seconds..."
+
+ # Show progress, then sleep any remainder so the full duration is covered
+ for i in $(seq 1 $((CUSTOM_DURATION/5))); do
+ echo -n "."
+ sleep 5
+ done
+ sleep $((CUSTOM_DURATION % 5))
+ echo ""
+
+ # Stop recording
+ jcmd "$PID" JFR.stop name=advanced-analysis 2>/dev/null
+ echo -e "${GREEN}Advanced JFR recording completed: $JFR_FILE${NC}"
+ echo -e "${BLUE}This recording includes I/O, locking, GC, and allocation events${NC}"
+ else
+ echo -e "${YELLOW}Advanced JFR recording failed, falling back to async-profiler...${NC}"
+ "$PROFILER_DIR/current/bin/asprof" --all -d "$CUSTOM_DURATION" -o jfr -f "$RESULTS_DIR/advanced-fallback-$TIMESTAMP.jfr" "$PID"
+ fi
+ ;;
+ 18)
+ echo -e "${GREEN}Starting JFR Memory Leak Analysis with TLAB tracking...${NC}"
+
+ read -p "Enter duration in minutes (default: 10): " LEAK_DURATION_MIN
+ LEAK_DURATION_MIN=${LEAK_DURATION_MIN:-10}
+ LEAK_DURATION=$((LEAK_DURATION_MIN * 60))
+
+ TIMESTAMP=$(date +%Y%m%d-%H%M%S)
+ JFR_FILE="$RESULTS_DIR/memory-leak-tlab-$TIMESTAMP.jfr"
+
+ echo -e "${BLUE}Starting comprehensive memory leak analysis...${NC}"
+ echo -e "${BLUE}Duration: $LEAK_DURATION_MIN minutes${NC}"
+
+ # Start a JFR recording focused on memory leak detection; each event
+ # setting is passed as its own event#setting=value argument
+ jcmd "$PID" JFR.start name=memory-leak-analysis duration="${LEAK_DURATION}s" \
+ disk=true maxsize="1GB" maxage="30m" \
+ "jdk.ObjectAllocationInNewTLAB#enabled=true" \
+ "jdk.ObjectAllocationInNewTLAB#stackTrace=true" \
+ "jdk.ObjectAllocationOutsideTLAB#enabled=true" \
+ "jdk.ObjectAllocationOutsideTLAB#stackTrace=true" \
+ "jdk.ObjectAllocationSample#enabled=true" \
+ "jdk.ObjectAllocationSample#stackTrace=true" \
+ "jdk.TLABAllocation#enabled=true" \
+ "jdk.TLABWaste#enabled=true" \
+ "jdk.GCHeapSummary#enabled=true" \
+ "jdk.GCHeapSummary#period=10s" \
+ "jdk.GCCollectorG1GC#enabled=true" \
+ "jdk.G1HeapRegionInformation#enabled=true" \
+ "jdk.G1HeapRegionTypeChange#enabled=true" \
+ "jdk.OldObjectSample#enabled=true" \
+ "jdk.OldObjectSample#cutoff=0ms" \
+ "jdk.ClassLoaderStatistics#enabled=true" \
+ "jdk.ClassLoaderStatistics#period=30s" \
+ filename="$JFR_FILE" 2>/dev/null
+
+ if [ $? -eq 0 ]; then
+ echo -e "${GREEN}Memory leak analysis recording started successfully${NC}"
+ echo "Analyzing memory allocation patterns for $LEAK_DURATION_MIN minutes..."
+ echo -e "${YELLOW}This will track TLAB allocation, object samples, and heap changes${NC}"
+
+ # Show progress with time remaining
+ for i in $(seq 1 $LEAK_DURATION_MIN); do
+ remaining=$((LEAK_DURATION_MIN - i))
+ echo -e "\\r${BLUE}Recording... ${remaining} minutes remaining${NC}"
+ sleep 60
+ done
+ echo ""
+
+ # Stop recording
+ jcmd "$PID" JFR.stop name=memory-leak-analysis 2>/dev/null
+ echo -e "${GREEN}Memory leak analysis completed: $JFR_FILE${NC}"
+ echo -e "${BLUE}This recording includes detailed TLAB and object allocation tracking${NC}"
+ echo -e "${YELLOW}Analyze with JProfiler, VisualVM, or Mission Control for leak detection${NC}"
+ else
+ echo -e "${YELLOW}JFR memory leak recording failed, falling back to async-profiler...${NC}"
+ "$PROFILER_DIR/current/bin/asprof" -e alloc -d "$LEAK_DURATION" --alloc 1m -f "$RESULTS_DIR/memory-leak-fallback-$TIMESTAMP.html" "$PID"
+ fi
+ ;;
+ 19)
+ echo -e "${GREEN}Collecting Garbage Collection Logs...${NC}"
+ read -p "Enter duration in seconds (default: 300): " GC_DURATION
+ GC_DURATION=${GC_DURATION:-300}
+
+ if ! [[ "$GC_DURATION" =~ ^[0-9]+$ ]] || [ "$GC_DURATION" -lt 1 ]; then
+ echo -e "${RED}Invalid duration. Using default 300 seconds.${NC}"
+ GC_DURATION=300
+ fi
+
+ TIMESTAMP=$(date +%Y%m%d-%H%M%S)
+ GC_LOG_FILE="$RESULTS_DIR/gc-${GC_DURATION}s-$TIMESTAMP.log"
+
+ echo -e "${BLUE}Collecting GC logs for $GC_DURATION seconds...${NC}"
+
+
+ # Try different approaches for GC logging
+ GC_SUCCESS=false
+
+ # Method 1: Try jcmd with unified logging (Java 9+)
+ if command -v jcmd >/dev/null 2>&1; then
+ echo -e "${BLUE}Attempting to enable GC logging via jcmd...${NC}"
+
+ # First, try to enable GC logging dynamically
+ jcmd "$PID" VM.log output="$GC_LOG_FILE" what=gc 2>/dev/null
+ if [ $? -eq 0 ]; then
+ echo -e "${GREEN}GC logging enabled via jcmd${NC}"
+ echo "Collecting logs for $GC_DURATION seconds..."
+ sleep "$GC_DURATION"
+
+ # Disable logging
+ jcmd "$PID" VM.log output=none what=gc 2>/dev/null
+ GC_SUCCESS=true
+ fi
+ fi
+
+ # Method 2: Use jstat if jcmd failed
+ if [ "$GC_SUCCESS" = false ]; then
+ echo -e "${YELLOW}jcmd approach failed, using jstat for GC monitoring...${NC}"
+
+ if command -v jstat >/dev/null 2>&1; then
+ echo -e "${BLUE}Using jstat to collect GC statistics...${NC}"
+
+ # Create header
+ echo "# GC Statistics collected via jstat" > "$GC_LOG_FILE"
+ echo "# Timestamp,S0C,S1C,S0U,S1U,EC,EU,OC,OU,MC,MU,CCSC,CCSU,YGC,YGCT,FGC,FGCT,GCT" >> "$GC_LOG_FILE"
+
+ # Collect GC stats every second for the specified duration
+ END_TIME=$(($(date +%s) + GC_DURATION))
+
+ while [ $(date +%s) -lt $END_TIME ]; do
+ TIMESTAMP_LOG=$(date '+%Y-%m-%d %H:%M:%S')
+ GC_STATS=$(jstat -gc "$PID" 2>/dev/null | tail -1)
+ if [ -n "$GC_STATS" ]; then
+ echo "$TIMESTAMP_LOG,$GC_STATS" >> "$GC_LOG_FILE"
+ fi
+ sleep 1
+ done
+
+ if [ -s "$GC_LOG_FILE" ]; then
+ GC_SUCCESS=true
+ echo -e "${GREEN}GC statistics collection completed${NC}"
+ fi
+ fi
+ fi
+
+ # Report results
+ if [ "$GC_SUCCESS" = true ] && [ -f "$GC_LOG_FILE" ]; then
+ echo -e "${GREEN}GC log collection completed: $GC_LOG_FILE${NC}"
+ echo -e "${YELLOW}Log summary:${NC}"
+ echo " File: $GC_LOG_FILE"
+ echo " Size: $(wc -l < "$GC_LOG_FILE") lines"
+ echo " Duration: $GC_DURATION seconds"
+
+ # Show a few sample lines
+ echo -e "${BLUE}Sample log entries (first 5 lines):${NC}"
+ head -5 "$GC_LOG_FILE" 2>/dev/null || echo " (No content to preview)"
+ else
+ echo -e "${RED}GC log collection failed${NC}"
+ echo -e "${YELLOW}This might happen if:${NC}"
+ echo " - The JVM doesn't have GC logging enabled"
+ echo " - Insufficient permissions to access GC logs"
+ echo " - The Java version doesn't support dynamic GC logging"
+
+ # Clean up empty file
+ [ -f "$GC_LOG_FILE" ] && [ ! -s "$GC_LOG_FILE" ] && rm -f "$GC_LOG_FILE"
+ fi
+ ;;
+ 20)
+ echo -e "${GREEN}Creating Thread Dump...${NC}"
+ TIMESTAMP=$(date +%Y%m%d-%H%M%S)
+ THREAD_DUMP_FILE="$RESULTS_DIR/threaddump-$TIMESTAMP.txt"
+
+ # Try jstack first (if available), then fall back to async-profiler
+ if command -v jstack >/dev/null 2>&1; then
+ echo -e "${BLUE}Using jstack to generate thread dump...${NC}"
+ jstack "$PID" > "$THREAD_DUMP_FILE" 2>&1
+ JSTACK_EXIT_CODE=$?
+
+ if [ $JSTACK_EXIT_CODE -eq 0 ] && [ -s "$THREAD_DUMP_FILE" ]; then
+ echo -e "${GREEN}Thread dump completed: $THREAD_DUMP_FILE${NC}"
+ else
+ echo -e "${YELLOW}jstack failed, trying async-profiler...${NC}"
+ "$PROFILER_DIR/current/bin/asprof" -d 1 -e cpu -o text -f "$THREAD_DUMP_FILE" "$PID" >/dev/null 2>&1
+ if [ $? -eq 0 ]; then
+ echo -e "${GREEN}Thread dump completed: $THREAD_DUMP_FILE${NC}"
+ else
+ echo -e "${RED}Thread dump failed with both methods${NC}"
+ rm -f "$THREAD_DUMP_FILE"
+ fi
+ fi
+ else
+ echo -e "${BLUE}jstack not available, using async-profiler...${NC}"
+ # Use async-profiler to get thread information
+ "$PROFILER_DIR/current/bin/asprof" -d 1 -e cpu -o text -f "$THREAD_DUMP_FILE" "$PID" >/dev/null 2>&1
+ if [ $? -eq 0 ]; then
+ echo -e "${GREEN}Thread dump completed: $THREAD_DUMP_FILE${NC}"
+ else
+ echo -e "${RED}Thread dump failed${NC}"
+ rm -f "$THREAD_DUMP_FILE"
+ fi
+ fi
+
+ if [ -f "$THREAD_DUMP_FILE" ]; then
+ echo -e "${YELLOW}Thread dump summary:${NC}"
+ echo " File: $THREAD_DUMP_FILE"
+ echo " Size: $(wc -l < "$THREAD_DUMP_FILE") lines"
+ echo -e "${BLUE}Use 'cat $THREAD_DUMP_FILE' to view the complete thread dump${NC}"
+ fi
+ ;;
+ 21)
+ echo -e "${YELLOW}Recent profiling results in $RESULTS_DIR:${NC}"
+ echo ""
+
+ # Get all result files and create a numbered list
+ ALL_FILES=($(ls -t "$RESULTS_DIR"/*.html "$RESULTS_DIR"/*.jfr "$RESULTS_DIR"/*.txt "$RESULTS_DIR"/*.log "$RESULTS_DIR"/*.json 2>/dev/null || true))
+
+ if [ ${#ALL_FILES[@]} -eq 0 ]; then
+ echo "No profiling files found"
+ return
+ fi
+
+ # Show numbered list of files
+ echo -e "${GREEN}Select a file to open/explore:${NC}"
+ echo ""
+
+ for i in "${!ALL_FILES[@]}"; do
+ file="${ALL_FILES[$i]}"
+ filename=$(basename "$file")
+ filesize=$(ls -lh "$file" | awk '{print $5}')
+ filetime=$(ls -l "$file" | awk '{print $6, $7, $8}')
+
+ # Determine file type and icon
+ case "$filename" in
+ *.html) filetype="🔥 Flame Graph" ;;
+ *.jfr) filetype="📊 JFR Recording" ;;
+ *.txt) filetype="📋 Thread Dump" ;;
+ *.log) filetype="📜 GC Log" ;;
+ *.json) filetype="🌐 OTLP Export" ;;
+ *) filetype="📄 Unknown" ;;
+ esac
+
+ printf "%2d) %s - %s (%s) - %s\n" $((i+1)) "$filetype" "$filename" "$filesize" "$filetime"
+ done
+
+ echo ""
+ echo -e "${BLUE}File type summary:${NC}"
+ HTML_COUNT=$(ls "$RESULTS_DIR"/*.html 2>/dev/null | wc -l | xargs)
+ JFR_COUNT=$(ls "$RESULTS_DIR"/*.jfr 2>/dev/null | wc -l | xargs)
+ TXT_COUNT=$(ls "$RESULTS_DIR"/*.txt 2>/dev/null | wc -l | xargs)
+ LOG_COUNT=$(ls "$RESULTS_DIR"/*.log 2>/dev/null | wc -l | xargs)
+ JSON_COUNT=$(ls "$RESULTS_DIR"/*.json 2>/dev/null | wc -l | xargs)
+
+ echo " 🔥 HTML files (flame graphs): $HTML_COUNT"
+ echo " 📊 JFR files (recordings): $JFR_COUNT"
+ echo " 📋 TXT files (thread dumps): $TXT_COUNT"
+ echo " 📜 LOG files (GC logs): $LOG_COUNT"
+ echo " 🌐 JSON files (OTLP exports): $JSON_COUNT"
+ echo ""
+ echo "0) Return to main menu"
+ echo ""
+
+ # Get user selection
+ while true; do
+ read -p "Select a file to open (0-${#ALL_FILES[@]}): " FILE_SELECTION
+
+ if [[ "$FILE_SELECTION" =~ ^[0-9]+$ ]]; then
+ if [ "$FILE_SELECTION" -eq 0 ]; then
+ echo "Returning to main menu..."
+ return
+ elif [ "$FILE_SELECTION" -ge 1 ] && [ "$FILE_SELECTION" -le "${#ALL_FILES[@]}" ]; then
+ SELECTED_FILE="${ALL_FILES[$((FILE_SELECTION-1))]}"
+ SELECTED_FILENAME=$(basename "$SELECTED_FILE")
+
+ echo -e "${GREEN}Selected: $SELECTED_FILENAME${NC}"
+ echo ""
+
+ # Handle different file types
+ case "$SELECTED_FILENAME" in
+ *.html)
+ echo -e "${BLUE}Opening flame graph in browser...${NC}"
+ if [[ "$OSTYPE" == "darwin"* ]]; then
+ open "$SELECTED_FILE"
+ elif [[ "$OSTYPE" == "linux"* ]]; then
+ if command -v xdg-open >/dev/null 2>&1; then
+ xdg-open "$SELECTED_FILE"
+ else
+ echo -e "${YELLOW}Please open this file manually in a browser: $SELECTED_FILE${NC}"
+ fi
+ else
+ echo -e "${YELLOW}Please open this file manually in a browser: $SELECTED_FILE${NC}"
+ fi
+ ;;
+ *.jfr)
+ echo -e "${BLUE}JFR Recording: $SELECTED_FILENAME${NC}"
+ echo -e "${YELLOW}File size: $(ls -lh "$SELECTED_FILE" | awk '{print $5}')${NC}"
+ echo ""
+ echo -e "${BLUE}You can analyze this JFR file with:${NC}"
+ echo " 1. JDK Mission Control"
+ echo " 2. JProfiler"
+ echo " 3. VisualVM"
+ echo " 4. Convert to flame graph using jfrconv"
+ echo ""
+ echo -e "${YELLOW}Would you like to convert it to a flame graph? (y/N):${NC}"
+ read -p "" CONVERT_JFR
+ if [[ "$CONVERT_JFR" =~ ^[Yy]$ ]]; then
+ FLAME_FILE="${SELECTED_FILE%.jfr}.html"
+ echo -e "${BLUE}Converting JFR to flame graph...${NC}"
+ if [ -f "$PROFILER_DIR/current/bin/jfrconv" ]; then
+ bash "$PROFILER_DIR/current/bin/jfrconv" -o html "$SELECTED_FILE" "$FLAME_FILE" 2>/dev/null
+ if [ $? -eq 0 ]; then
+ echo -e "${GREEN}Flame graph created: $(basename "$FLAME_FILE")${NC}"
+ if [[ "$OSTYPE" == "darwin"* ]]; then
+ echo -e "${BLUE}Opening flame graph in browser...${NC}"
+ open "$FLAME_FILE"
+ fi
+ else
+ echo -e "${RED}Failed to convert JFR file${NC}"
+ fi
+ else
+ echo -e "${YELLOW}jfrconv not found. JFR file location: $SELECTED_FILE${NC}"
+ fi
+ else
+ echo -e "${YELLOW}JFR file location: $SELECTED_FILE${NC}"
+ fi
+ ;;
+ *.txt)
+ echo -e "${BLUE}Thread Dump: $SELECTED_FILENAME${NC}"
+ echo -e "${YELLOW}File size: $(ls -lh "$SELECTED_FILE" | awk '{print $5}')${NC}"
+ echo ""
+ echo -e "${YELLOW}Would you like to view the thread dump? (y/N):${NC}"
+ read -p "" VIEW_DUMP
+ if [[ "$VIEW_DUMP" =~ ^[Yy]$ ]]; then
+ echo -e "${BLUE}Thread dump content:${NC}"
+ echo "===================="
+ cat "$SELECTED_FILE"
+ echo "===================="
+ else
+ echo -e "${YELLOW}Thread dump location: $SELECTED_FILE${NC}"
+ fi
+ ;;
+ *.log)
+ echo -e "${BLUE}GC Log: $SELECTED_FILENAME${NC}"
+ echo -e "${YELLOW}File size: $(ls -lh "$SELECTED_FILE" | awk '{print $5}')${NC}"
+ echo ""
+ echo -e "${YELLOW}Would you like to view the last 50 lines of the GC log? (y/N):${NC}"
+ read -p "" VIEW_LOG
+ if [[ "$VIEW_LOG" =~ ^[Yy]$ ]]; then
+ echo -e "${BLUE}GC log (last 50 lines):${NC}"
+ echo "===================="
+ tail -50 "$SELECTED_FILE"
+ echo "===================="
+ else
+ echo -e "${YELLOW}GC log location: $SELECTED_FILE${NC}"
+ fi
+ ;;
+ *.json)
+ echo -e "${BLUE}OTLP Export: $SELECTED_FILENAME${NC}"
+ echo -e "${YELLOW}File size: $(ls -lh "$SELECTED_FILE" | awk '{print $5}')${NC}"
+ echo ""
+ echo -e "${BLUE}This is an OpenTelemetry OTLP export file${NC}"
+ echo -e "${YELLOW}You can import this into OpenTelemetry-compatible systems${NC}"
+ echo -e "${YELLOW}File location: $SELECTED_FILE${NC}"
+ ;;
+ *)
+ echo -e "${YELLOW}File location: $SELECTED_FILE${NC}"
+ ;;
+ esac
+
+ echo ""
+ echo -e "${BLUE}Press Enter to return to file selection...${NC}"
+ read -p ""
+ return
+
+ else
+ echo -e "${RED}Invalid selection. Please choose a number between 0 and ${#ALL_FILES[@]}.${NC}"
+ fi
+ else
+ echo -e "${RED}Invalid input. Please enter a number.${NC}"
+ fi
+ done
+ ;;
+ 22)
+ echo -e "${YELLOW}Kill Current Process${NC}"
+ echo "============================="
+ echo -e "${BLUE}Current process being profiled:${NC}"
+ echo " PID: $PID"
+ if [ ! -z "${PROCESS_NAME:-}" ]; then
+ echo " Name: $PROCESS_NAME"
+ fi
+ echo ""
+
+ # Verify process is still running
+ if ! kill -0 "$PID" 2>/dev/null; then
+ echo -e "${YELLOW}Process $PID is not running or not accessible${NC}"
+ return
+ fi
+
+ # Get process information
+ if command -v ps >/dev/null 2>&1; then
+ PROCESS_INFO=$(ps -p "$PID" -o pid,ppid,user,comm,args 2>/dev/null | tail -1)
+ if [ ! -z "$PROCESS_INFO" ]; then
+ echo -e "${BLUE}Process details:${NC}"
+ echo " $PROCESS_INFO"
+ echo ""
+ fi
+ fi
+
+ # Warning and confirmation
+ echo -e "${RED}⚠️ WARNING: This will terminate the Java process!${NC}"
+ echo -e "${YELLOW}This action cannot be undone and may cause data loss.${NC}"
+ echo ""
+ echo -e "${BLUE}Kill options:${NC}"
+ echo " 1) Graceful shutdown (SIGTERM) - Recommended"
+ echo " 2) Force kill (SIGKILL) - Use if process is unresponsive"
+ echo " 0) Cancel - Return to main menu"
+ echo ""
+
+ while true; do
+ read -p "Select kill method (0-2): " KILL_METHOD
+
+ case "$KILL_METHOD" in
+ 0)
+ echo "Kill operation cancelled"
+ return
+ ;;
+ 1)
+ echo -e "${BLUE}Sending SIGTERM to process $PID...${NC}"
+ if kill -TERM "$PID" 2>/dev/null; then
+ echo -e "${GREEN}SIGTERM sent successfully${NC}"
+ echo "Waiting for process to terminate gracefully..."
+
+ # Wait up to 10 seconds for graceful shutdown
+ for i in {1..10}; do
+ if ! kill -0 "$PID" 2>/dev/null; then
+ echo -e "${GREEN}Process $PID terminated gracefully${NC}"
+ echo ""
+ echo -e "${YELLOW}Note: You may need to restart your application to continue profiling${NC}"
+ return
+ fi
+ echo -n "."
+ sleep 1
+ done
+ echo ""
+
+ # Process still running after 10 seconds
+ if kill -0 "$PID" 2>/dev/null; then
+ echo -e "${YELLOW}Process is still running after 10 seconds${NC}"
+ echo -e "${BLUE}Would you like to force kill it? (y/N):${NC}"
+ read -p "" FORCE_KILL
+ if [[ "$FORCE_KILL" =~ ^[Yy]$ ]]; then
+ echo -e "${BLUE}Sending SIGKILL to process $PID...${NC}"
+ if kill -KILL "$PID" 2>/dev/null; then
+ echo -e "${GREEN}Process $PID force killed${NC}"
+ else
+ echo -e "${RED}Failed to force kill process $PID${NC}"
+ echo "You may need elevated privileges or the process may already be dead"
+ fi
+ else
+ echo "Process left running"
+ fi
+ fi
+ else
+ echo -e "${RED}Failed to send SIGTERM to process $PID${NC}"
+ echo "You may need elevated privileges or the process may not exist"
+ fi
+ return
+ ;;
+ 2)
+ echo -e "${RED}⚠️ FORCE KILL WARNING ⚠️${NC}"
+ echo -e "${YELLOW}This will immediately terminate the process without cleanup!${NC}"
+ echo -e "${BLUE}Are you sure you want to force kill process $PID? (y/N):${NC}"
+ read -p "" CONFIRM_FORCE
+ if [[ "$CONFIRM_FORCE" =~ ^[Yy]$ ]]; then
+ echo -e "${BLUE}Sending SIGKILL to process $PID...${NC}"
+ if kill -KILL "$PID" 2>/dev/null; then
+ echo -e "${GREEN}Process $PID force killed${NC}"
+ echo ""
+ echo -e "${YELLOW}Note: You may need to restart your application to continue profiling${NC}"
+ else
+ echo -e "${RED}Failed to force kill process $PID${NC}"
+ echo "You may need elevated privileges or the process may not exist"
+ fi
+ else
+ echo "Force kill cancelled"
+ fi
+ return
+ ;;
+ *)
+ echo -e "${RED}Invalid selection. Please choose 0, 1, or 2.${NC}"
+ ;;
+ esac
+ done
+ ;;
+ 0)
+ echo -e "${GREEN}Exiting profiler. Goodbye!${NC}"
+ exit 0
+ ;;
+ *)
+ echo -e "${RED}Invalid option${NC}"
+ return
+ esac
+
+ if [ $option -ne 20 ] && [ $option -ne 21 ] && [ $option -ne 22 ] && [ $option -ne 0 ]; then
+ echo -e "${GREEN}Profiling completed!${NC}"
+ echo -e "${YELLOW}Generated files in $RESULTS_DIR:${NC}"
+ ls -lat "$RESULTS_DIR"/*.html "$RESULTS_DIR"/*.jfr "$RESULTS_DIR"/*.txt "$RESULTS_DIR"/*.log "$RESULTS_DIR"/*.json 2>/dev/null | head -5 || echo "No profiling files found"
+
+ # Automatically open the latest file if on macOS
+ if [[ "$OSTYPE" == "darwin"* ]]; then
+ LATEST_HTML=$(ls -t "$RESULTS_DIR"/*.html 2>/dev/null | head -1)
+ if [ ! -z "$LATEST_HTML" ]; then
+ echo -e "${BLUE}Opening latest result in browser...${NC}"
+ open "$LATEST_HTML"
+ fi
+ fi
+ fi
+}
+
+# Interactive profiling loop
+while true; do
+ # Verify process is still running
+ if ! kill -0 "$PID" 2>/dev/null; then
+ echo -e "${RED}Warning: Process $PID is no longer running!${NC}"
+ echo "The process may have been terminated or restarted."
+
+ # Try to find the same process again
+ NEW_PID=$(jps -l | grep "$PROCESS_NAME" | head -1 | cut -d' ' -f1)
+ if [ ! -z "$NEW_PID" ] && [ "$NEW_PID" != "$PID" ]; then
+ echo -e "${YELLOW}Found similar process with new PID: $NEW_PID${NC}"
+ read -p "Switch to the new PID? (y/N): " SWITCH_CONFIRM
+ if [[ "$SWITCH_CONFIRM" =~ ^[Yy]$ ]]; then
+ PID=$NEW_PID
+ echo -e "${GREEN}Switched to PID: $PID${NC}"
+ else
+ echo "Please restart the profiler with the correct PID."
+ exit 1
+ fi
+ else
+ echo "Please restart the profiler when your application is running."
+ exit 1
+ fi
+ fi
+
+ show_profiling_menu
+ read -p "Select profiling option (0-22): " PROFILE_TYPE
+ execute_profiling "$PROFILE_TYPE"
+
+ echo ""
+ echo -e "${BLUE}Press Enter to continue or Ctrl+C to exit...${NC}"
+ read -p ""
+done
+
+```
+
+**Script Location:**
+```
+└── profiler/                          # ← All profiling-related files
+    ├── scripts/                       # ← Profiling scripts and tools
+    │   └── profile-java-process.sh    # ← Copy exact script from template
+```
+
+**Setup Instructions:**
+1. Copy the **EXACT** bash script from the template
+2. Save it as `profiler/scripts/profile-java-process.sh` in your project root
+3. Make it executable: `chmod +x profiler/scripts/profile-java-process.sh`
+4. **NO MODIFICATIONS** to the script content are needed or allowed
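+
+The setup steps above can be sketched as shell commands. This is a self-contained sketch: the `mktemp` directory stands in for your project root, and the `printf` line writes a placeholder only — in practice, copy the template content verbatim:
+
+```bash
+# One-time setup, run from the project root
+cd "$(mktemp -d)"   # stand-in for the project root in this sketch
+mkdir -p profiler/scripts profiler/results
+# Placeholder for the exact template content (copy it verbatim in practice):
+printf '#!/bin/bash\n' > profiler/scripts/profile-java-process.sh
+chmod +x profiler/scripts/profile-java-process.sh
+[ -x profiler/scripts/profile-java-process.sh ] && echo "setup ok"
+```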
+
+**Purpose:**
+- Detects running Java processes automatically
+- Downloads and configures async-profiler v4.0
+- Provides interactive menu for different profiling scenarios
+- Generates flamegraphs and analysis reports
+
+**Usage:**
+```bash
+# Execute the interactive profiling script
+./profiler/scripts/profile-java-process.sh
+```
+
+#### Step Constraints
+
+- **CRITICAL INSTRUCTION FOR AI ASSISTANTS:**
+- **COPY THE BASH SCRIPT EXACTLY** from the template file
+- **DO NOT MODIFY, INTERPRET, OR ENHANCE** the script content
+- **DO NOT ADD NEW FEATURES** or change the logic
+- **USE THE SCRIPT VERBATIM** - every line, comment, and function exactly as provided
+- The script is already complete and tested - no improvements needed
+- Create the profiler directory structure: `mkdir -p profiler/scripts profiler/results`
+- Copy the **EXACT** bash script content from `java-profiling-script-template.md`
+- Save it as `profiler/scripts/profile-java-process.sh`
+- Make it executable: `chmod +x profiler/scripts/profile-java-process.sh`
+- **NO MODIFICATIONS** to the script content are needed or allowed
+
+### Step 3: Explain how to use the scripts
+
+- Run the script to start the application with profiling support
+- Run the script to start the interactive profiling script
+
+## Output Format
+
+- Set up automated Java profiling environment with async-profiler v4.0 and enhanced JFR capabilities
+- Create interactive profiling scripts for problem-driven data collection with Java 25 JFR improvements
+- Generate targeted profiling data based on specific performance issues (CPU, memory, threading, GC, I/O) with cooperative sampling
+- Leverage JEP 518 (JFR Cooperative Sampling) for reduced overhead and JEP 520 (Method Timing & Tracing) for enhanced accuracy
+- Organize profiling results in structured directory hierarchy with timestamped files and improved JFR recordings
+- Provide flamegraph and heatmap visualizations for performance analysis with enhanced method-level tracing
+- Enable systematic measurement and detection of Java application performance bottlenecks with minimal profiling impact
+
+## Safeguards
+
+- Always use the exact bash script templates without modification or interpretation
+- Ensure proper JVM flags are applied for profiling compatibility before data collection
+- Verify Java processes are running and accessible before attempting to attach profiler
+- Create organized directory structure for profiling results with timestamped files
+- Validate async-profiler v4.0 installation and compatibility with target Java version
\ No newline at end of file
diff --git a/skills/162-java-profiling-analyze/SKILL.md b/skills/162-java-profiling-analyze/SKILL.md
new file mode 100644
index 00000000..904ab78f
--- /dev/null
+++ b/skills/162-java-profiling-analyze/SKILL.md
@@ -0,0 +1,34 @@
+---
+name: 162-java-profiling-analyze
+description: Use when you need to analyze Java profiling data collected during the detection phase — including interpreting flamegraphs, memory allocation patterns, CPU hotspots, threading issues, systematic problem categorization, evidence documentation with profiling-problem-analysis and profiling-solutions markdown files, or prioritizing fixes using Impact/Effort scoring. Part of the skills-for-java project
+license: Apache-2.0
+metadata:
+ author: Juan Antonio Breña Moral
+ version: 0.13.0-SNAPSHOT
+---
+# Java Profiling Workflow / Step 2 / Analyze profiling data
+
+Analyze profiling results systematically: inventory results (flamegraphs, JFR, GC logs, thread dumps), identify problems (memory leaks, CPU hotspots, threading issues), document findings using standardized templates (profiling-problem-analysis-YYYYMMDD.md, profiling-solutions-YYYYMMDD.md), prioritize using Impact/Effort scores, and correlate multiple profiling files for validation.
+
+**What is covered in this Skill?**
+
+- Inventory: scan profiler/results/ for allocation-flamegraph, heatmap-cpu, memory-leak, *.jfr, *.log, *.txt
+- Problem identification: memory (leaks, excessive allocations, GC pressure), performance (CPU hotspots, blocking), threading (deadlocks, contention, pool saturation)
+- Documentation: docs/profiling-problem-analysis-YYYYMMDD.md, docs/profiling-solutions-YYYYMMDD.md
+- Prioritization: Impact (1–5) / Effort (1–5), focus on high priority first
+- Tools: async-profiler, JFR, JProfiler/YourKit, GCViewer, flamegraphs, heatmaps
+
+**Scope:** Validate profiling results represent realistic load scenarios. Cross-reference multiple files. Include quantitative metrics.
+
+## Constraints
+
+Validate profiling results represent realistic load before analysis. Document assumptions and limitations. Cross-reference multiple files.
+
+- **VALIDATE**: Ensure profiling results represent realistic load scenarios before analysis
+- **DOCUMENT**: Record assumptions and limitations in analysis reports
+- **CROSS-REFERENCE**: Use multiple profiling files to validate findings
+- **BEFORE APPLYING**: Read the reference for problem analysis and solutions templates
+
+## Reference
+
+For detailed guidance, examples, and constraints, see [references/162-java-profiling-analyze.md](references/162-java-profiling-analyze.md).
diff --git a/skills/162-java-profiling-analyze/references/162-java-profiling-analyze.md b/skills/162-java-profiling-analyze/references/162-java-profiling-analyze.md
new file mode 100644
index 00000000..e65cb4f7
--- /dev/null
+++ b/skills/162-java-profiling-analyze/references/162-java-profiling-analyze.md
@@ -0,0 +1,251 @@
+---
+name: 162-java-profiling-analyze
+description: Use when you need to analyze Java profiling data collected during the detection phase — including interpreting flamegraphs, memory allocation patterns, CPU hotspots, threading issues, systematic problem categorization, evidence documentation, or prioritizing fixes using Impact/Effort scoring.
+license: Apache-2.0
+metadata:
+ author: Juan Antonio Breña Moral
+ version: 0.13.0-SNAPSHOT
+---
+# Java Profiling Workflow / Step 2 / Analyze profiling data
+
+## Role
+
+You are a senior software engineer with extensive experience in Java software development.
+
+## Goal
+
+This cursor rule provides a systematic approach to analyzing Java profiling data collected during the detection phase. It serves as the second step in the structured profiling workflow, focusing on interpreting profiling results, identifying root causes, and documenting findings with actionable solutions.
+
+The rule establishes a comprehensive analysis framework that transforms raw profiling data into meaningful insights. It guides users through systematically examining flamegraphs, memory allocation patterns, CPU hotspots, and threading issues to identify performance bottlenecks and their underlying causes.
+
+Key capabilities include:
+- **Systematic Analysis Framework**: Structured approach to examining different types of profiling data (CPU, memory, threading, GC)
+- **Problem Categorization**: Clear methodology for classifying issues by severity, impact, and type (memory leaks, CPU hotspots, threading problems)
+- **Evidence Documentation**: Standardized templates for documenting findings with specific references to profiling files and quantitative metrics
+- **Solution Development**: Framework for creating prioritized, actionable recommendations with implementation effort estimates
+- **Cross-Correlation Analysis**: Techniques for correlating multiple profiling results and identifying patterns across different time periods
+- **Impact Assessment**: Scoring system for prioritizing fixes based on performance impact and implementation effort
+
+The rule ensures that profiling data is analyzed consistently and thoroughly, with findings documented in a format that enables effective communication with development teams and systematic tracking of performance improvements.
+
+## Project Organization
+
+The profiling setup uses a clean folder structure with everything contained in the profiler directory:
+
+```
+your-project/
+└── profiler/                    # ← All profiling-related files
+    ├── scripts/                 # ← Profiling scripts and tools
+    │   └── java-profile.sh      # ← Main profiling script
+    ├── results/                 # Generated profiling output
+    │   ├── *.html               # Flamegraph files
+    │   ├── *.jfr                # JFR recording files
+    │   ├── *.log                # Garbage Collection Log files
+    │   └── *.txt                # Thread Dump files
+    ├── current/                 # ← Symlink to current profiler version
+    └── async-profiler-*/        # ← Downloaded profiler binaries
+```
+
+## Profiling Results Analysis Workflow
+
+When analyzing profiling results, follow the systematic approach in the steps below. It ensures a thorough review of the profiling data and produces actionable solutions for performance optimization.
+
+## Steps
+
+### Step 1: Inventory Available Results
+
+Scan the following directories for profiling results:
+- `*/profiler/results/` - Example-specific profiling results
+
+Look for these file types:
+- `allocation-flamegraph-*.html` - Memory allocation patterns
+- `heatmap-cpu-*.html` - CPU hotspot visualization
+- `inverted-flamegraph-*.html` - Bottom-up call analysis
+- `memory-leak-*.html` - Memory leak detection reports
+- `*.jfr` - Java Flight Recorder files
+- `*.txt` - Thread dump files
+- `*.log` - GC log files
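+
+A quick inventory sketch — it creates sample files (with illustrative names) so it is self-contained; in practice, point it at your existing `profiler/results/` directory:
+
+```bash
+# Count collected profiling artifacts by extension
+RESULTS=$(mktemp -d)   # stand-in for profiler/results/ in this sketch
+touch "$RESULTS/allocation-flamegraph-20240101-120000.html" \
+      "$RESULTS/recording-20240101-120000.jfr" \
+      "$RESULTS/gc-20240101-120000.log"
+
+for ext in html jfr txt log; do
+  count=$(ls "$RESULTS"/*."$ext" 2>/dev/null | wc -l)
+  echo "$ext: $((count))"
+done
+```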
+
+### Step 2: Problem Identification Process
+
+For each profiling result file, analyze and document:
+
+**Memory-Related Issues:**
+- **Memory Leaks**: Look for continuously growing memory usage patterns
+- **Excessive Allocations**: Identify high-frequency object creation
+- **Memory Fragmentation**: Check for inefficient memory usage patterns
+- **GC Pressure**: Analyze garbage collection frequency and duration
+
+**Performance Issues:**
+- **CPU Hotspots**: Identify methods consuming excessive CPU time
+- **Blocking Operations**: Look for synchronization bottlenecks
+- **Inefficient Algorithms**: Spot methods with unexpected complexity
+- **Resource Contention**: Identify thread contention issues
+
+**Threading Issues:**
+- **Deadlocks**: Look for thread synchronization problems
+- **Context Switching**: Excessive thread switching overhead
+- **Thread Pool Saturation**: Insufficient thread pool sizing
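+
+Some of these symptoms can be triaged from the command line before opening any visualization. The sketch below fabricates sample data so it is self-contained; the marker strings are the ones HotSpot typically writes to thread dumps and unified GC logs:
+
+```bash
+# Quick triage of thread dumps and GC logs (sample data is fabricated)
+DIR=$(mktemp -d)   # stand-in for profiler/results/ in this sketch
+printf 'Found one Java-level deadlock:\n' > "$DIR/threads-20240101.txt"
+printf '[1.234s][info][gc] GC(3) Pause Full (G1 Compaction Pause)\n' > "$DIR/gc-20240101.log"
+
+# Thread dumps that report a JVM-detected deadlock:
+grep -l "Java-level deadlock" "$DIR"/*.txt
+# Number of full-GC pauses per log (high counts suggest GC pressure):
+grep -c "Pause Full" "$DIR"/*.log
+```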
+
+### Step 3: Findings Documentation
+
+Generate the following documents in the `*/profiler/docs/` folder (per the Project Organization above):
+
+#### Problem Analysis Document
+
+Create: `docs/profiling-problem-analysis-YYYYMMDD.md`
+
+Structure:
+
+```markdown
+# Profiling Problem Analysis - [Date]
+
+## Executive Summary
+- Brief overview of identified issues
+- Severity classification
+- Impact assessment
+
+## Detailed Findings
+
+### [Problem Category 1]
+- **Description**: What was found
+- **Evidence**: Reference to specific profiling files
+- **Impact**: Performance/resource impact
+- **Root Cause**: Technical explanation
+
+### [Problem Category 2]
+[Repeat structure]
+
+## Methodology
+- Profiling tools used
+- Data collection approach
+- Analysis techniques applied
+
+## Recommendations Priority
+1. Critical issues requiring immediate attention
+2. High-impact optimizations
+3. Long-term improvements
+```
+
+#### Solutions Document
+Create: `docs/profiling-solutions-YYYYMMDD.md`
+
+Structure:
+
+```markdown
+# Profiling Solutions and Recommendations - [Date]
+
+## Quick Wins (Low effort, High impact)
+### Solution 1
+- **Problem**: Reference to specific issue
+- **Solution**: Concrete implementation steps
+- **Expected Impact**: Quantified improvement
+- **Implementation Effort**: Time/complexity estimate
+- **Code Changes**: Specific files/methods to modify
+
+## Medium-term Improvements
+### Solution 2
+[Repeat structure]
+
+## Long-term Optimizations
+### Solution 3
+[Repeat structure]
+
+## Implementation Plan
+1. **Phase 1**: Critical fixes (Week 1-2)
+2. **Phase 2**: Performance optimizations (Week 3-4)
+3. **Phase 3**: Architecture improvements (Month 2+)
+
+## Monitoring and Validation
+- Key metrics to track
+- Testing approach
+- Success criteria
+```
+
+### Step 4: Documentation Standards and Data Correlation
+
+**File Naming Convention:**
+- Problem analysis: `profiling-problem-analysis-YYYYMMDD.md`
+- Solutions: `profiling-solutions-YYYYMMDD.md`
+- Summary reports: `profiling-summary-YYYYMMDD.md`
+
+**Data Correlation:**
+- Cross-reference multiple profiling files
+- Compare different time periods
+- Correlate with application logs
+- Consider load testing scenarios
+
+**Evidence Documentation:**
+- Include specific flamegraph screenshots
+- Reference exact file names and timestamps
+- Provide quantitative metrics
+- Document reproduction steps
+
+### Step 5: Solution Prioritization Framework
+
+Rate each identified problem using:
+
+**Impact Score (1-5):**
+- 5: Critical performance degradation
+- 4: Significant resource waste
+- 3: Moderate performance impact
+- 2: Minor inefficiency
+- 1: Cosmetic optimization
+
+**Effort Score (1-5):**
+- 1: Configuration change
+- 2: Single method optimization
+- 3: Class-level refactoring
+- 4: Module restructuring
+- 5: Architecture change
+
+**Priority = Impact / Effort**
+Focus on high-priority items first.
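+
+As a worked example (the problem names and scores below are hypothetical), the ratio can be computed directly from a small table of findings:
+
+```bash
+# Priority = Impact / Effort, highest first (scores are hypothetical)
+PRIORITIES=$(awk -F, 'NR > 1 { printf "%s %.1f\n", $1, $2 / $3 }' <<'EOF' | sort -k2 -rn
+problem,impact,effort
+excessive-allocations,4,2
+gc-pressure,5,3
+log-formatting,1,1
+EOF
+)
+echo "$PRIORITIES"
+```
+
+Here `excessive-allocations` (4 / 2 = 2.0) outranks `gc-pressure` (5 / 3 ≈ 1.7) despite its lower impact, because the fix is cheaper.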
+
+**Integration with Development Workflow**
+
+**Before Analysis:**
+1. Ensure all profiling results are recent and relevant
+2. Verify results represent realistic load scenarios
+3. Document test conditions and environment
+
+**During Analysis:**
+1. Use systematic approach for consistency
+2. Document assumptions and limitations
+3. Validate findings with additional data points
+
+**After Analysis:**
+1. Share findings with development team
+2. Create implementation tickets/issues
+3. Plan validation and monitoring approach
+4. Schedule follow-up profiling sessions
+
+**Tools and Techniques:**
+
+**Analysis Tools:**
+- **async-profiler**: Flamegraph generation and CPU/memory profiling
+- **JFR (Java Flight Recorder)**: Low-overhead continuous profiling
+- **JProfiler/YourKit**: Commercial profiling tools for deep analysis
+- **GCViewer**: Garbage collection log analysis
+
+**Visualization Techniques:**
+- **Flamegraphs**: Call stack visualization
+- **Heatmaps**: Temporal analysis of hotspots
+- **Allocation tracking**: Memory usage patterns
+- **Thread dumps**: Concurrency analysis
+
+## Output Format
+
+- Systematically analyze Java profiling data to identify performance bottlenecks and optimization opportunities
+- Create structured documentation following the problem analysis and solutions templates
+- Prioritize findings using the Impact/Effort scoring framework
+- Generate actionable recommendations with clear implementation steps and expected outcomes
+- Correlate multiple profiling results to identify patterns and validate findings
+
+## Safeguards
+
+- Always validate profiling results represent realistic load scenarios before analysis
+- Document all assumptions and limitations in analysis reports
+- Cross-reference multiple profiling files to validate findings
+- Include quantitative metrics and specific evidence in all recommendations
\ No newline at end of file
diff --git a/skills/163-java-profiling-refactor/SKILL.md b/skills/163-java-profiling-refactor/SKILL.md
new file mode 100644
index 00000000..fc3a82b0
--- /dev/null
+++ b/skills/163-java-profiling-refactor/SKILL.md
@@ -0,0 +1,32 @@
+---
+name: 163-java-profiling-refactor
+description: Use when you need to refactor Java code based on profiling analysis findings — including reviewing docs/profiling-problem-analysis and docs/profiling-solutions, identifying specific performance bottlenecks, and implementing targeted code changes to address CPU, memory, or threading issues. Part of the skills-for-java project
+license: Apache-2.0
+metadata:
+ author: Juan Antonio Breña Moral
+ version: 0.13.0-SNAPSHOT
+---
+# Java Profiling Workflow / Step 3 / Refactor code to fix issues
+
+Implement refactoring based on profiling analysis: review profiling-problem-analysis-YYYYMMDD.md and profiling-solutions-YYYYMMDD.md, identify specific performance bottlenecks, and refactor code to fix them. Ensure all tests pass after changes.
+
+**What is covered in this Skill?**
+
+- Review analysis notes: docs/profiling-problem-analysis-YYYYMMDD.md, docs/profiling-solutions-YYYYMMDD.md
+- Identify specific bottlenecks from the documented findings
+- Refactor code to address CPU hotspots, memory leaks, threading issues, or other performance problems
+- Run verification: ./mvnw clean verify or mvn clean verify
+
+**Scope:** Changes must pass all tests. Apply fixes incrementally and verify after each significant change.
+
+## Constraints
+
+Verify that changes pass all tests before considering the refactoring complete.
+
+- **MANDATORY**: Run `./mvnw clean verify` or `mvn clean verify` after applying refactoring
+- **SAFETY**: If tests fail, fix issues before proceeding
+- **BEFORE APPLYING**: Read the analysis and solutions documents for specific recommendations
+
+## Reference
+
+For detailed guidance, examples, and constraints, see [references/163-java-profiling-refactor.md](references/163-java-profiling-refactor.md).
diff --git a/skills/163-java-profiling-refactor/references/163-java-profiling-refactor.md b/skills/163-java-profiling-refactor/references/163-java-profiling-refactor.md
new file mode 100644
index 00000000..d748ea7d
--- /dev/null
+++ b/skills/163-java-profiling-refactor/references/163-java-profiling-refactor.md
@@ -0,0 +1,34 @@
+---
+name: 163-java-profiling-refactor
+description: Use when you need to refactor Java code based on profiling analysis findings — including reviewing profiling-problem-analysis and profiling-solutions documents, identifying specific performance bottlenecks, and implementing targeted code changes to address them.
+license: Apache-2.0
+metadata:
+ author: Juan Antonio Breña Moral
+ version: 0.13.0-SNAPSHOT
+---
+# Java Profiling Workflow / Step 3 / Refactor code to fix issues
+
+## Role
+
+You are a senior software engineer with extensive experience in Java software development.
+
+## Goal
+
+This cursor rule provides a systematic approach to refactoring Java code based on profiling analysis findings. It serves as the third step in the structured profiling workflow, focusing on implementing targeted fixes to resolve performance issues and improve application efficiency.
+
+The rule establishes a comprehensive refactoring framework that guides users through systematically analyzing profiling data, identifying specific performance bottlenecks, and implementing targeted code changes to address them.
+
+## Steps
+
+### Step 1: Review notes from the analysis step
+
+Review the notes from the analysis step to identify the specific performance bottlenecks.
+
+The files to review are: `docs/profiling-problem-analysis-YYYYMMDD.md` and `docs/profiling-solutions-YYYYMMDD.md`.
+
+### Step 2: Refactor the code to fix the performance bottlenecks
+
+Refactor the code to fix the performance bottlenecks.
+
+## Safeguards
+
+- Verify that changes pass all tests with `./mvnw clean verify` or `mvn clean verify`
\ No newline at end of file
diff --git a/skills/164-java-profiling-verify/SKILL.md b/skills/164-java-profiling-verify/SKILL.md
new file mode 100644
index 00000000..3039042c
--- /dev/null
+++ b/skills/164-java-profiling-verify/SKILL.md
@@ -0,0 +1,35 @@
+---
+name: 164-java-profiling-verify
+description: Use when you need to verify Java performance optimizations by comparing profiling results before and after refactoring — including baseline validation, post-refactoring report generation, quantitative before/after metrics comparison, side-by-side flamegraph analysis, regression detection, or creating profiling-comparison-analysis and profiling-final-results documentation. Part of the skills-for-java project
+license: Apache-2.0
+metadata:
+ author: Juan Antonio Breña Moral
+ version: 0.13.0-SNAPSHOT
+---
+# Java Profiling Workflow / Step 4 / Verify results
+
+Verify performance optimizations through rigorous before/after comparison: ensure baseline and post-refactoring profiling data use identical test conditions, generate post-refactoring reports, compare metrics (memory, CPU, GC, threading), perform side-by-side flamegraph analysis, document findings in profiling-comparison-analysis-YYYYMMDD.md and profiling-final-results-YYYYMMDD.md, and validate success criteria.
+
+**What is covered in this Skill?**
+
+- Pre-refactoring baseline: run profiler with same load before changes
+- Post-refactoring: generate new reports with identical test conditions
+- Comparison: memory (leaks, allocations, GC), CPU (hotspots, contention), visual flamegraph comparison
+- Documentation: profiler/docs/profiling-comparison-analysis-YYYYMMDD.md, profiler/docs/profiling-final-results-YYYYMMDD.md
+- File naming: baseline/after suffixes, timestamp-based organization
+- Validation: verify reports exist, compare metrics, identify regressions
+
+**Scope:** Identical test conditions are critical. Document test scenarios. Validate application runs with refactored code before generating new reports.
+
+## Constraints
+
+Use identical test conditions between baseline and post-refactoring. Verify both report sets are complete. Document test scenarios.
+
+- **CONSISTENCY**: Use identical test conditions and load patterns for baseline and post-refactoring
+- **VALIDATE**: Ensure both baseline and post-refactoring reports exist and are non-empty before comparison
+- **DOCUMENT**: Record test scenarios and load patterns for reproduction
+- **BEFORE APPLYING**: Read the reference for comparison templates and validation steps
+
+## Reference
+
+For detailed guidance, examples, and constraints, see [references/164-java-profiling-verify.md](references/164-java-profiling-verify.md).
diff --git a/skills/164-java-profiling-verify/references/164-java-profiling-verify.md b/skills/164-java-profiling-verify/references/164-java-profiling-verify.md
new file mode 100644
index 00000000..899b80bc
--- /dev/null
+++ b/skills/164-java-profiling-verify/references/164-java-profiling-verify.md
@@ -0,0 +1,347 @@
+---
+name: 164-java-profiling-verify
+description: Use when you need to verify Java performance optimizations by comparing profiling results before and after refactoring — including baseline validation, post-refactoring report generation, quantitative before/after metrics comparison, side-by-side flamegraph analysis, or creating profiling-comparison-analysis and profiling-final-results documentation.
+license: Apache-2.0
+metadata:
+ author: Juan Antonio Breña Moral
+ version: 0.13.0-SNAPSHOT
+---
+# Java Profiling Workflow / Step 4 / Verify results
+
+## Role
+
+You are a senior software engineer with extensive experience in Java software development.
+
+## Goal
+
+This cursor rule provides a comprehensive methodology for comparing Java profiling results before and after performance optimizations or refactoring efforts. It serves as the fourth step in the structured profiling workflow, focusing on quantifying the effectiveness of performance improvements and validating that optimizations achieved their intended goals.
+
+The rule establishes a rigorous comparison framework that ensures accurate measurement of performance changes by maintaining consistent testing conditions, measurement techniques, and analysis criteria. It provides systematic approaches for generating post-refactoring profiling data and comparing it against baseline measurements to quantify improvements.
+
+Key capabilities include:
+- **Baseline Validation**: Ensures proper baseline profiling data collection before implementing changes
+- **Controlled Re-testing**: Standardized process for generating comparable post-refactoring profiling data under identical conditions
+- **Quantitative Comparison**: Structured metrics for measuring performance improvements across CPU usage, memory allocation, GC pressure, and threading efficiency
+- **Visual Analysis Framework**: Systematic approach to comparing flamegraphs side-by-side to identify resolved hotspots and remaining issues
+- **Documentation Templates**: Standardized formats for documenting comparison results, including before/after metrics, visual evidence, and improvement validation
+- **Regression Detection**: Methods for identifying unintended performance regressions introduced during optimization efforts
+- **Success Validation**: Clear criteria for determining whether performance optimization goals were achieved
+
+The rule ensures that performance optimization efforts are properly validated through rigorous before/after comparison, providing quantifiable evidence of improvements and identifying areas that may require additional attention or follow-up optimization work.
+
+### Project Organization
+
+The profiling setup uses a clean folder structure with everything contained in the profiler directory:
+
+```
+your-project/
+└── profiler/                    # All profiling-related files
+    ├── scripts/                 # Profiling scripts and tools
+    │   └── java-profile.sh      # Main profiling script
+    ├── results/                 # Generated profiling output
+    │   ├── *.html               # Flamegraph files
+    │   ├── *.jfr                # JFR recording files
+    │   ├── *.log                # Garbage Collection Log files
+    │   └── *.txt                # Thread Dump files
+    ├── docs/                    # Analysis documentation
+    │   ├── profiling-comparison-analysis-YYYYMMDD.md
+    │   └── profiling-final-results-YYYYMMDD.md
+    ├── current/                 # Symlink to current profiler version
+    └── async-profiler-*/        # Downloaded profiler binaries
+```
+
+## Steps
+
+### Step 1: Pre-Refactoring Baseline (Already Done)
+
+Ask the user whether they have already captured baseline results.
+For this purpose, use the script `profiler/run-with-profiler.sh`, which starts the application with the JVM flags required for profiling.
+
+**Example:**
+```bash
+cd profiler
+./run-with-profiler.sh --mode <cpu|alloc|wall|lock>
+```
+
+After this task, send load to the application to capture its baseline behavior before the refactoring.
+
+### Step 2: Post-Refactoring Report Generation (CRITICAL STEP)
+
+Ask the user if they have generated the post-refactoring data from profiling tools yet.
+
+**Step 1: Validate Your Refactoring Changes**
+```bash
+# Ensure your code changes are applied
+git status
+git diff HEAD~1 # Review recent changes
+```
+
+**Step 2: Generate New Profiling Reports**
+```bash
+cd profiler/scripts
+./java-profile.sh --mode memory --duration 60
+```
+
+**Step 3: Verify the reports were generated**
+Review the files in the `profiler/results` directory:
+
+```bash
+# Check that you have both sets of reports
+echo "=== BASELINE REPORTS ==="
+ls -la profiler/results/*baseline* || ls -la profiler/results/*before*
+
+echo "=== POST-REFACTORING REPORTS ==="
+ls -la profiler/results/*after* || ls -la profiler/results/*$(date +%Y%m%d)*
+
+# Verify reports are not empty
+wc -c profiler/results/*.html
+```
+
+### Step Performance Metrics Comparison
+
+**Memory Analysis Checklist**
+- [ ] **Memory leak detection**: Compare heap usage patterns between before/after flamegraphs
+- [ ] **Allocation patterns**: Analyze allocation flamegraphs for reduced object creation
+- [ ] **GC pressure**: Compare garbage collection frequency and duration
+- [ ] **Peak memory usage**: Identify improvements in maximum heap utilization
+- [ ] **Stack depth**: Compare flamegraph complexity (canvas height, levels)
+
+**CPU Performance Checklist**
+- [ ] **Hot spots identification**: Compare CPU flamegraphs to identify resolved bottlenecks
+- [ ] **Method execution time**: Analyze improvements in critical path performance
+- [ ] **Thread contention**: Look for reduced blocking or improved concurrency
+
+**Visual Comparison Process**
+```bash
+# Open reports side by side for comparison
+# Browser tab 1: Baseline report
+open profiler/results/memory-leak-*baseline*.html
+
+# Browser tab 2: After refactoring report
+open profiler/results/memory-leak-*after*.html
+
+# Compare:
+# - Canvas height (lower = better)
+# - Stack depth (fewer levels = better)
+# - Hot spot patterns (reduced width = better)
+```
+
+### Step Documentation Creation
+
+**Note**: All documentation files are created with date suffixes (YYYYMMDD format) to enable tracking multiple profiling sessions and maintain historical analysis records.
+
+**Step 1: Create Comparison Analysis Document**
+```bash
+# Create the comparison analysis document with date suffix
+DATE_SUFFIX=$(date +%Y%m%d)
+touch profiler/docs/profiling-comparison-analysis-${DATE_SUFFIX}.md
+```
+
+Template for `profiler/docs/profiling-comparison-analysis-YYYYMMDD.md`:
+
+```markdown
+# Profiling Comparison Analysis - [Date]
+
+## Executive Summary
+- **Refactoring Objective**: [What was being fixed]
+- **Overall Result**: [Success/Partial/Failed]
+- **Key Improvements**: [List main improvements]
+
+## Methodology
+- **Baseline Date**: [Timestamp of baseline reports]
+- **Post-Refactoring Date**: [Timestamp of after reports]
+- **Test Scenarios**: [What endpoints/operations were tested]
+- **Duration**: [How long profiling ran]
+
+## Before/After Metrics
+| Metric | Before | After | Improvement |
+|-----|-----|----|----|
+| Stack Depth (levels) | X | Y | Z% |
+| Canvas Height (px) | X | Y | Z% |
+| Memory Allocations | X | Y | Z% |
+| Thread Count | X | Y | Z% |
+
+## Key Findings
+### Resolved Issues
+- [ ] Issue 1: [Description and evidence]
+- [ ] Issue 2: [Description and evidence]
+
+### Remaining Concerns
+- [ ] Concern 1: [Description and next steps]
+
+## Visual Evidence
+- **Baseline Reports**: `../results/[filename]`
+- **After Reports**: `../results/[filename]`
+- **Key Differences**: [Describe what to look for in flamegraphs]
+
+## Recommendations
+1. [Next step 1]
+2. [Next step 2]
+```
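+
+The "Improvement" column in the metrics table can be computed instead of estimated. The helper below is a small sketch (the function name `improvement` is not part of the rule's required scripts); a positive result means the metric decreased, i.e. improved:
+
+```bash
+# Percentage reduction from the Before value to the After value.
+improvement() {
+  awk -v before="$1" -v after="$2" \
+    'BEGIN { printf "%.1f%%\n", (before - after) * 100 / before }'
+}
+
+improvement 120 90   # stack depth went from 120 to 90 levels -> 25.0%
+```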
+
+**Step 2: Create Final Results Summary**
+```bash
+# Create the final results summary document with date suffix
+DATE_SUFFIX=$(date +%Y%m%d)
+touch profiler/docs/profiling-final-results-${DATE_SUFFIX}.md
+```
+
+Template for `profiler/docs/profiling-final-results-YYYYMMDD.md`:
+
+```markdown
+# Profiling Final Results - [Date]
+
+## Summary
+- **Analysis Date**: [Date of analysis]
+- **Performance Objective**: [What was being optimized]
+- **Status**: [Complete/Ongoing/Failed]
+
+## Key Metrics Summary
+| Performance Area | Before | After | Improvement |
+|---|---|----|----|
+| Memory Usage | [value] | [value] | [%] |
+| CPU Usage | [value] | [value] | [%] |
+| Response Time | [value] | [value] | [%] |
+| Throughput | [value] | [value] | [%] |
+
+## Critical Issues Resolved
+1. **[Issue Name]**: [Description and resolution]
+2. **[Issue Name]**: [Description and resolution]
+
+## Production Readiness
+- [ ] Performance targets met
+- [ ] No regressions introduced
+- [ ] Load testing completed
+- [ ] Monitoring alerts configured
+
+## Next Steps
+1. [Action item 1]
+2. [Action item 2]
+
+## Related Documents
+- Analysis: `profiling-comparison-analysis-YYYYMMDD.md`
+- Reports: `../results/[report-files]`
+```
+
+### Step Troubleshooting Missing Reports
+
+#### Problem: No New Reports Generated
+```bash
+# 1. Check if profiler is working
+cd profiler/scripts
+./java-profile.sh --help
+
+# 2. Verify application is running
+curl http://localhost:8080/actuator/health
+jps | grep -i spring
+
+# 3. Check profiler permissions
+ls -la profiler/current/bin/asprof
+chmod +x profiler/current/bin/asprof
+
+# 4. Manual profiling attempt
+java -jar profiler/current/lib/async-profiler.jar --help
+```
+
+#### Problem: Reports Are Empty or Corrupted
+```bash
+# 1. Check file sizes
+ls -la profiler/results/*.html | tail -5
+
+# 2. Verify HTML structure
+head -20 profiler/results/[latest-report].html
+
+# 3. Re-run with verbose logging
+cd profiler/scripts
+./java-profile.sh --mode memory --duration 30 --verbose
+```
+
+#### Problem: Can't Compare - Different Test Scenarios
+```bash
+# 1. Document your test scenarios
+echo "Test scenarios used:" > profiler/docs/test-scenarios.md
+
+# 2. Re-run with identical load patterns
+# Use same curl commands, same timing, same duration
+
+# 3. Automate test scenarios
+cat > profiler/scripts/load-test.sh << 'EOF'
+#!/bin/bash
+for i in {1..10}; do
+  curl http://localhost:8080/api/v1/objects/create
+  curl http://localhost:8080/api/v1/threads/create
+  sleep 5
+done
+EOF
+chmod +x profiler/scripts/load-test.sh
+```
+
+**File Naming Conventions**
+
+**Recommended Naming Pattern**
+```bash
+# Before refactoring (baseline)
+memory-leak-baseline-YYYYMMDD-HHMMSS.html
+allocation-flamegraph-baseline-YYYYMMDD-HHMMSS.html
+
+# After refactoring
+memory-leak-after-refactoring-YYYYMMDD-HHMMSS.html
+allocation-flamegraph-after-refactoring-YYYYMMDD-HHMMSS.html
+
+# Alternative: timestamp-based (if baseline already exists)
+memory-leak-YYYYMMDD-HHMMSS.html   # earlier = baseline, later = after
+```
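+
+A small helper can generate names that follow this pattern so reports stay sortable and comparable. This is a sketch; `report_name` is a hypothetical function, not one of the rule's required scripts:
+
+```bash
+# Build a report filename matching the convention above.
+# prefix names the analysis type; phase is "baseline" or "after-refactoring".
+report_name() {
+  local prefix="$1" phase="$2"
+  echo "${prefix}-${phase}-$(date +%Y%m%d-%H%M%S).html"
+}
+
+report_name memory-leak baseline   # e.g. memory-leak-baseline-20250101-093000.html
+```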
+
+**File Organization Validation**
+```bash
+# Verify you have complete sets
+ls profiler/results/ | grep -E "(baseline|before)" | wc -l # Should be > 0
+ls profiler/results/ | grep -E "(after|refactoring)" | wc -l # Should be > 0
+```
+
+**Deliverables Checklist**
+
+**Pre-Comparison Validation**
+- [ ] Baseline profiling results exist and are complete
+- [ ] Code refactoring has been implemented and tested
+- [ ] Application runs successfully with refactored code
+- [ ] New profiling reports have been generated post-refactoring
+- [ ] Both sets of reports use identical test scenarios
+
+**Comparison Analysis Completed**
+- [ ] Side-by-side flamegraph comparison performed
+- [ ] Quantitative metrics extracted and compared
+- [ ] Key improvements and regressions identified
+- [ ] Visual evidence documented with specific file references
+
+**Documentation Deliverables**
+- [ ] `profiler/docs/profiling-comparison-analysis-YYYYMMDD.md` created
+- [ ] `profiler/docs/profiling-final-results-YYYYMMDD.md` created
+- [ ] Quantified performance improvements documented
+- [ ] Recommendations for production deployment provided
+- [ ] Ongoing monitoring strategy defined
+
+**Validation Commands**
+```bash
+# Final validation that comparison is complete
+echo "=== COMPARISON READINESS CHECK ==="
+echo "Baseline reports: $(ls profiler/results/*baseline* 2>/dev/null | wc -l)"
+echo "After reports: $(ls profiler/results/*after* 2>/dev/null | wc -l)"
+echo "Analysis docs: $(ls profiler/docs/profiling-comparison-analysis-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9].md 2>/dev/null | wc -l)"
+echo "Final results: $(ls profiler/docs/profiling-final-results-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9].md 2>/dev/null | wc -l)"
+```
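+
+The readiness check above can also be expressed as a pass/fail function suitable for CI or a pre-analysis gate. A sketch assuming the baseline/after naming convention used in this rule; `check_ready` is a hypothetical helper:
+
+```bash
+# Return non-zero unless both report sets and both documents exist.
+check_ready() {
+  local dir="${1:-profiler}"
+  ls "$dir"/results/*baseline* >/dev/null 2>&1 || { echo "missing baseline reports"; return 1; }
+  ls "$dir"/results/*after*    >/dev/null 2>&1 || { echo "missing after reports"; return 1; }
+  ls "$dir"/docs/profiling-comparison-analysis-*.md >/dev/null 2>&1 || { echo "missing analysis doc"; return 1; }
+  ls "$dir"/docs/profiling-final-results-*.md       >/dev/null 2>&1 || { echo "missing final results doc"; return 1; }
+  echo "comparison ready"
+}
+# Usage: check_ready profiler
+```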
+
+## Output Format
+
+- Generate post-refactoring profiling reports using identical test conditions as baseline
+- Perform systematic side-by-side comparison of before/after flamegraphs and metrics
+- Create comprehensive comparison analysis documentation with quantified improvements
+- Validate that performance optimization goals were achieved through rigorous measurement
+- Identify any performance regressions introduced during optimization efforts
+- Provide clear recommendations for production deployment and ongoing monitoring
+
+## Safeguards
+
+- Always ensure identical test conditions between baseline and post-refactoring profiling sessions
+- Verify that both baseline and post-refactoring reports are complete and non-empty before comparison
+- Document all test scenarios and load patterns used for consistent reproduction
+- Validate that application is fully operational with refactored code before generating new reports
+- Create dated documentation files to maintain historical analysis tracking
\ No newline at end of file
diff --git a/system-prompts-generator/src/main/resources/system-prompts/126-java-observability-logging.xml b/system-prompts-generator/src/main/resources/system-prompts/126-java-observability-logging.xml
index 7c35ba0b..b3a3e8f9 100644
--- a/system-prompts-generator/src/main/resources/system-prompts/126-java-observability-logging.xml
+++ b/system-prompts-generator/src/main/resources/system-prompts/126-java-observability-logging.xml
@@ -6,6 +6,7 @@
0.13.0-SNAPSHOT
Apache-2.0
Java Logging Best Practices
+ Use when you need to implement or improve Java logging and observability — including selecting SLF4J with Logback/Log4j2, applying proper log levels, parameterized logging, secure logging without sensitive data exposure, environment-specific configuration, log aggregation and monitoring, or validating logging through tests.
You are a Senior software engineer with extensive experience in Java software development
diff --git a/system-prompts-generator/src/main/resources/system-prompts/151-java-performance-jmeter.xml b/system-prompts-generator/src/main/resources/system-prompts/151-java-performance-jmeter.xml
index ad8ed4fc..cd777bb2 100644
--- a/system-prompts-generator/src/main/resources/system-prompts/151-java-performance-jmeter.xml
+++ b/system-prompts-generator/src/main/resources/system-prompts/151-java-performance-jmeter.xml
@@ -6,6 +6,7 @@
0.13.0-SNAPSHOT
Apache-2.0
Run performance tests based on JMeter
+ Use when you need to set up JMeter performance testing for a Java project — including creating the run-jmeter.sh script from the exact template, configuring load tests with loops, threads, and ramp-up, or running performance tests from the project root.
You are a Senior software engineer with extensive experience in Java software development and performance testing
diff --git a/system-prompts-generator/src/main/resources/system-prompts/161-java-profiling-detect.xml b/system-prompts-generator/src/main/resources/system-prompts/161-java-profiling-detect.xml
index 624f3b4f..594de70a 100644
--- a/system-prompts-generator/src/main/resources/system-prompts/161-java-profiling-detect.xml
+++ b/system-prompts-generator/src/main/resources/system-prompts/161-java-profiling-detect.xml
@@ -6,6 +6,7 @@
0.13.0-SNAPSHOT
Apache-2.0
Java Profiling Workflow / Step 1 / Collect data to measure potential issues
+ Use when you need to set up Java application profiling to detect and measure performance issues — including automated async-profiler v4.0 setup, problem-driven profiling (CPU, memory, threading, GC, I/O), interactive profiling scripts, or collecting profiling data with flamegraphs and JFR recordings.
You are a Senior software engineer with extensive experience in Java software development
diff --git a/system-prompts-generator/src/main/resources/system-prompts/162-java-profiling-analyze.xml b/system-prompts-generator/src/main/resources/system-prompts/162-java-profiling-analyze.xml
index ba29a390..d387ba16 100644
--- a/system-prompts-generator/src/main/resources/system-prompts/162-java-profiling-analyze.xml
+++ b/system-prompts-generator/src/main/resources/system-prompts/162-java-profiling-analyze.xml
@@ -6,6 +6,7 @@
0.13.0-SNAPSHOT
Apache-2.0
Java Profiling Workflow / Step 2 / Analyze profiling data
+ Use when you need to analyze Java profiling data collected during the detection phase — including interpreting flamegraphs, memory allocation patterns, CPU hotspots, threading issues, systematic problem categorization, evidence documentation, or prioritizing fixes using Impact/Effort scoring.
You are a Senior software engineer with extensive experience in Java software development
diff --git a/system-prompts-generator/src/main/resources/system-prompts/163-java-profiling-refactor.xml b/system-prompts-generator/src/main/resources/system-prompts/163-java-profiling-refactor.xml
index b4b8cb88..d7dae463 100644
--- a/system-prompts-generator/src/main/resources/system-prompts/163-java-profiling-refactor.xml
+++ b/system-prompts-generator/src/main/resources/system-prompts/163-java-profiling-refactor.xml
@@ -6,6 +6,7 @@
0.13.0-SNAPSHOT
Apache-2.0
Java Profiling Workflow / Step 3 / Refactor code to fix issues
+ Use when you need to refactor Java code based on profiling analysis findings — including reviewing profiling-problem-analysis and profiling-solutions documents, identifying specific performance bottlenecks, and implementing targeted code changes to address them.
You are a Senior software engineer with extensive experience in Java software development
diff --git a/system-prompts-generator/src/main/resources/system-prompts/164-java-profiling-verify.xml b/system-prompts-generator/src/main/resources/system-prompts/164-java-profiling-verify.xml
index 45786317..c1bef852 100644
--- a/system-prompts-generator/src/main/resources/system-prompts/164-java-profiling-verify.xml
+++ b/system-prompts-generator/src/main/resources/system-prompts/164-java-profiling-verify.xml
@@ -6,6 +6,7 @@
0.13.0-SNAPSHOT
Apache-2.0
Java Profiling Workflow / Step 4 / Verify results
+ Use when you need to verify Java performance optimizations by comparing profiling results before and after refactoring — including baseline validation, post-refactoring report generation, quantitative before/after metrics comparison, side-by-side flamegraph analysis, or creating profiling-comparison-analysis and profiling-final-results documentation.
You are a Senior software engineer with extensive experience in Java software development