rts-log-analyzer

Analyze OSS-Fuzz incremental build and JVM RTS (Regression Test Selection) test logs. Use when user mentions RTS logs, test failures, build failures, log analysis, or when working with *.log files from JVM RTS tests. Extracts errors, failed test classes, and categorizes issues.


RTS Log Analyzer Skill

This skill analyzes OSS-Fuzz incremental build and RTS (Regression Test Selection) test logs to identify errors, failures, and issues.

CRITICAL REQUIREMENT: DETAILED ERROR REPORTING

DO NOT summarize errors as generic categories like "Test failures in baseline".

YOU MUST extract and report:

  1. Exact failed test method names (e.g., testUnreadableFileInput)
  2. Failed test class names (e.g., org.apache.zookeeper.ZKUtilTest)
  3. Exception types (e.g., java.lang.IllegalAccessException, UnsatisfiedLinkError)
  4. Error messages (e.g., expected: not <null>, cannot open shared object file)
  5. Source file and line numbers when available (e.g., ZKUtilTest.java:91)

When to Use This Skill

Activate this skill when:

  • User asks to analyze RTS test logs
  • User mentions "log analysis", "test failures", "build failures"
  • Working with *.log files from JVM RTS tests
  • User mentions summary.txt or failed projects
  • User wants to understand why a build or test failed

Efficient Analysis Tips

Tip 1: Analyze Bottom-Up (Reverse Order)

The most efficient approach is to start from the end of the log:

bash
# Step 1: Check final result first
grep "BUILD FAILURE\|BUILD SUCCESS" *.log

# Step 2: Check for docker start failures
grep "docker start command has failed" *.log

When docker start command has failed appears, the actual error is 20-30 lines ABOVE it. Always look up, not down.
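This tip can be demonstrated end to end. The sample log below is fabricated purely for illustration; the second `grep` filters the preceding context for real error lines:

```bash
# Fabricated sample log: the real error precedes the docker-start marker
printf '%s\n' \
  "[ERROR] Failed to execute goal org.apache.rat:apache-rat-plugin" \
  "cascading output" \
  "docker start command has failed" > sample.log

# Look upward: search the 30 lines preceding the marker for actual errors
grep -B30 "docker start command has failed" sample.log | grep "\[ERROR\]"
```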

Tip 2: Priority Order for Error Patterns

Search in this order (most common to least):

bash
# Priority 1: Test failures (most common)
grep "<<< FAILURE!\|<<< ERROR!" *.log

# Priority 2: Maven build errors
grep "\[ERROR\].*Failed to execute goal" *.log

# Priority 3: Clover instrumentation failures
grep "Clover has failed" *.log

# Priority 4: Missing dependencies/files
grep "cannot find\|cannot open\|No such file" *.log

Tip 3: Use Reactor Summary to Find Root Cause

In multi-module Maven builds, the first FAILURE module is the root cause. Others are cascading failures.

Apache Kylin - HBase Storage ... FAILURE  <-- ROOT CAUSE
Apache Kylin - Spark Engine .... SKIPPED  <-- Cascading
Apache Kylin - Hive Source ..... FAILURE  <-- Cascading

Focus your analysis on the first failed module only.
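Since Maven prints reactor modules in build order, `grep -m1` stops at the first FAILURE line, which is the root-cause module. A sketch (the reactor lines below are fabricated for illustration):

```bash
# Fabricated reactor summary
printf '%s\n' \
  "Apache Kylin - HBase Storage ... FAILURE" \
  "Apache Kylin - Spark Engine .... SKIPPED" \
  "Apache Kylin - Hive Source ..... FAILURE" > reactor.log

# -m1 stops at the first match, i.e. the root-cause module
grep -m1 "FAILURE" reactor.log
```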

Tip 4: Extract Test Failure Details Efficiently

bash
# Find failed test with context (error message follows the failure line)
grep -B2 "<<< ERROR!" *.log

# Extract exception type and message in one command
grep -A3 "<<< ERROR!" *.log | grep -E "Exception:|Error:|at org\."

Tip 5: Handle Large Log Files

When log files exceed ~10 MB (well beyond the Read tool's ~256 KB limit):

bash
# Read only the end of the file, where the final result usually appears
tail -c 200000 logfile.log
# (equivalently, call the Read tool with an offset near the end of the file)

# Or use grep to extract only relevant sections, with line numbers for targeted reads
grep -n "FAILURE\|ERROR\|Exception" logfile.log

Tip 6: Common Error Patterns Quick Reference

| Pattern | Root Cause | Quick Fix |
|---------|------------|-----------|
| `cannot find.*\.jar` | Build artifact missing | Check build order, run full build |
| `Clover has failed to instrument` | OpenClover instrumentation error | Fix upstream test failure first |
| `InaccessibleObjectException` | JDK module access restriction | Add `--add-opens` JVM args |
| `RatCheckException` | Apache license header missing | Add `-Drat.skip=true` |
| `Not supported surefire version` | jcgeks requires surefire >= 2.13 | Update surefire version |
| `请在配置文件config.properties` | jcgeks empty config | Skip RTS for this module |

Tip 7: Focus on First Error

When multiple errors appear:

  1. First error is usually the root cause
  2. Subsequent errors are often cascading failures
  3. Fix the first error, then re-run to see if others resolve
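`grep -n -m1` applies this rule directly: it reports only the first matching line, with its line number as the place to start reading (the sample log is fabricated for illustration):

```bash
# Fabricated sample log with two errors
printf '%s\n' \
  "[INFO] Compiling module core" \
  "[ERROR] compilation failure in FooBar.java" \
  "[ERROR] build aborted" > demo.log

# First error only, with its line number
grep -nE -m1 "\[ERROR\]|Exception" demo.log
```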

Analysis Workflow

Step 1: Check Summary

Look for summary.txt in the current or specified directory:

bash
cat summary.txt

The summary.txt contains lines like:

  • project-name: passed
  • project-name: failed
  • project-name: warning (projects that need investigation)

Report:

  • Total projects tested
  • Passed/Failed/Warning counts
  • List of failed project names
  • List of warning project names (projects with "warning" in summary.txt should also be investigated)
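Assuming the `project-name: status` line format described above, the counts can be derived with `grep -c` (the summary file below is fabricated):

```bash
# Fabricated summary.txt
printf '%s\n' \
  "zookeeper: passed" \
  "olingo: failed" \
  "kylin: warning" \
  "commons-io: passed" > summary.txt

passed=$(grep -c ": passed" summary.txt)
failed=$(grep -c ": failed" summary.txt)
warning=$(grep -c ": warning" summary.txt)
echo "total=$((passed + failed + warning)) passed=$passed failed=$failed warning=$warning"

# Projects needing investigation (failed + warning)
grep -v ": passed" summary.txt
```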

Step 2: Categorize Errors

Use these grep patterns to classify errors:

Build Failures:

bash
grep -l "BUILD FAILURE" *.log

Test Failures:

bash
grep -l "Tests run:.*Failures: [1-9]" *.log
grep -l "FAILURE! - in" *.log

OSS-Patch Errors:

bash
grep -oP "OSS-Patch \| ERROR \| .*" *.log | sort -u

Maven Lifecycle Errors:

bash
grep -l "Unknown lifecycle phase" *.log

Step 3: Extract DETAILED Errors (CRITICAL)

For EACH failed log file, you MUST extract these specific details:

3.1 Extract Failed Test Methods with Error Messages:

bash
# Get test failures with context showing the error
grep -A5 "<<< FAILURE!" <logfile>
grep -A5 "<<< ERROR!" <logfile>

3.2 Extract Exception Types and Messages:

bash
# Extract full exception information
grep -E "^(java\.|org\.)[a-zA-Z.]+Exception:|^[a-zA-Z.]+Error:" <logfile>
# -E is required below: without it, grep treats "|" literally and these never match
grep -E -B1 -A3 "AssertionFailedError|AssertionError" <logfile>
grep -E -B1 -A3 "IllegalAccessException|InaccessibleObjectException" <logfile>
grep -E -B1 -A3 "UnsatisfiedLinkError|NoClassDefFoundError" <logfile>

3.3 Extract Source Location:

bash
# Get file:line references
grep -oE "\([A-Za-z0-9_]+\.java:[0-9]+\)" <logfile> | sort -u
grep -E "at [a-zA-Z0-9_.]+\([A-Za-z0-9_]+\.java:[0-9]+\)" <logfile> | head -20

3.4 Extract Test Summary Line with Class:

bash
# Get the test class and failure counts
grep -E "FAILURE! - in [a-zA-Z0-9_.]+" <logfile>
grep -E "Tests run:.*Failures: [1-9]" <logfile>

3.5 For Missing Library Errors:

bash
grep -B2 -A3 "cannot open shared object|No such file or directory|libfreetype" <logfile>

3.6 For Maven/Plugin Errors:

bash
grep -B2 -A3 "Failed to execute goal" <logfile>
grep "Unknown lifecycle phase" <logfile>

3.7 For Docker Start Command Failures (CRITICAL - DO NOT JUST SAY "docker start failed"):

When you see docker start command has failed, you MUST look ABOVE that line to find the actual error:

bash
# Find the actual error before "docker start command has failed"
grep -B30 "docker start command has failed" <logfile> | grep -E "\[ERROR\]|Exception|Error:|FAILURE"

Common errors hidden inside docker start failures:

a) RAT License Check:

bash
grep -B20 "docker start command has failed" <logfile> | grep -i "RatCheckException\|unapproved license"

b) Surefire Version Error:

bash
grep -B20 "docker start command has failed" <logfile> | grep -i "Not supported surefire version"

c) No pom.xml Found:

bash
grep -B20 "docker start command has failed" <logfile> | grep -i "No pom.xml"

d) Plugin Container Exception:

bash
grep -B20 "docker start command has failed" <logfile> | grep -i "PluginContainerException\|realm ="

e) Maven Goal Execution Failure:

bash
grep -B20 "docker start command has failed" <logfile> | grep "Failed to execute goal"

3.8 For jcgeks RTS No Tests Selected (Empty Affected Classes):

When jcgeks outputs the Chinese message 请在配置文件config.properties中指定需要处理的jar包或目录列表 (meaning "Please specify the list of jar files or directories to process in config.properties"), this indicates no tests will be skipped by RTS because the plugin cannot determine affected classes.

bash
# Check for jcgeks empty affected classes message
grep -l "请在配置文件config.properties中指定需要处理的jar包或目录列表" <logfile>

# Verify by checking for empty Affected Classes output
grep -A2 "Affected Classes:" <logfile> | grep -E "^\s+\[\]$"

When this appears, you will see:

Agent Mode : JUNIT5EXTENSION -javaagent:/root/.m2/repository/org/jcgeks/org.jcgeks.core/1.0.0/org.jcgeks.core-1.0.0.jar=...
请在配置文件config.properties中指定需要处理的jar包或目录列表
Affected Classes:
  []
Non Affected Classes:
  []

Root Cause: jcgeks cannot find any jar files or directories to analyze for test selection. This means:

  • No tests will be skipped (all tests run)
  • RTS optimization is not working for this module
  • The project may be a parent POM or missing compiled classes

Suggested Fix:

  • Ensure the module has target/classes directory with compiled classes
  • Skip RTS for parent/aggregator modules that have no actual code
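A quick preflight check for this condition, assuming a standard Maven layout (the directory here is a temporary stand-in for a real module path):

```bash
module_dir=$(mktemp -d)   # stand-in for a real module directory

if [ -d "$module_dir/target/classes" ]; then
  echo "classes present - jcgeks has something to analyze"
else
  echo "no compiled classes - skip RTS for this module"
fi
```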

3.9 For OpenClover RTS Success Verification:

When using OpenClover for RTS, verify that test optimization is working correctly by checking these key messages:

bash
# Check if OpenClover test optimization is working
grep "Clover included.*test classes in this run" <logfile>
grep "Clover estimates having saved" <logfile>

Key success indicators:

| Message Pattern | Meaning |
|-----------------|---------|
| `Clover included N test classes in this run (total # test classes : M)` where N < M | ✅ RTS is working, some tests were skipped |
| `Clover estimates having saved around X second on this optimized test run` | ✅ RTS saved time by skipping tests |
| `Saving snapshot to: .../.clover/clover.snapshot` | ✅ Snapshot saved successfully |
| `Updating snapshot '...' against Clover database` | ✅ Snapshot updated for next run |

First run vs subsequent runs:

On the first run (no snapshot exists):

[INFO] Clover is not optimizing this test run as no test snapshot file was found
[INFO] Clover included 43 test classes in this run (total # test classes : 43)

→ All tests run because no snapshot exists yet (expected behavior)

On subsequent runs (snapshot exists):

[INFO] Clover estimates having saved around 1 second on this optimized test run. The full test run takes approx. 3 seconds
[INFO] Clover included 40 test classes in this run (total # test classes : 43)

→ RTS is working - only 40 of 43 test classes run based on code changes

How to verify OpenClover RTS is working:

bash
# Extract test class counts across all runs (one line per Surefire execution)
grep "Clover included.*test classes" <logfile>

# Check whether the test count decreased after the first run. With the
# "[INFO] Clover included N test classes in this run (total # test classes : M)"
# format, the included count is field 4 and the total is the last field.
# First run: N = total (no optimization); subsequent runs: N < total (active).
grep "Clover included.*test classes" <logfile> |
awk '{total=$NF; gsub(/\)/, "", total); if ($4 + 0 < total + 0) print "optimized run:", $0}'

When OpenClover RTS is NOT working:

  • All runs show Clover included N test classes (total: N) with N = total
  • No "estimates having saved" messages after first run
  • Snapshot file not being created/updated

3.10 Distinguishing RTS Behavior: "No Skip Possible" vs "Skip Success"

When analyzing RTS logs, you may encounter two cases where the test count doesn't decrease. Use these patterns to distinguish them:


Case 1: Tests Cannot Be Skipped (Too Few Tests)

Example: htmlunit - only 1 test class exists

Characteristics:

  • Baseline tests = RTS tests (same count)
  • Clover was unable to save any time message appears
  • Only 1 test class exists in total

Verification commands:

bash
# 1. Compare baseline vs RTS test counts
grep "Baseline tests run:" <log>
grep "Tests run (cpv" <log>

# 2. Check for Clover optimization failure message
grep "Clover was unable to save any time" <log>

# 3. Check total test class count
grep "total # test classes" <log>

Log example:

Baseline tests run: 24
Tests run (cpv_0): 24        # Same = no skip
Clover was unable to save any time on this optimized test run.
Clover included 1 test class in this run (total # test classes : 1)
WARNING: RTS did not reduce test count - same as baseline!

Interpretation: RTS is working correctly, but there's only 1 test class, so nothing can be skipped. This is expected behavior, not an error.


Case 2: RTS Successfully Skipped Tests

Example: olingo - 51 test classes, all skipped

Characteristics:

  • Baseline tests > RTS tests (count decreased)
  • No tests to run message appears
  • Multiple test classes exist but most/all are skipped

Verification commands:

bash
# 1. Compare baseline vs RTS test counts
grep "Baseline tests run:" <log>
grep "Tests run (cpv" <log>

# 2. Check for "no tests" message
grep "No tests to run" <log>

# 3. Check Clover included test class counts
grep "Clover included.*test class" <log>

Log example:

Baseline tests run: 288
Tests run (cpv_0): 0         # Decreased = skip occurred!
No tests to run.
Clover included 0 test classes in this run (total # test classes : 27)
WARNING: RTS selected 0 tests - all tests were skipped!

Interpretation: RTS determined that the patch doesn't affect any tests, so all were skipped. This is a successful optimization.


Quick Differentiation Checklist:

| Check Item | Case 1 (Cannot Skip) | Case 2 (Skip Success) |
|------------|----------------------|-----------------------|
| `Baseline tests run` vs `Tests run (cpv)` | Same | Decreased |
| `Clover was unable to save any time` | Present | Absent |
| `No tests to run` | Absent | Present |
| `total # test classes` | 1 | Multiple |
| `WARNING: RTS did not reduce test count` | Present | Absent |
| `WARNING: RTS selected 0 tests` | Absent | Present (normal) |

One-liner command for quick judgment:

bash
# Compare test counts (key indicator!)
grep -E "Baseline tests run:|Tests run \(cpv" <log> | tail -4

# Interpretation:
# - Same numbers → Case 1 (too few tests originally)
# - Decreased numbers → Case 2 (RTS skip successful)
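The comparison can be scripted. This sketch assumes the exact `Baseline tests run:` / `Tests run (cpv_0):` line formats shown above, uses a fabricated sample log, and does not account for Case 3 (instrumentation failure):

```bash
# Fabricated RTS log
printf '%s\n' \
  "Baseline tests run: 288" \
  "Tests run (cpv_0): 0" \
  "No tests to run." > rts.log

baseline=$(grep "Baseline tests run:" rts.log | grep -o "[0-9]*$")
rts=$(grep "Tests run (cpv_0):" rts.log | grep -o "[0-9]*$")

if [ "$rts" -lt "$baseline" ]; then
  echo "Case 2: RTS skipped tests ($rts of $baseline run)"
else
  echo "Case 1: no skip possible"
fi
```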

Case 3: OpenClover Instrumentation Failure (False Skip)

Example: olingo - Clover claims to select tests but none run

Characteristics:

  • Baseline tests > 0, but RTS tests = 0
  • No Clover instrumentation done on source files message appears
  • Clover reports "included N test classes" but actual Tests run: 0

Verification commands:

bash
# 1. Check for instrumentation failure
grep "No Clover instrumentation done" <log>

# 2. Check discrepancy between Clover selection and actual execution
grep "Clover included.*test class" <log>
grep "Tests run: 0" <log>

# 3. Verify test file pattern matching
grep "no matching sources files found" <log>

Log example:

Clover all over. Instrumented 1 file (1 package).
No Clover instrumentation done on source files in: [.../src/test/java] as no matching sources files found
Clover included 1 test class in this run (total # test classes : 25)
...
Tests run: 0, Failures: 0, Errors: 0, Skipped: 0

Interpretation: This is NOT a successful RTS skip! OpenClover failed to instrument test files, so tests cannot run at all. This is a configuration error.

Root Cause: Test source files don't match Clover's expected patterns, or the test directory structure is non-standard.

Suggested Fix:

  • Check if test files follow standard naming: *Test.java, Test*.java, *TestCase.java
  • Verify test source directory is correctly configured in pom.xml
  • Add explicit test file patterns to Clover plugin configuration
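To spot test sources that the standard naming patterns would miss, a `find` with negated name filters works. The tree below is a temporary stand-in; `FooIT.java` is a hypothetical file matching none of the patterns:

```bash
tmp=$(mktemp -d)                       # stand-in for a project checkout
mkdir -p "$tmp/src/test/java"
touch "$tmp/src/test/java/FooTest.java" "$tmp/src/test/java/FooIT.java"

# Files matching none of the standard patterns are invisible to the test runner
find "$tmp/src/test/java" -name "*.java" \
  ! -name "*Test.java" ! -name "Test*.java" ! -name "*TestCase.java" \
  -exec basename {} \;
```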

Complete Differentiation Table (3 Cases):

| Check Item | Case 1 (Cannot Skip) | Case 2 (Skip Success) | Case 3 (Instrumentation Fail) |
|------------|----------------------|-----------------------|-------------------------------|
| Baseline tests | N | M (M > 0) | M (M > 0) |
| RTS tests | N (same) | 0 or < M | 0 |
| `No Clover instrumentation done` | Absent | Absent | Present |
| `Clover was unable to save any time` | Present | Absent | Absent |
| `No tests to run` | Absent | Present | May be absent |
| `Clover included N test classes` | N = total | N < total | N > 0 but tests run = 0 |
| Actual issue | Normal (few tests) | Normal (optimization) | Bug/config error |

Step 4: Identify Specific Error Patterns

| Pattern to Search | Error Type | Detailed Cause |
|-------------------|------------|----------------|
| `apache-rat-plugin.*unapproved license` | RAT Check | Files missing license headers |
| `RatCheckException.*Too many files` | RAT Check | Multiple files missing Apache license |
| `Unknown lifecycle phase "**/*.java"` | Lifecycle Error | EXCLUDE_TESTS uses glob syntax incorrectly |
| `libfreetype.so.6: cannot open` | Missing Library | Container missing libfreetype6 package |
| `InaccessibleObjectException.*module java.base` | JDK Module | Java module system blocking reflection |
| `IllegalAccessException.*cannot access` | JDK Access | Reflection access denied in newer JDK |
| `ObjenesisException.*InvocationTargetException` | Mocking Library | PowerMock/Objenesis JDK incompatibility |
| `AssertionFailedError: expected:` | Test Assertion | Test assertion failure with specific value |
| `UnsatisfiedLinkError` | Native Library | Missing native .so/.dll library |
| `NoClassDefFoundError` | Classpath | Missing class at runtime |
| `Not supported surefire version` | Surefire Version | jcgeks requires surefire >= 2.13 |
| `No pom.xml files found` | Project Structure | Project path incorrect or pom.xml missing |
| `PluginContainerException` | Plugin Error | Maven plugin initialization failed |
| `realm =.*plugin>` | Plugin Classloader | Plugin classloader conflict |
| `docker start command has failed` | Docker Error | MUST look above for actual cause |
| `请在配置文件config.properties中指定需要处理的jar包或目录列表` | jcgeks Empty Config | No jars/directories configured; RTS will not skip tests |
| `Affected Classes:.*\[\]` + `Non Affected Classes:.*\[\]` | jcgeks No Analysis | Empty affected classes; no test selection possible |

Step 5: Output Format (MUST BE DETAILED)

IMPORTANT: Each project MUST include specific error details, not generic summaries.

For each failed project, report in this format:

## [Project Name]

**Error Type:** <specific classification>

**Failed Tests:**
| Test Class | Test Method | Exception | Error Message |
|------------|-------------|-----------|---------------|
| org.example.FooTest | testBar | AssertionFailedError | expected: <5> but was: <3> |
| org.example.BazTest | testQux | IllegalAccessException | cannot access member of class java.util.EnumSet |

**Stack Trace Snippets:**

java.lang.IllegalAccessException: class X cannot access a member of class Y at java.base/jdk.internal.reflect.Reflection.newIllegalAccessException(...)


**Root Cause:** <specific identified cause with actionable fix suggestion>

Example Good Output:

## atlanta-jackson-databind-delta-01

**Error Type:** JDK Module Access Restrictions

**Failed Tests:**
| Test Class | Test Method | Exception | Error Message |
|------------|-------------|-----------|---------------|
| ClassUtilTest | testFindEnumType | IllegalAccessException | cannot access member of class java.util.EnumSet with modifiers "final transient" |
| StackTraceElementTest | testCustomStackTraceDeser | InvalidDefinitionException | Cannot construct instance of StackTraceElement |
| ClassNameIdResolverTest | initializationError | ObjenesisException | InvocationTargetException |
| ArrayDelegatorCreatorForCollectionTest | testUnmodifiable | InaccessibleObjectException | module java.base does not "opens java.util" to unnamed module |
| TestTypeFactoryWithClassLoader | initializationError | ObjenesisException | InvocationTargetException |

**Root Cause:** Tests use reflection to access JDK internal classes. Requires `--add-opens java.base/java.util=ALL-UNNAMED` JVM argument.

**Suggested Fix:** Add JVM args to maven-surefire-plugin or skip these specific tests.

Example BAD Output (DO NOT DO THIS):

## atlanta-jackson-databind-delta-01

**Error Type:** Test failures in baseline  <-- TOO VAGUE

**Root Cause:** Flaky tests need to be skipped  <-- NOT ACTIONABLE
## atlanta-olingo-delta-01

**Error Type:** docker start command has failed  <-- TOO VAGUE, USELESS

**Root Cause:** Docker error  <-- COMPLETELY UNHELPFUL

Example GOOD Output for Docker Start Failures:

## atlanta-olingo-delta-01

**Error Type:** RAT License Check Failure (during docker start)

**Error Details:**
- Plugin: `org.apache.rat:apache-rat-plugin`
- Exception: `RatCheckException`
- Message: `Too many files with unapproved license: 1`
- Report location: `/built-src/src/cp-java-olingo-src/target/rat.txt`

**Root Cause:** Source files are missing Apache license headers. The RAT (Release Audit Tool) plugin checks for license compliance.

**Suggested Fix:** Add `-Drat.skip=true` to MVN_SKIP_ARGS in build.sh or test.sh
## atlanta-fuzzy-delta-01

**Error Type:** Unsupported Surefire Version (during docker start)

**Error Details:**
- Plugin: `org.jcgeks:jcgeks-maven-plugin:1.0.0:select`
- Project: `fuzzywuzzy-build`
- Message: `Not supported surefire version; version has to be 2.13 or higher`

**Root Cause:** The jcgeks RTS plugin requires maven-surefire-plugin version >= 2.13

**Suggested Fix:** Update surefire plugin version in pom.xml or skip this project
## atlanta-snappy-java-delta-01

**Error Type:** Missing pom.xml (during docker start)

**Error Details:**
- Message: `No pom.xml files found in project`
- Project path: `/built-src/snappy-java`

**Root Cause:** The RTS initialization cannot find pom.xml. Project structure may be incorrect.

**Suggested Fix:** Check project path configuration or ensure pom.xml exists at expected location

Summary Statistics

At the end, provide:

  • Count by specific error type (not generic categories)
  • Most common specific exceptions
  • Projects grouped by root cause (with details)

Example Usage

User: "Analyze the RTS logs in this directory"
Action: Run through all steps above and provide a DETAILED categorized report with specific error messages.

User: "Why did atlanta-jackson-databind fail?"
Action: Focus on that specific log file; extract ALL failed test methods, exception types, and error messages.

User: "Which test classes are failing?"
Action: Extract all failed test class names AND their specific test methods across all logs.

Step 6: Generate CSV Report

After analysis, ALWAYS generate a CSV report file named rts_analysis_results.csv in the analysis directory.

CSV Format

The CSV must include these columns:

project_name,status,error_category,error_message,failed_tests,suggested_fix

Column Definitions

| Column | Description | Example Values |
|--------|-------------|----------------|
| project_name | Name of the project from the log file | `atlanta-jackson-databind-delta-01` |
| status | Test result status | `passed`, `failed`, `warning` |
| error_category | Specific error classification (not generic) | See Error Categories below |
| error_message | Detailed error message (escaped for CSV) | `IllegalAccessException: cannot access member...` |
| failed_tests | Semicolon-separated list of failed test classes/methods | `ClassUtilTest.testFindEnumType;ZKUtilTest.testBar` |
| suggested_fix | Recommended fix action | `Add --add-opens java.base/java.util=ALL-UNNAMED` |

Error Categories

Create descriptive error categories based on the actual error found. Categories should be:

  • Specific and descriptive - Describe the actual root cause
  • Language-agnostic when possible - Work for both C and Java projects
  • Consistent - Use similar naming for similar errors across projects

Examples of good categories:

  • BUILD_FAILURE - General build failure
  • TEST_FAILURE - Test execution failure
  • COMPILATION_ERROR - Source code compilation error
  • MISSING_DEPENDENCY - Missing library or dependency
  • MISSING_FILE - Required file not found
  • TIMEOUT - Build or test timeout
  • CONFIGURATION_ERROR - Build/test configuration issue
  • SEGFAULT - Segmentation fault (C/C++)
  • MEMORY_ERROR - Memory-related errors (C/C++)
  • LINKER_ERROR - Linking errors (C/C++)
  • MODULE_ACCESS_ERROR - JDK module access issues (Java)
  • LICENSE_CHECK_FAILURE - License validation failure (Java)
  • PLUGIN_ERROR - Build plugin errors (Java)
  • JCGEKS_EMPTY_CONFIG - jcgeks RTS plugin found no jar/directories to analyze (Java)
  • OPENCLOVER_RTS_SUCCESS - OpenClover RTS working correctly, tests skipped (Java)
  • OPENCLOVER_RTS_NO_SNAPSHOT - OpenClover first run, no snapshot yet (Java)
  • OPENCLOVER_RTS_NO_OPTIMIZATION - OpenClover not skipping tests despite snapshot (Java)

DO NOT use overly generic categories like:

  • ERROR - Too vague
  • FAILURE - Too vague
  • UNKNOWN - Only as last resort

CSV Generation Example

python
# Use Python to generate CSV with proper escaping
import csv

results = [
    {
        "project_name": "atlanta-jackson-databind-delta-01",
        "status": "failed",
        "error_category": "MODULE_ACCESS_ERROR",
        "error_message": "IllegalAccessException: cannot access member of class java.util.EnumSet",
        "failed_tests": "ClassUtilTest.testFindEnumType;StackTraceElementTest.testCustomStackTraceDeser",
        "suggested_fix": "Add --add-opens java.base/java.util=ALL-UNNAMED to surefire argLine"
    },
    {
        "project_name": "atlanta-json-c-delta-01",
        "status": "failed",
        "error_category": "SEGFAULT",
        "error_message": "Segmentation fault in json_object_get_string",
        "failed_tests": "",
        "suggested_fix": "Check null pointer handling"
    },
    {
        "project_name": "atlanta-zookeeper-delta-01",
        "status": "passed",
        "error_category": "",
        "error_message": "",
        "failed_tests": "",
        "suggested_fix": ""
    }
]

with open('rts_analysis_results.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.DictWriter(f, fieldnames=["project_name", "status", "error_category", "error_message", "failed_tests", "suggested_fix"])
    writer.writeheader()
    writer.writerows(results)

CSV Output Rules

  1. ALWAYS escape special characters - Use proper CSV escaping for quotes and commas in error messages
  2. Use semicolon (;) separator for multiple failed tests within the failed_tests column
  3. Empty values for passed projects - Leave error_category, error_message, failed_tests, suggested_fix empty for passed projects
  4. Include ALL projects - Both passed and failed projects should be in the CSV
  5. Sort by status - Failed projects first, then warnings, then passed

Sample CSV Output

csv
project_name,status,error_category,error_message,failed_tests,suggested_fix
atlanta-jackson-databind-delta-01,failed,MODULE_ACCESS_ERROR,"IllegalAccessException: cannot access member of class java.util.EnumSet",ClassUtilTest.testFindEnumType;StackTraceElementTest.testCustomStackTraceDeser,Add --add-opens java.base/java.util=ALL-UNNAMED
atlanta-json-c-delta-01,failed,SEGFAULT,"Segmentation fault in json_object_get_string","",Check null pointer handling
atlanta-libxml2-delta-01,failed,MISSING_DEPENDENCY,"cannot find -lz: No such file or directory","",Install zlib-dev package
atlanta-zookeeper-delta-01,passed,,"",,
atlanta-commons-io-delta-01,passed,,"",,

When to Generate CSV

  • Generate CSV at the END of analysis, after all logs have been processed
  • Save to the same directory as the log files
  • Report the CSV file path to the user
  • If a previous rts_analysis_results.csv exists, overwrite it with fresh results
