
feature/fix course save 500 error #190

Closed
unam98 wants to merge 17 commits into dev from feature/fix-course-save-500-error

Conversation

Collaborator

@unam98 unam98 commented Apr 29, 2026

  • chore: update prod CodeDeploy app and deployment group name
  • fix: skip tests in prod CI to avoid DB dependency
  • fix: update S3 bucket name to new account bucket
  • chore: redeploy to trigger CodeDeploy
  • fix: update appspec.yml for Amazon Linux (ec2-user)
  • fix: rewrite deploy.sh for Amazon Linux single-profile deployment
  • chore: redeploy with fixed application.yml
  • fix: exclude plain.jar from deployment
  • trigger: redeploy with fixed secret

Summary by CodeRabbit

  • Chores
    • Optimized CI build process by skipping tests to reduce build time.
    • Updated deployment infrastructure targets and S3 bucket configuration.
    • Simplified deployment strategy from blue/green to direct jar deployment.
    • Updated deployment user and file permissions configuration.

@coderabbitai

coderabbitai Bot commented Apr 29, 2026

📝 Walkthrough

The deployment configuration and scripts are updated to shift from a blue/green deployment strategy to a direct jar deployment model. Changes include updating the S3 bucket and CodeDeploy targets in the GitHub Actions workflows, modifying appspec.yml to use ec2-user instead of ubuntu, and refactoring deploy.sh to discover the jar, terminate any existing jar process, and start the new jar directly with health checks.

Changes

Cohort / File(s) Summary
GitHub Actions Workflows
.github/workflows/prod-cd.yml, .github/workflows/prod-ci.yml
S3 bucket name updated in prod-cd.yml for CodeDeploy artifact uploads; prod-ci.yml modified to skip test execution in Gradle build step (-x test flag added).
AWS Deployment Configuration
appspec.yml
Deployment target user changed from ubuntu to ec2-user; installation directory updated to /home/ec2-user/app; file permissions and hook execution now run under ec2-user account.
Deployment Script
scripts/deploy.sh
Refactored from blue/green strategy to direct jar deployment: discovers jar from app directory (excluding plain variant), terminates existing jar processes globally, starts jar with nohup, performs health checks against localhost:8080/actuator/health, and unconditionally starts/enables Nginx.
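The health-check step described above (polling localhost:8080/actuator/health for "status":"UP") can be sketched as a small retry helper. This is a minimal illustration, not the PR's actual script; wait_until_up, the attempt count, and the probe command are hypothetical:

```shell
# Generic "retry until UP" helper; probes a health endpoint until it reports UP
# or the attempts run out. Names and counts here are assumptions, not PR code.
wait_until_up() {
  local attempts=$1; shift
  local i response
  for ((i = 1; i <= attempts; i++)); do
    response=$("$@")   # run the probe command, e.g. curl against the actuator
    if echo "$response" | grep -q '"status":"UP"'; then
      echo "health check passed (attempt $i)"
      return 0
    fi
    sleep 1
  done
  echo "health check failed after $attempts attempts" >&2
  return 1
}

# A deploy script might use it like this (assumes Spring actuator on port 8080):
# wait_until_up 10 curl -s http://localhost:8080/actuator/health || exit 1
```

Returning non-zero on exhaustion lets CodeDeploy mark the deployment failed instead of reporting a false green.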

Sequence Diagram

sequenceDiagram
    participant CodeDeploy
    participant EC2 as EC2 Instance
    participant JarProc as Jar Process
    participant Health as Health Endpoint
    participant Nginx

    CodeDeploy->>EC2: Trigger deploy.sh
    EC2->>EC2: Discover jar from /home/ec2-user/app
    EC2->>JarProc: Kill existing jar processes
    JarProc-->>EC2: Process terminated
    EC2->>JarProc: Start jar with nohup
    JarProc-->>EC2: Process started
    EC2->>Health: Check /actuator/health
    Health-->>EC2: Return status
    EC2->>EC2: Verify "status":"UP"
    EC2->>Nginx: Start/enable Nginx
    Nginx-->>EC2: Nginx running
    EC2-->>CodeDeploy: Deployment complete

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐰 Blue/green is gone, now direct we go,
Jars spring to life with a swift nohup flow,
Health checks confirm all is UP and bright,
Nginx stands ready, deployment done right! 🚀

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Title check — ⚠️ Warning. The PR title 'feature/fix course save 500 error' does not match the actual changes in the changeset, which focus on deployment infrastructure updates (S3 bucket, CodeDeploy configuration, Amazon Linux migration) and CI/CD modifications, not course save error fixes. Resolution: update the PR title to accurately reflect the main changes, such as 'Migrate deployment to Amazon Linux and update CI/CD infrastructure' or 'Update CodeDeploy configuration and deployment scripts for Amazon Linux'.

✅ Passed checks (4 passed)

  • Description check — ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.
  • Docstring coverage — ✅ Passed. No functions found in the changed files to evaluate; docstring coverage check skipped.
  • Linked issues check — ✅ Passed. Check skipped: no linked issues were found for this pull request.
  • Out-of-scope changes check — ✅ Passed. Check skipped: no linked issues were found for this pull request.



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 7

🧹 Nitpick comments (1)
.github/workflows/prod-cd.yml (1)

11-12: Reuse S3_BUCKET_NAME in create-deployment.

The upload step uses $S3_BUCKET_NAME, but Line 97 hardcodes the bucket again. Keeping both copies in sync by hand is brittle; use the env var in --s3-location too.

Suggested fix
-        --s3-location bucket=runnect-prod-bucket-604764467555-ap-northeast-2-an,bundleType=zip,key=runnect_prod_server.zip
+        --s3-location bucket=$S3_BUCKET_NAME,bundleType=zip,key=runnect_prod_server.zip

Also applies to: 93-97

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/prod-cd.yml around lines 11 - 12, Replace the hardcoded
bucket string used in the create-deployment step with the environment variable
S3_BUCKET_NAME so the upload uses the same value as the workflow env;
specifically, locate the create-deployment step that passes the --s3-location
flag (and any other occurrences around lines referenced) and change the literal
bucket name to use the workflow env variable reference ($S3_BUCKET_NAME) so both
the upload step and --s3-location share the single S3_BUCKET_NAME source.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/prod-ci.yml:
- Around line 43-44: The workflow currently runs "./gradlew build -x test" which
skips all tests; change the "Build with Gradle" step to run unit tests while
excluding DB-bound/integration tests by using your Gradle test split/profile
(for example run "./gradlew test" or "./gradlew check" and pass the
integration-test toggle you already use, e.g. "-DskipIntegrationTests=true" or
"--exclude-task integrationTest"), or invoke the unit-test task directly and
keep integration tests behind a separate "integration" job; update the step that
currently runs "./gradlew build -x test" to one of these commands so unit tests
remain enforced in CI.

In `@appspec.yml`:
- Around line 9-13: The permissions block currently sets permissions.object to /
which changes ownership of the entire root filesystem; change the
permissions.object value to the app deployment directory (e.g.,
/home/ec2-user/app or your deployment target) so pattern: "**" only applies
under the app directory; update the permissions entry (permissions, object,
pattern, owner, group) in appspec.yml to reference the correct deployment path
and verify owner/group remain ec2-user.

In `@scripts/deploy.sh`:
- Around line 41-43: Remove the "|| true" that masks failures for the nginx
startup and enable steps: update the lines that run "systemctl start nginx" and
"systemctl enable nginx" so they allow errors to propagate (or explicitly
validate via "systemctl is-active nginx" and exit non‑zero on failure); ensure
the deployment script checks the service state after invoking systemctl and
fails the deployment if nginx is not active rather than swallowing errors.
- Around line 12-39: Replace the blind sleep with an explicit wait for the old
process to exit (use CURRENT_PID and poll ps -p $CURRENT_PID until it no longer
exists, with a timeout), then start the new JVM with nohup and capture the child
PID via $! into a NEW_PID variable, persist or export that PID, and validate
NEW_PID is running and has bound port 8080 (use ps/ss/lsof against NEW_PID)
before proceeding to the actuator health loop; fail the deploy if the old PID
doesn't exit in time or the new PID isn't running/bound.
- Around line 8-13: The script currently uses pgrep -f '\.jar' (CURRENT_PID)
which will match any .jar on the host; narrow the match to only this app by
changing the pgrep pattern to include $APP_DIR (e.g. pgrep -f "$APP_DIR.*\.jar")
or, better, implement a PIDFILE for this service: write the JVM PID to $PIDFILE
when starting and read that file here instead of pgrep; update references to
CURRENT_PID, pgrep -f '\.jar', $APP_DIR and add PIDFILE handling so only the
intended process is killed.
- Around line 4-6: The current JAR selection is nondeterministic; update the
APP_DIR/JAR_PATH logic to be deterministic by listing JARs matching
"$APP_DIR/*.jar" while excluding "*plain*", then if none are found log a clear
error and exit non‑zero, and if multiple are found either fail with an error
asking for exactly one deployable JAR or pick the newest file explicitly (e.g.
by choosing the most recently modified file) and assign that path to JAR_PATH so
deployments are repeatable.
- Around line 19-21: The java launcher option -Duser.timezone=Asia/Seoul is
placed after the -jar token causing the JAR path ($JAR_PATH) to be
misinterpreted; update the nohup java invocation (the line with "nohup java
-jar" and the "-Duser.timezone=Asia/Seoul" token) so that
"-Duser.timezone=Asia/Seoul" appears before "-jar" and $JAR_PATH follows "-jar",
ensuring the process starts correctly and the subsequent health check loop can
succeed.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 9232c8e3-3e43-4fb0-9968-3633e2e5ee4f

📥 Commits

Reviewing files that changed from the base of the PR and between 4065d6d and d3052de.

📒 Files selected for processing (4)
  • .github/workflows/prod-cd.yml
  • .github/workflows/prod-ci.yml
  • appspec.yml
  • scripts/deploy.sh

Comment on lines +43 to +44
- name: Build with Gradle
  run: ./gradlew build -x test

⚠️ Potential issue | 🟠 Major

Keep some automated test coverage in this PR workflow.

Line 44 now publishes a green build from completely untested code, so regressions can merge unnoticed. A safer fix is to split DB-bound tests behind a separate task/profile and keep unit tests enabled here.


Comment thread appspec.yml
Comment on lines 9 to +13
 permissions:
   - object: /
     pattern: "**"
-    owner: ubuntu
-    group: ubuntu
+    owner: ec2-user
+    group: ec2-user

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

AWS CodeDeploy appspec.yml permissions object pattern recursive

💡 Result:

The AppSpec permissions section for EC2/On-Premises deployments takes a list of entries:

permissions:
  - object: /path/to/target
    pattern: "**"       # optional; "**" applies recursively to all matching items
    except: []          # optional list of exceptions
    owner: username
    group: groupname
    mode: 755           # octal mode, like chmod
    type:
      - directory       # or 'file'

pattern: "**" matches everything recursively under object; type: directory with pattern "**" recursively applies permissions to all directories under the object. Multiple entries can target different levels for fine control. Permissions are applied after files are copied, during the Install lifecycle event.

🌐 Web query:

CodeDeploy appspec.yml permissions pattern ** asterisk syntax

💡 Result:

The permissions section uses glob-style pattern matching: "*" matches files in the current directory, while "**" (or an omitted pattern) applies permissions recursively to ALL matching files/directories under the object. except takes a bracketed list of patterns to exclude, e.g. except: ["ReadMe.txt", "*.log"]. This applies to EC2/On-Premises deployments only (Amazon Linux, Ubuntu, RHEL).

🌐 Web query:

AWS CodeDeploy AppSpec permissions pattern empty string "" vs ** glob

💡 Result:

There is no documented meaning for pattern: "" (empty string). The documented forms are: pattern omitted, or pattern set to the special glob "**" — both apply permissions to all matching files/directories (depending on type). Do not rely on pattern: "" to mean "apply to everything"; omit the field or use "**".

🌐 Web query:

CodeDeploy AppSpec.yml permissions pattern "**" two asterisks documentation

💡 Result:

In EC2/On-Premises AppSpec files, pattern: "**" means "match recursively": permissions apply to all matching files/directories within the object, including nested subdirectories. Example from the AWS docs:

  - object: /tmp/my-app/my-folder-1
    pattern: "**"
    type:
      - file

🌐 Web query:

CodeDeploy permissions pattern field glob asterisk double ** syntax valid

💡 Result:

Yes — the double asterisk "**" is valid in the pattern field of the AppSpec permissions section for EC2/On-Premises deployments. Official AWS documentation states: "If not specified or specified with the special characters '**', the permissions are applied to all matching files or directories, depending on the 'type'." (This is specific to the AppSpec permissions pattern, not IAM policies, which use single * wildcards in ARNs.)

🌐 Web query:

AWS CodeDeploy security best practice permissions object scope root filesystem

💡 Result:

Best practices: use least-privilege IAM roles (e.g. AWSCodeDeployRole for the service role, minimal S3 read access for the instance profile), and in the AppSpec permissions section target specific deployment paths (e.g. /var/www/myapp) with precise owner/group/mode — never the entire root filesystem (/). Also secure the agent's deployment-root directory (/opt/codedeploy-agent/deployment-root) and limit retained revisions. No source recommends root-filesystem permissions; always scope narrowly to prevent over-privileging.

Scope the permissions block to the app directory.

In CodeDeploy, permissions.object is the instance path that ownership changes are applied to, and pattern: "**" matches recursively beneath that object. With object: /, this owner/group change affects the entire root filesystem rather than just /home/ec2-user/app. Scope it to the deployment directory per AWS best practices.

Suggested fix
 permissions:
-  - object: /
+  - object: /home/ec2-user/app
     pattern: "**"
     owner: ec2-user
     group: ec2-user

Comment thread scripts/deploy.sh
Comment on lines +4 to +6
APP_DIR=/home/ec2-user/app
JAR_PATH=$(ls $APP_DIR/*.jar | grep -v plain | head -1)
echo "> JAR 파일: $JAR_PATH"

⚠️ Potential issue | 🟠 Major

Don’t pick the first matching JAR.

Line 5 is nondeterministic and fails opaquely when no non-plain JAR exists. Because /home/ec2-user/app is a persistent directory, stale JARs can survive across deployments and be relaunched here. Fail unless exactly one deployable JAR is present, or select the newest file explicitly.

Suggested fix
 APP_DIR=/home/ec2-user/app
-JAR_PATH=$(ls $APP_DIR/*.jar | grep -v plain | head -1)
+mapfile -t JARS < <(find "$APP_DIR" -maxdepth 1 -type f -name '*.jar' ! -name '*plain*.jar' | sort)
+if [ "${#JARS[@]}" -ne 1 ]; then
+  echo "> deployable JAR count mismatch: ${#JARS[@]}"
+  exit 1
+fi
+JAR_PATH="${JARS[0]}"
 echo "> JAR 파일: $JAR_PATH"
🧰 Tools
🪛 Shellcheck (0.11.0)

[warning] 5-5: Don't use ls | grep. Use a glob or a for loop with a condition to allow non-alphanumeric filenames.

(SC2010)


Comment thread scripts/deploy.sh
Comment on lines +8 to +13
echo "> 실행 중인 애플리케이션 종료"
CURRENT_PID=$(pgrep -f '\.jar' || true)
if [ -n "$CURRENT_PID" ]; then
  echo "> 종료: $CURRENT_PID"
  kill -15 $CURRENT_PID
  sleep 5

⚠️ Potential issue | 🟠 Major

Narrow the process match before killing.

Line 9 matches every *.jar process on the host, not just this service. On a shared instance, that can terminate unrelated Java workloads. Match only processes started from $APP_DIR, or keep a pidfile for this app.

Suggested fix
-CURRENT_PID=$(pgrep -f '\.jar' || true)
+CURRENT_PID=$(pgrep -f "$APP_DIR/.*\.jar" || true)
🧰 Tools
🪛 Shellcheck (0.11.0)

[info] 12-12: Double quote to prevent globbing and word splitting.

(SC2086)


Comment thread scripts/deploy.sh
Comment on lines +12 to 39
   kill -15 $CURRENT_PID
   sleep 5
 else
-  echo "> 일치하는 Profile이 없습니다. Profile: $CURRENT_PROFILE"
-  echo "> set1을 할당합니다. IDLE_PROFILE: set1"
-  IDLE_PROFILE=set1
-  IDLE_PORT=8081
+  echo "> 실행 중인 애플리케이션 없음"
 fi

-echo "> application.jar 교체"
-IDLE_APPLICATION=$IDLE_PROFILE-Runnect-Spring-Boot-Server.jar
-IDLE_APPLICATION_PATH=$DEPLOY_PATH$IDLE_APPLICATION
-
-ln -Tfs $DEPLOY_PATH$JAR_NAME $IDLE_APPLICATION_PATH
-
-echo "> $IDLE_PROFILE 에서 구동중인 애플리케이션 pid 확인"
-IDLE_PID=$(pgrep -f $IDLE_APPLICATION)
-
-if [ -z $IDLE_PID ]
-then
-  echo "> 현재 구동중인 애플리케이션이 없으므로 종료하지 않습니다."
-else
-  echo "> kill -15 $IDLE_PID"
-  kill -15 $IDLE_PID
-  sleep 20
-fi
-
-echo "> $IDLE_PROFILE 배포"
-nohup java -jar -Duser.timezone=Asia/Seoul -Dspring.profiles.active=$IDLE_PROFILE $IDLE_APPLICATION_PATH >> /home/ubuntu/app/nohup.out 2>&1 &
-
-echo "> $IDLE_PROFILE 10초 후 Health check 시작"
-echo "> curl -s http://localhost:$IDLE_PORT/health "
-sleep 10
-
-for retry_count in {1..10}
-do
-  response=$(curl -s http://localhost:$IDLE_PORT/actuator/health)
-  up_count=$(echo $response | grep 'UP' | wc -l)
-
-  if [ $up_count -ge 1 ]
-  then # $up_count >= 1 ("UP" 문자열이 있는지 검증)
-    echo "> Health check 성공"
-    break
-  else
-    echo "> Health check의 응답을 알 수 없거나 혹은 status가 UP이 아닙니다."
-    echo "> Health check: ${response}"
-
-    if [ $retry_count -eq 10 ]
-    then
-      echo "> Health check 실패. "
-      echo "> Nginx에 연결하지 않고 배포를 종료합니다."
-      exit 1
-    fi
-
-    echo "> Health check 연결 실패. 재시도..."
-    sleep 10
-  fi
-done
+echo "> 애플리케이션 시작"
+nohup java -jar \
+  -Duser.timezone=Asia/Seoul \
+  $JAR_PATH \
+  >> /home/ec2-user/app/nohup.out 2>&1 &
+
+echo "> 15초 후 헬스체크 시작"
+sleep 15
+
+for i in {1..10}; do
+  RESPONSE=$(curl -s http://localhost:8080/actuator/health || true)
+  if echo "$RESPONSE" | grep -q '"status":"UP"'; then
+    echo "> 헬스체크 성공"
+    break
+  fi
+
+  echo "> 헬스체크 실패($i/10): $RESPONSE"
+  if [ $i -eq 10 ]; then
+    echo "> 배포 실패"
+    exit 1
+  fi
+
+  sleep 10
+done

⚠️ Potential issue | 🟠 Major

Wait for the old PID to exit and track the new one.

The fixed sleep 5 does not guarantee the previous process has released port 8080. If the old process is still serving, the health check can return UP even when the new JAR failed to bind, so CodeDeploy gets a false green. Wait for the old PID to disappear, capture the new PID from nohup, and validate that PID before accepting the endpoint check.

🧰 Tools
🪛 Shellcheck (0.11.0)

[info] 12-12: Double quote to prevent globbing and word splitting.

(SC2086)


[info] 21-21: Double quote to prevent globbing and word splitting.

(SC2086)


[info] 34-34: Double quote to prevent globbing and word splitting.

(SC2086)

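One way to implement the wait-and-track approach described above — wait for the old PID to disappear, then capture and validate the new one. This is a sketch under the script's assumed variable names (CURRENT_PID, JAR_PATH, APP_DIR), not the PR's code:

```shell
# Poll until the given PID is gone, up to a timeout in seconds.
wait_for_exit() {
  local pid=$1 timeout=${2:-30} i
  for ((i = 0; i < timeout; i++)); do
    ps -p "$pid" > /dev/null 2>&1 || return 0   # process gone: old port is free
    sleep 1
  done
  return 1   # still alive after the timeout
}

# Hypothetical usage in deploy.sh (option order already corrected per the review):
# kill -15 $CURRENT_PID
# wait_for_exit "$CURRENT_PID" 30 || { echo "> old process did not exit"; exit 1; }
# nohup java -Duser.timezone=Asia/Seoul -jar "$JAR_PATH" >> "$APP_DIR/nohup.out" 2>&1 &
# NEW_PID=$!
# sleep 1
# ps -p "$NEW_PID" > /dev/null || { echo "> new process died at startup"; exit 1; }
```

Validating NEW_PID (and, ideally, that it owns port 8080 via ss or lsof) ensures the subsequent actuator check is probing the new process, not a lingering old one.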

Comment thread scripts/deploy.sh
Comment on lines +19 to +21
nohup java -jar \
  -Duser.timezone=Asia/Seoul \
  $JAR_PATH \

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

cat -n scripts/deploy.sh | head -30

Repository: Runnect/Runnect-Spring-Boot-Server

Length of output: 934


Move -Duser.timezone before -jar.

The Java launcher syntax is java [options] -jar jarfile [args...]. With the current code, -Duser.timezone=Asia/Seoul is placed where the JAR path should go, so startup fails before the health check loop can ever succeed.

Suggested fix
-nohup java -jar \
-  -Duser.timezone=Asia/Seoul \
-  $JAR_PATH \
+nohup java \
+  -Duser.timezone=Asia/Seoul \
+  -jar "$JAR_PATH" \
   >> /home/ec2-user/app/nohup.out 2>&1 &
🧰 Tools
🪛 Shellcheck (0.11.0)

[info] 21-21: Double quote to prevent globbing and word splitting.

(SC2086)


Comment thread scripts/deploy.sh
Comment on lines +41 to +43
echo "> Nginx 시작"
sudo systemctl start nginx || true
sudo systemctl enable nginx || true

⚠️ Potential issue | 🟠 Major

Don’t mask nginx startup failures.

|| true makes the deployment report success even if nginx never starts or fails to enable, which can leave the app unreachable with a green deployment status. Let these commands fail, or check systemctl is-active nginx explicitly.

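A way to fail loudly instead of masking errors, as suggested above. Sketch only: ensure_active is a hypothetical helper, and the commented systemctl usage assumes a systemd host like Amazon Linux:

```shell
# Run a status probe for a named service; fail loudly instead of masking errors
# the way `|| true` does.
ensure_active() {
  local name=$1; shift
  if "$@"; then
    echo "> $name is active"
  else
    echo "> $name is NOT active" >&2
    return 1
  fi
}

# Hypothetical usage in deploy.sh:
# sudo systemctl start nginx
# sudo systemctl enable nginx
# ensure_active nginx systemctl is-active --quiet nginx || exit 1
```

Checking the service state after invoking systemctl means a deployment with an unreachable app fails visibly in CodeDeploy rather than reporting green.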

@unam98 unam98 closed this Apr 29, 2026