Serhat Ozdursun
Guest
I believe the best way to learn is by doing. But in a busy job, you don’t always get to try new tech on company time. That can’t be an excuse—our field moves fast, and AI is one of the biggest shifts right now.
My day-to-day didn’t include AI work, so I turned to my personal website repo as a safe sandbox: serhatozdursun/resume. It’s the perfect “lab mouse.” I wanted to explore AI-assisted quality gates: the things we already rely on in QA—static analysis, unit tests, code reviews—augmented with an assistant that understands the diff and helps us ship better.
This post walks through the pipeline I built: SonarCloud + Jest coverage + Danger + OpenAI, wired together with Secrets so I can tune behavior without touching YAML.
What you get
- SonarCloud for static analysis + PR decoration + coverage dashboards.
- A coverage gate on changed code (e.g., 80% on lines you touched).
- AI unit test suggestions generated directly from the diff (copy-paste-ready Jest/TS).
- AI code review that posts a concise diff snippet (old → new) with an explanation.
  (No fragile line numbers; we still add inline notes when mapping is reliable.)
- Secret-driven knobs so you can change thresholds/toggles from CI Secrets—no PRs to change policy.
Architecture (bird’s eye)
- Tests & coverage: Jest runs and writes coverage/lcov.info.
- SonarCloud: runs as usual for code smells and coverage on new code.
- Danger + OpenAI (same job):
  - Reads the PR diff.
  - Computes new-lines coverage per changed file from LCOV.
  - If below MIN_FILE_COVERAGE, fails the PR and posts AI test ideas.
  - Regardless of coverage, asks OpenAI to:
    - propose copy-pasteable Jest tests, and
    - do a short code review with actionable notes and a tiny diff snippet.
- All behavior is toggled by environment variables you define as Secrets.
I keep one file so everything lives in one place. The workflow triggers on PRs and on pushes to main, but the AI step is gated so it only runs on PRs.
Code:
name: Code Quality (Tests + SonarCloud + AI Review)

on:
  pull_request:
    types: [opened, synchronize, reopened, ready_for_review]
  push:
    branches: [main]

permissions:
  contents: read
  pull-requests: write
  statuses: write

# Optional hardening: avoid duplicate runs while a PR is updated
concurrency:
  group: qg-${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  test-and-analyze:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with: { fetch-depth: 0 }

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: '22'
          cache: 'yarn'

      - name: Install deps
        run: yarn install --frozen-lockfile

      - name: Run unit tests (with coverage)
        run: yarn test --coverage

      - name: SonarCloud Scan
        uses: SonarSource/sonarcloud-github-action@v5
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}

      - name: Upload coverage report (PRs only)
        if: ${{ github.event_name == 'pull_request' }}
        uses: actions/upload-artifact@v4
        with:
          name: coverage-html
          path: coverage/lcov-report/

      - name: Install Danger deps
        if: ${{ github.event_name == 'pull_request' }}
        run: yarn add -D danger lcov-parse openai

      - name: Run AI coverage-aware review
        if: ${{ github.event_name == 'pull_request' }}
        env:
          # Required
          DANGER_GITHUB_API_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          # Optional knobs (set via Secrets to avoid editing YAML)
          # MIN_FILE_COVERAGE: ${{ secrets.MIN_FILE_COVERAGE }}                      # default 80
          # MAX_FILES_TO_ANALYZE: ${{ secrets.MAX_FILES_TO_ANALYZE }}                # default 3
          # OPENAI_MODEL: ${{ secrets.OPENAI_MODEL }}                                # default gpt-4o-mini
          # DANGER_ALWAYS_NEW_COMMENT: ${{ secrets.DANGER_ALWAYS_NEW_COMMENT }}      # default off
          # AI_REVIEW_ENABLED: ${{ secrets.AI_REVIEW_ENABLED }}                      # default 1
          # AI_REVIEW_INLINE: ${{ secrets.AI_REVIEW_INLINE }}                        # default 1
          # AI_REVIEW_BLOCK_ON_FINDINGS: ${{ secrets.AI_REVIEW_BLOCK_ON_FINDINGS }}  # default off
          # AI_REVIEW_MAX_CHARS: ${{ secrets.AI_REVIEW_MAX_CHARS }}                  # default 100000
          # AI_REVIEW_MAX_FINDINGS: ${{ secrets.AI_REVIEW_MAX_FINDINGS }}            # default 8
        run: npx danger ci --dangerfile dangerfile.ts
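Inside dangerfile.ts, those optional Secrets arrive as plain environment variables. As a small sketch of how they can be read with the documented defaults (the readKnob helper is mine, not from the repo):
Code:
// dangerfile.ts (excerpt, sketch): reading the Secret-driven knobs with their defaults.
// Assumption: names and defaults mirror the commented env block in the workflow above.
const readKnob = (name: string, fallback: string): string =>
  process.env[name] ?? fallback;

const MIN_FILE_COVERAGE = Number(readKnob("MIN_FILE_COVERAGE", "80"));
const MAX_FILES_TO_ANALYZE = Number(readKnob("MAX_FILES_TO_ANALYZE", "3"));
const OPENAI_MODEL = readKnob("OPENAI_MODEL", "gpt-4o-mini");
const AI_REVIEW_ENABLED = readKnob("AI_REVIEW_ENABLED", "1") === "1";
const AI_REVIEW_INLINE = readKnob("AI_REVIEW_INLINE", "1") === "1";
const AI_REVIEW_BLOCK_ON_FINDINGS = readKnob("AI_REVIEW_BLOCK_ON_FINDINGS", "0") === "1";
const AI_REVIEW_MAX_CHARS = Number(readKnob("AI_REVIEW_MAX_CHARS", "100000"));
Because everything falls back to a default, an empty Secret simply leaves the pipeline in its out-of-the-box behavior.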
Why this pattern?
- Single file keeps maintenance low.
- The AI steps run only on PRs (gated with if: github.event_name == 'pull_request'), so merging to main doesn't re-comment PRs.
- You still get tests + Sonar on pushes to main.
The dangerfile.ts — what it actually does
1) Coverage gate on changed code
- We parse coverage/lcov.info using lcov-parse.
- We fetch the PR diff and compute coverage on new/modified instrumented lines.
- If that effective coverage is below MIN_FILE_COVERAGE (default 80), we:
  - fail the PR,
  - post a table showing per-file new-lines coverage and whole-file coverage, and
  - ask the AI for test ideas (see next).
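Here is a rough sketch of that gate, assuming lcov-parse's callback API and Danger's structuredDiffForFile for the added-line numbers; the helper names and file filtering are illustrative, not the repo's actual code.
Code:
// Coverage-gate sketch: compare new-lines coverage (from LCOV) against MIN_FILE_COVERAGE.
import { danger, fail, markdown, schedule } from "danger";
import parse from "lcov-parse";

type LcovRecord = { file: string; lines: { details: { line: number; hit: number }[] } };

// lcov-parse is callback-based; wrap it in a Promise.
const loadLcov = (path: string): Promise<LcovRecord[]> =>
  new Promise((resolve, reject) =>
    parse(path, (err, data) => (err ? reject(err) : resolve(data as LcovRecord[])))
  );

// Line numbers added by the PR for one file, taken from the structured diff.
async function addedLines(file: string): Promise<Set<number>> {
  const diff = await danger.git.structuredDiffForFile(file);
  const lines = new Set<number>();
  for (const chunk of diff?.chunks ?? [])
    for (const change of chunk.changes)
      if (change.type === "add") lines.add(change.ln);
  return lines;
}

async function checkCoverage(minPct: number): Promise<void> {
  const lcov = await loadLcov("coverage/lcov.info");
  const changed = [...danger.git.modified_files, ...danger.git.created_files].filter(
    (f) => /\.tsx?$/.test(f) && !f.includes(".test.")
  );

  for (const file of changed) {
    const record = lcov.find((r) => r.file.endsWith(file));
    if (!record) continue; // not instrumented, nothing to gate on
    const newLines = await addedLines(file);
    const touched = record.lines.details.filter((d) => newLines.has(d.line));
    if (touched.length === 0) continue;
    const pct = (100 * touched.filter((d) => d.hit > 0).length) / touched.length;
    if (pct < minPct) {
      fail(`New-lines coverage for ${file} is ${pct.toFixed(1)}% (minimum ${minPct}%).`);
      markdown(`| File | New-lines coverage |\n| --- | --- |\n| ${file} | ${pct.toFixed(1)}% |`);
    }
  }
}

schedule(checkCoverage(Number(process.env.MIN_FILE_COVERAGE ?? "80")));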

2) AI unit test suggestions (copy-paste ready)
The prompt instructs the model: “You’re a senior TS/React dev. Output one Markdown code block: a runnable Jest TS test file. Keep it minimal but cover branches and edge cases implied by the diff. Use a file name like src/tests/.auto.test.ts.”
You can always generate suggestions, or only when coverage dips below the threshold—toggle with Secrets.
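A trimmed-down sketch of that call is below; the exact prompt and wiring in the real dangerfile differ, and the diff collection here is simplified to one concatenated string.
Code:
// AI test-suggestion sketch: send the PR diff to OpenAI and post the returned Jest file.
import { danger, markdown, schedule } from "danger";
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// One concatenated diff for all changed files (simplified; the real pipeline trims per file).
async function collectDiff(): Promise<string> {
  const files = [...danger.git.modified_files, ...danger.git.created_files];
  const parts = await Promise.all(files.map((f) => danger.git.diffForFile(f)));
  return parts.map((p, i) => (p ? `--- ${files[i]}\n${p.diff}` : "")).join("\n");
}

async function suggestTests(): Promise<void> {
  const diff = await collectDiff();
  const maxChars = Number(process.env.AI_REVIEW_MAX_CHARS ?? "100000");

  const res = await openai.chat.completions.create({
    model: process.env.OPENAI_MODEL ?? "gpt-4o-mini",
    temperature: 0.2,
    messages: [
      {
        role: "system",
        content:
          "You are a senior TS/React developer. Output one Markdown code block containing a " +
          "runnable Jest TypeScript test file. Keep it minimal but cover the branches and edge " +
          "cases implied by the diff.",
      },
      { role: "user", content: diff.slice(0, maxChars) },
    ],
  });

  const suggestion = res.choices[0]?.message?.content ?? "";
  if (suggestion) markdown(`### AI test suggestions\n\n${suggestion}`);
}

schedule(suggestTests());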

3) AI code review (snippet-first + optional inline)
- We enumerate added lines per file and ask the AI to return JSON findings:
Code:
{
  "file": "src/pages/index.tsx",
  "index": 0,
  "severity": "warn",
  "title": "...",
  "body": "..."
}
- For each finding, we create a concise diff snippet centered around that added line and post it as a file-scoped comment, with explanation and suggested fix.
- (This avoids wrong line numbers; the snippet shows the exact old → new change.)
- If mapping is available, we also add an inline note (secondary signal).
- If you set AI_REVIEW_BLOCK_ON_FINDINGS="1", any finding with "fail" severity will fail the PR.
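For the posting side, a minimal sketch could look like the following, assuming the finding's line number has already been resolved from the diff mapping (the real dangerfile carries an index into the added lines instead); Danger's markdown/warn/fail helpers accept an optional file and line, which is what the secondary inline note uses.
Code:
// Review-posting sketch: each finding becomes a file-scoped comment with a small diff snippet,
// plus an optional inline note and an optional blocking failure.
import { fail, markdown, warn } from "danger";

type Finding = {
  file: string;
  line?: number;                    // new-file line number, resolved from the diff mapping
  severity: "info" | "warn" | "fail";
  title: string;
  body: string;
};

function postFindings(findings: Finding[], snippetFor: (f: Finding) => string): void {
  const inline = (process.env.AI_REVIEW_INLINE ?? "1") === "1";
  const blockOnFindings = process.env.AI_REVIEW_BLOCK_ON_FINDINGS === "1";

  for (const f of findings) {
    const snippet = "```diff\n" + snippetFor(f) + "\n```";
    markdown(`**${f.title}** (${f.file})\n\n${f.body}\n\n${snippet}`);   // snippet-first comment
    if (inline && f.line !== undefined) warn(f.title, f.file, f.line);   // secondary inline note
    if (blockOnFindings && f.severity === "fail") fail(`${f.file}: ${f.title}`);
  }
}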

Continue reading...