Compare commits

..

58 Commits

Author SHA1 Message Date
Moritz
10d3031093 Update capa/ida/plugin/form.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-05 19:03:29 +01:00
mr-tz
e9b02b372a update ida-settings api 2025-11-05 16:52:43 +00:00
Matthew Haigh
503c34b8f9 added mailinglist cta (#2744)
* added mailinglist cta

* Update README.md

Added mailto: link for better user experience

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: Matt Haigh <matthaigh@google.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-03 09:11:14 -07:00
dependabot[bot]
888295b37a build(deps): bump types-protobuf from 6.30.2.20250516 to 6.32.1.20250918 (#2733)
Bumps [types-protobuf](https://github.com/typeshed-internal/stub_uploader) from 6.30.2.20250516 to 6.32.1.20250918.
- [Commits](https://github.com/typeshed-internal/stub_uploader/commits)

---
updated-dependencies:
- dependency-name: types-protobuf
  dependency-version: 6.32.1.20250918
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-03 09:07:26 -07:00
dependabot[bot]
5f9c908315 build(deps): bump pip from 25.2 to 25.3 (#2741)
Bumps [pip](https://github.com/pypa/pip) from 25.2 to 25.3.
- [Changelog](https://github.com/pypa/pip/blob/main/NEWS.rst)
- [Commits](https://github.com/pypa/pip/compare/25.2...25.3)

---
updated-dependencies:
- dependency-name: pip
  dependency-version: '25.3'
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Moritz <mr-tz@users.noreply.github.com>
2025-11-03 09:07:02 -07:00
Willi Ballenthin
cb2e2323f9 explorer: add support for IDA 9.2 (#2723)
* ida: add Qt compatibility layer for PyQt5 and PySide6

Introduce a new module `qt_compat.py` providing a unified import
interface and API compatibility for Qt modules. It handles differences between
PyQt5 (used in IDA <9.2) and PySide6 (used in IDA >=9.2). Update all
plugin modules to import Qt components via this compatibility layer
instead of directly importing from PyQt5. This enhances plugin
compatibility across different IDA versions.

thanks @mike-hunhoff!

changelog

* qt_compat: use __all__ rather than noqa

---------

Co-authored-by: Moritz <mr-tz@users.noreply.github.com>
2025-11-03 13:29:06 +01:00
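For readers unfamiliar with this kind of shim: a PyQt5/PySide6 compatibility layer usually reduces to a guarded import that re-exports the Qt modules plus a common `Signal` alias. The sketch below is illustrative only and assumes that pattern; it is not the exact contents of capa's `qt_compat.py`.

```
# minimal sketch of a qt_compat-style shim (assumed layout, not capa's exact code)
try:
    # IDA >= 9.2 bundles PySide6
    from PySide6 import QtCore, QtGui, QtWidgets
    from PySide6.QtCore import Signal
except ImportError:
    # IDA < 9.2 bundles PyQt5, whose signal factory is named pyqtSignal
    from PyQt5 import QtCore, QtGui, QtWidgets
    from PyQt5.QtCore import pyqtSignal as Signal

# plugin modules then import Qt names only from here, e.g.:
#   from capa.ida.plugin.qt_compat import QtCore, Signal
__all__ = ["QtCore", "QtGui", "QtWidgets", "Signal"]
```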
Willi Ballenthin
5ea63770ba Merge pull request #2724 from HexRays-plugin-contributions/ida-plugin-json
add `ida-plugin.json`
2025-10-29 17:55:49 +01:00
Capa Bot
6795813fbe Sync capa rules submodule 2025-10-28 15:21:05 +00:00
Capa Bot
ca708ca52e Sync capa-testfiles submodule 2025-10-28 15:15:42 +00:00
Capa Bot
68cf74d60c Sync capa rules submodule 2025-10-28 13:12:29 +00:00
Moritz
5a0c47419f Merge pull request #2735 from mandiant/dependabot/npm_and_yarn/web/explorer/vite-6.4.1
build(deps-dev): bump vite from 6.4.0 to 6.4.1 in /web/explorer
2025-10-24 12:32:50 +02:00
Moritz
4dbdd9dcfa Merge branch 'master' into dependabot/npm_and_yarn/web/explorer/vite-6.4.1 2025-10-24 12:30:15 +02:00
Moritz
82cbfd33db Merge pull request #2732 from xusheng6/test_fix_binja_crash
binja: fix crash in binja feature extraction when MLIL is unavailable…
2025-10-24 12:29:51 +02:00
dependabot[bot]
5906bb3ecf build(deps-dev): bump vite from 6.4.0 to 6.4.1 in /web/explorer
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 6.4.0 to 6.4.1.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/main/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/create-vite@6.4.1/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-version: 6.4.1
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-21 04:19:00 +00:00
Moritz
08319f598f Merge pull request #2730 from mandiant/dependabot/npm_and_yarn/web/explorer/vite-6.4.0
build(deps-dev): bump vite from 6.3.4 to 6.4.0 in /web/explorer
2025-10-20 17:28:58 +02:00
Capa Bot
e6df6ad0cd Sync capa rules submodule 2025-10-20 15:27:46 +00:00
Capa Bot
add09df061 Sync capa-testfiles submodule 2025-10-20 15:18:32 +00:00
Mike Hunhoff
acb34e88d6 Update CHANGELOG.md
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-10-20 09:05:30 -06:00
Xusheng
0099e75704 binja: fix crash in binja feature extraction when MLIL is unavailable. Fix https://github.com/mandiant/capa/issues/2714 2025-10-20 18:46:53 +08:00
dependabot[bot]
da0803b671 build(deps-dev): bump vite from 6.3.4 to 6.4.0 in /web/explorer
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 6.3.4 to 6.4.0.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/main/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/create-vite@6.4.0/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-version: 6.4.0
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-16 10:58:41 +00:00
Moritz
789747282d Merge pull request #2728 from mandiant/dependabot/pip/rich-14.2.0
build(deps): bump rich from 14.0.0 to 14.2.0
2025-10-16 12:57:18 +02:00
Capa Bot
3bc2d9915c Sync capa-testfiles submodule 2025-10-13 18:52:26 +00:00
dependabot[bot]
5974440ab7 build(deps): bump rich from 14.0.0 to 14.2.0
Bumps [rich](https://github.com/Textualize/rich) from 14.0.0 to 14.2.0.
- [Release notes](https://github.com/Textualize/rich/releases)
- [Changelog](https://github.com/Textualize/rich/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Textualize/rich/compare/v14.0.0...v14.2.0)

---
updated-dependencies:
- dependency-name: rich
  dependency-version: 14.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-13 14:25:06 +00:00
dependabot[bot]
b9d517a70b build(deps): bump pip from 25.1.1 to 25.2 (#2717)
Bumps [pip](https://github.com/pypa/pip) from 25.1.1 to 25.2.
- [Changelog](https://github.com/pypa/pip/blob/main/NEWS.rst)
- [Commits](https://github.com/pypa/pip/compare/25.1.1...25.2)

---
updated-dependencies:
- dependency-name: pip
  dependency-version: '25.2'
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Mike Hunhoff <mike.hunhoff@gmail.com>
2025-10-06 08:32:13 -06:00
dependabot[bot]
e5b8788620 build(deps): bump humanize from 4.12.0 to 4.13.0 (#2716)
Bumps [humanize](https://github.com/python-humanize/humanize) from 4.12.0 to 4.13.0.
- [Release notes](https://github.com/python-humanize/humanize/releases)
- [Commits](https://github.com/python-humanize/humanize/compare/4.12.0...4.13.0)

---
updated-dependencies:
- dependency-name: humanize
  dependency-version: 4.13.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Mike Hunhoff <mike.hunhoff@gmail.com>
2025-10-06 08:31:46 -06:00
axelmierczuk
ec411f1552 Update pyproject.toml
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-10-01 19:00:26 +02:00
axelmierczuk
6871adc9dc Pin ida-settings version to 2.1.0 2025-10-01 19:00:26 +02:00
Capa Bot
07880c1418 Sync capa rules submodule 2025-09-23 20:18:16 +00:00
Capa Bot
5a6c8ca7c1 Sync capa rules submodule 2025-09-09 19:22:11 +00:00
Capa Bot
3bd8371d0c Sync capa rules submodule 2025-09-03 16:27:26 +00:00
dependabot[bot]
d0c87ef32c build(deps): bump markdown-it-py from 3.0.0 to 4.0.0 (#2711)
Bumps [markdown-it-py](https://github.com/executablebooks/markdown-it-py) from 3.0.0 to 4.0.0.
- [Release notes](https://github.com/executablebooks/markdown-it-py/releases)
- [Changelog](https://github.com/executablebooks/markdown-it-py/blob/master/CHANGELOG.md)
- [Commits](https://github.com/executablebooks/markdown-it-py/compare/v3.0.0...v4.0.0)

---
updated-dependencies:
- dependency-name: markdown-it-py
  dependency-version: 4.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-03 10:11:25 -06:00
dependabot[bot]
bd2731f87f build(deps): bump pytest-sugar from 1.0.0 to 1.1.1 (#2710)
Bumps [pytest-sugar](https://github.com/Teemu/pytest-sugar) from 1.0.0 to 1.1.1.
- [Release notes](https://github.com/Teemu/pytest-sugar/releases)
- [Changelog](https://github.com/Teemu/pytest-sugar/blob/main/CHANGES.rst)
- [Commits](https://github.com/Teemu/pytest-sugar/compare/v1.0.0...v1.1.1)

---
updated-dependencies:
- dependency-name: pytest-sugar
  dependency-version: 1.1.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-03 10:10:32 -06:00
Capa Bot
4a167d7188 Sync capa rules submodule 2025-09-03 16:08:58 +00:00
Capa Bot
c01bc346fc Sync capa rules submodule 2025-09-03 16:05:36 +00:00
Capa Bot
826330f511 Sync capa-testfiles submodule 2025-09-03 15:58:45 +00:00
Capa Bot
40e5095577 Sync capa-testfiles submodule 2025-09-03 15:55:29 +00:00
Capa Bot
c7eede3c53 Sync capa-testfiles submodule 2025-09-03 15:51:51 +00:00
Capa Bot
1a5f50195a Sync capa rules submodule 2025-08-25 19:08:17 +00:00
Capa Bot
aafca2e00a Sync capa-testfiles submodule 2025-08-25 18:59:27 +00:00
Capa Bot
3a24fabeb6 Sync capa rules submodule 2025-08-22 14:58:24 +00:00
Capa Bot
2f81bb79f9 Sync capa rules submodule 2025-08-21 14:57:07 +00:00
Capa Bot
fc83b7b0a1 Sync capa rules submodule 2025-08-21 14:56:48 +00:00
dependabot[bot]
d430aea04e build(deps): bump dnfile from 0.15.0 to 0.16.4 (#2700)
---
updated-dependencies:
- dependency-name: dnfile
  dependency-version: 0.16.4
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Mike Hunhoff <mike.hunhoff@gmail.com>
2025-08-20 15:11:17 -06:00
dependabot[bot]
1eb42599cf build(deps): bump mypy from 1.16.0 to 1.17.1 (#2704)
Bumps [mypy](https://github.com/python/mypy) from 1.16.0 to 1.17.1.
- [Changelog](https://github.com/python/mypy/blob/master/CHANGELOG.md)
- [Commits](https://github.com/python/mypy/compare/v1.16.0...v1.17.1)

---
updated-dependencies:
- dependency-name: mypy
  dependency-version: 1.17.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Mike Hunhoff <mike.hunhoff@gmail.com>
2025-08-20 15:10:52 -06:00
dependabot[bot]
618ae2111b build(deps): bump form-data from 4.0.0 to 4.0.4 in /web/explorer (#2702)
Bumps [form-data](https://github.com/form-data/form-data) from 4.0.0 to 4.0.4.
- [Release notes](https://github.com/form-data/form-data/releases)
- [Changelog](https://github.com/form-data/form-data/blob/master/CHANGELOG.md)
- [Commits](https://github.com/form-data/form-data/compare/v4.0.0...v4.0.4)

---
updated-dependencies:
- dependency-name: form-data
  dependency-version: 4.0.4
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Mike Hunhoff <mike.hunhoff@gmail.com>
2025-08-20 12:11:46 -06:00
Mike Hunhoff
42b6d8106a binja: update core version info check (#2709) 2025-08-20 11:56:56 -06:00
Capa Bot
78a020e1ac Sync capa rules submodule 2025-08-20 16:04:49 +00:00
Capa Bot
a80f85aab4 Sync capa-testfiles submodule 2025-08-20 15:57:15 +00:00
Capa Bot
f94f554d15 Sync capa-testfiles submodule 2025-08-20 15:32:08 +00:00
Capa Bot
d456d52e81 Sync capa rules submodule 2025-08-14 20:59:31 +00:00
Capa Bot
2a18b08a80 Sync capa rules submodule 2025-08-14 15:11:56 +00:00
Capa Bot
dd2e350a1a Sync capa-testfiles submodule 2025-08-14 15:08:18 +00:00
Capa Bot
164a7bdfb5 Sync capa rules submodule 2025-08-13 14:40:23 +00:00
Capa Bot
d7c896bbc6 Sync capa rules submodule 2025-08-12 16:21:29 +00:00
Capa Bot
8185ac4dde Sync capa rules submodule 2025-08-12 15:43:50 +00:00
Capa Bot
92a6ddff99 Sync capa rules submodule 2025-08-12 15:42:57 +00:00
Capa Bot
af87fae036 Sync capa-testfiles submodule 2025-08-12 15:38:12 +00:00
Capa Bot
c774db26f0 Sync capa-testfiles submodule 2025-08-12 15:37:46 +00:00
25 changed files with 444 additions and 1087 deletions

.bumpversion.toml
View File

@@ -0,0 +1,22 @@
[tool.bumpversion]
current_version = "9.2.1"
[[tool.bumpversion.files]]
filename = "capa/version.py"
search = '__version__ = "{current_version}"'
replace = '__version__ = "{new_version}"'
[[tool.bumpversion.files]]
filename = "capa/ida/plugin/ida-plugin.json"
search = '"version": "{current_version}"'
replace = '"version": "{new_version}"'
[[tool.bumpversion.files]]
filename = "capa/ida/plugin/ida-plugin.json"
search = '"flare-capa=={current_version}"'
replace = '"flare-capa=={new_version}"'
[[tool.bumpversion.files]]
filename = "CHANGELOG.md"
search = "v{current_version}...master"
replace = "{current_version}...{new_version}"

View File

@@ -1,304 +0,0 @@
name: '💬 Gemini CLI'
on:
pull_request_review_comment:
types:
- 'created'
pull_request_review:
types:
- 'submitted'
issue_comment:
types:
- 'created'
concurrency:
group: '${{ github.workflow }}-${{ github.event.issue.number }}'
cancel-in-progress: |-
${{ github.event.sender.type == 'User' && ( github.event.issue.author_association == 'OWNER' || github.event.issue.author_association == 'MEMBER' || github.event.issue.author_association == 'COLLABORATOR') }}
defaults:
run:
shell: 'bash'
permissions:
contents: 'write'
id-token: 'write'
pull-requests: 'write'
issues: 'write'
jobs:
gemini-cli:
# This condition is complex to ensure we only run when explicitly invoked.
if: |-
github.event_name == 'workflow_dispatch' ||
(
github.event_name == 'issues' && github.event.action == 'opened' &&
contains(github.event.issue.body, '@gemini-cli') &&
!contains(github.event.issue.body, '@gemini-cli /review') &&
!contains(github.event.issue.body, '@gemini-cli /triage') &&
contains(fromJSON('["OWNER", "MEMBER", "COLLABORATOR"]'), github.event.issue.author_association)
) ||
(
(
github.event_name == 'issue_comment' ||
github.event_name == 'pull_request_review_comment'
) &&
contains(github.event.comment.body, '@gemini-cli') &&
!contains(github.event.comment.body, '@gemini-cli /review') &&
!contains(github.event.comment.body, '@gemini-cli /triage') &&
contains(fromJSON('["OWNER", "MEMBER", "COLLABORATOR"]'), github.event.comment.author_association)
) ||
(
github.event_name == 'pull_request_review' &&
contains(github.event.review.body, '@gemini-cli') &&
!contains(github.event.review.body, '@gemini-cli /review') &&
!contains(github.event.review.body, '@gemini-cli /triage') &&
contains(fromJSON('["OWNER", "MEMBER", "COLLABORATOR"]'), github.event.review.author_association)
)
timeout-minutes: 10
runs-on: 'ubuntu-latest'
steps:
- name: 'Generate GitHub App Token'
id: 'generate_token'
if: |-
${{ vars.APP_ID }}
uses: 'actions/create-github-app-token@df432ceedc7162793a195dd1713ff69aefc7379e' # ratchet:actions/create-github-app-token@v2
with:
app-id: '${{ vars.APP_ID }}'
private-key: '${{ secrets.APP_PRIVATE_KEY }}'
- name: 'Get context from event'
id: 'get_context'
env:
EVENT_NAME: '${{ github.event_name }}'
EVENT_PAYLOAD: '${{ toJSON(github.event) }}'
run: |-
set -euo pipefail
USER_REQUEST=""
ISSUE_NUMBER=""
IS_PR="false"
if [[ "${EVENT_NAME}" == "issues" ]]; then
USER_REQUEST=$(echo "${EVENT_PAYLOAD}" | jq -r .issue.body)
ISSUE_NUMBER=$(echo "${EVENT_PAYLOAD}" | jq -r .issue.number)
elif [[ "${EVENT_NAME}" == "issue_comment" ]]; then
USER_REQUEST=$(echo "${EVENT_PAYLOAD}" | jq -r .comment.body)
ISSUE_NUMBER=$(echo "${EVENT_PAYLOAD}" | jq -r .issue.number)
if [[ $(echo "${EVENT_PAYLOAD}" | jq -r .issue.pull_request) != "null" ]]; then
IS_PR="true"
fi
elif [[ "${EVENT_NAME}" == "pull_request_review" ]]; then
USER_REQUEST=$(echo "${EVENT_PAYLOAD}" | jq -r .review.body)
ISSUE_NUMBER=$(echo "${EVENT_PAYLOAD}" | jq -r .pull_request.number)
IS_PR="true"
elif [[ "${EVENT_NAME}" == "pull_request_review_comment" ]]; then
USER_REQUEST=$(echo "${EVENT_PAYLOAD}" | jq -r .comment.body)
ISSUE_NUMBER=$(echo "${EVENT_PAYLOAD}" | jq -r .pull_request.number)
IS_PR="true"
fi
# Clean up user request
USER_REQUEST=$(echo "${USER_REQUEST}" | sed 's/.*@gemini-cli//' | sed 's/^[[:space:]]*//;s/[[:space:]]*$//')
{
echo "user_request=${USER_REQUEST}"
echo "issue_number=${ISSUE_NUMBER}"
echo "is_pr=${IS_PR}"
} >> "${GITHUB_OUTPUT}"
- name: 'Set up git user for commits'
run: |-
git config --global user.name 'gemini-cli[bot]'
git config --global user.email 'gemini-cli[bot]@users.noreply.github.com'
- name: 'Checkout PR branch'
if: |-
${{ steps.get_context.outputs.is_pr == 'true' }}
uses: 'actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683' # ratchet:actions/checkout@v4
with:
token: '${{ steps.generate_token.outputs.token || secrets.GITHUB_TOKEN }}'
repository: '${{ github.repository }}'
ref: 'refs/pull/${{ steps.get_context.outputs.issue_number }}/head'
fetch-depth: 0
- name: 'Checkout main branch'
if: |-
${{ steps.get_context.outputs.is_pr == 'false' }}
uses: 'actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683' # ratchet:actions/checkout@v4
with:
token: '${{ steps.generate_token.outputs.token || secrets.GITHUB_TOKEN }}'
repository: '${{ github.repository }}'
fetch-depth: 0
- name: 'Acknowledge request'
env:
GITHUB_ACTOR: '${{ github.actor }}'
GITHUB_TOKEN: '${{ steps.generate_token.outputs.token || secrets.GITHUB_TOKEN }}'
ISSUE_NUMBER: '${{ steps.get_context.outputs.issue_number }}'
REPOSITORY: '${{ github.repository }}'
REQUEST_TYPE: '${{ steps.get_context.outputs.request_type }}'
run: |-
set -euo pipefail
MESSAGE="@${GITHUB_ACTOR} I've received your request and I'm working on it now! 🤖"
if [[ -n "${MESSAGE}" ]]; then
gh issue comment "${ISSUE_NUMBER}" \
--body "${MESSAGE}" \
--repo "${REPOSITORY}"
fi
- name: 'Get description'
id: 'get_description'
env:
GITHUB_TOKEN: '${{ steps.generate_token.outputs.token || secrets.GITHUB_TOKEN }}'
IS_PR: '${{ steps.get_context.outputs.is_pr }}'
ISSUE_NUMBER: '${{ steps.get_context.outputs.issue_number }}'
run: |-
set -euo pipefail
if [[ "${IS_PR}" == "true" ]]; then
DESCRIPTION=$(gh pr view "${ISSUE_NUMBER}" --json body --template '{{.body}}')
else
DESCRIPTION=$(gh issue view "${ISSUE_NUMBER}" --json body --template '{{.body}}')
fi
{
echo "description<<EOF"
echo "${DESCRIPTION}"
echo "EOF"
} >> "${GITHUB_OUTPUT}"
- name: 'Get comments'
id: 'get_comments'
env:
GITHUB_TOKEN: '${{ steps.generate_token.outputs.token || secrets.GITHUB_TOKEN }}'
IS_PR: '${{ steps.get_context.outputs.is_pr }}'
ISSUE_NUMBER: '${{ steps.get_context.outputs.issue_number }}'
run: |-
set -euo pipefail
if [[ "${IS_PR}" == "true" ]]; then
COMMENTS=$(gh pr view "${ISSUE_NUMBER}" --json comments --template '{{range .comments}}{{.author.login}}: {{.body}}{{"\n"}}{{end}}')
else
COMMENTS=$(gh issue view "${ISSUE_NUMBER}" --json comments --template '{{range .comments}}{{.author.login}}: {{.body}}{{"\n"}}{{end}}')
fi
{
echo "comments<<EOF"
echo "${COMMENTS}"
echo "EOF"
} >> "${GITHUB_OUTPUT}"
- name: 'Run Gemini'
id: 'run_gemini'
uses: 'google-github-actions/run-gemini-cli@v0'
env:
GITHUB_TOKEN: '${{ steps.generate_token.outputs.token || secrets.GITHUB_TOKEN }}'
REPOSITORY: '${{ github.repository }}'
USER_REQUEST: '${{ steps.get_context.outputs.user_request }}'
ISSUE_NUMBER: '${{ steps.get_context.outputs.issue_number }}'
IS_PR: '${{ steps.get_context.outputs.is_pr }}'
with:
gemini_api_key: '${{ secrets.GEMINI_API_KEY }}'
gcp_workload_identity_provider: '${{ vars.GCP_WIF_PROVIDER }}'
gcp_project_id: '${{ vars.GOOGLE_CLOUD_PROJECT }}'
gcp_location: '${{ vars.GOOGLE_CLOUD_LOCATION }}'
gcp_service_account: '${{ vars.SERVICE_ACCOUNT_EMAIL }}'
use_vertex_ai: '${{ vars.GOOGLE_GENAI_USE_VERTEXAI }}'
use_gemini_code_assist: '${{ vars.GOOGLE_GENAI_USE_GCA }}'
settings: |-
{
"maxSessionTurns": 50,
"telemetry": {
"enabled": false,
"target": "gcp"
}
}
prompt: |-
## Role
You are a helpful AI assistant invoked via a CLI interface in a GitHub workflow. You have access to tools to interact with the repository and respond to the user.
## Context
- **Repository**: `${{ github.repository }}`
- **Triggering Event**: `${{ github.event_name }}`
- **Issue/PR Number**: `${{ steps.get_context.outputs.issue_number }}`
- **Is this a PR?**: `${{ steps.get_context.outputs.is_pr }}`
- **Issue/PR Description**:
`${{ steps.get_description.outputs.description }}`
- **Comments**:
`${{ steps.get_comments.outputs.comments }}`
## User Request
The user has sent the following request:
`${{ steps.get_context.outputs.user_request }}`
## How to Respond to Issues, PR Comments, and Questions
This workflow supports three main scenarios:
1. **Creating a Fix for an Issue**
- Carefully read the user request and the related issue or PR description.
- Use available tools to gather all relevant context (e.g., `gh issue view`, `gh pr view`, `gh pr diff`, `cat`, `head`, `tail`).
- Identify the root cause of the problem before proceeding.
- **Show and maintain a plan as a checklist**:
- At the very beginning, outline the steps needed to resolve the issue or address the request and post them as a checklist comment on the issue or PR (use GitHub markdown checkboxes: `- [ ] Task`).
- Example:
```
### Plan
- [ ] Investigate the root cause
- [ ] Implement the fix in `file.py`
- [ ] Add/modify tests
- [ ] Update documentation
- [ ] Verify the fix and close the issue
```
- Use: `gh pr comment "${ISSUE_NUMBER}" --body "<plan>"` or `gh issue comment "${ISSUE_NUMBER}" --body "<plan>"` to post the initial plan.
- As you make progress, keep the checklist visible and up to date by editing the same comment (check off completed tasks with `- [x]`).
- To update the checklist:
1. Find the comment ID for the checklist (use `gh pr comment list "${ISSUE_NUMBER}"` or `gh issue comment list "${ISSUE_NUMBER}"`).
2. Edit the comment with the updated checklist:
- For PRs: `gh pr comment --edit <comment-id> --body "<updated plan>"`
- For Issues: `gh issue comment --edit <comment-id> --body "<updated plan>"`
3. The checklist should only be maintained as a comment on the issue or PR. Do not track or update the checklist in code files.
- If the fix requires code changes, determine which files and lines are affected. If clarification is needed, note any questions for the user.
- Make the necessary code or documentation changes using the available tools (e.g., `write_file`). Ensure all changes follow project conventions and best practices. Reference all shell variables as `"${VAR}"` (with quotes and braces) to prevent errors.
- Run any relevant tests or checks to verify the fix works as intended. If possible, provide evidence (test output, screenshots, etc.) that the issue is resolved.
- **Branching and Committing**:
- **NEVER commit directly to the `main` branch.**
- If you are working on a **pull request** (`IS_PR` is `true`), the correct branch is already checked out. Simply commit and push to it.
- `git add .`
- `git commit -m "feat: <describe the change>"`
- `git push`
- If you are working on an **issue** (`IS_PR` is `false`), create a new branch for your changes. A good branch name would be `issue/${ISSUE_NUMBER}/<short-description>`.
- `git checkout -b issue/${ISSUE_NUMBER}/my-fix`
- `git add .`
- `git commit -m "feat: <describe the fix>"`
- `git push origin issue/${ISSUE_NUMBER}/my-fix`
- After pushing, you can create a pull request: `gh pr create --title "Fixes #${ISSUE_NUMBER}: <short title>" --body "This PR addresses issue #${ISSUE_NUMBER}."`
- Summarize what was changed and why in a markdown file: `write_file("response.md", "<your response here>")`
- Post the response as a comment:
- For PRs: `gh pr comment "${ISSUE_NUMBER}" --body-file response.md`
- For Issues: `gh issue comment "${ISSUE_NUMBER}" --body-file response.md`
2. **Addressing Comments on a Pull Request**
- Read the specific comment and the context of the PR.
- Use tools like `gh pr view`, `gh pr diff`, and `cat` to understand the code and discussion.
- If the comment requests a change or clarification, follow the same process as for fixing an issue: create a checklist plan, implement, test, and commit any required changes, updating the checklist as you go.
- **Committing Changes**: The correct PR branch is already checked out. Simply add, commit, and push your changes.
- `git add .`
- `git commit -m "fix: address review comments"`
- `git push`
- If the comment is a question, answer it directly and clearly, referencing code or documentation as needed.
- Document your response in `response.md` and post it as a PR comment: `gh pr comment "${ISSUE_NUMBER}" --body-file response.md`
3. **Answering Any Question on an Issue**
- Read the question and the full issue context using `gh issue view` and related tools.
- Research or analyze the codebase as needed to provide an accurate answer.
- If the question requires code or documentation changes, follow the fix process above, including creating and updating a checklist plan and **creating a new branch for your changes as described in section 1.**
- Write a clear, concise answer in `response.md` and post it as an issue comment: `gh issue comment "${ISSUE_NUMBER}" --body-file response.md`
## Guidelines
- **Be concise and actionable.** Focus on solving the user's problem efficiently.
- **Always commit and push your changes if you modify code or documentation.**
- **If you are unsure about the fix or answer, explain your reasoning and ask clarifying questions.**
- **Follow project conventions and best practices.**

View File

@@ -1,130 +0,0 @@
name: '🏷️ Gemini Automated Issue Triage'
on:
issues:
types:
- 'opened'
- 'reopened'
issue_comment:
types:
- 'created'
workflow_dispatch:
inputs:
issue_number:
description: 'issue number to triage'
required: true
type: 'number'
concurrency:
group: '${{ github.workflow }}-${{ github.event.issue.number }}'
cancel-in-progress: true
defaults:
run:
shell: 'bash'
permissions:
contents: 'read'
id-token: 'write'
issues: 'write'
statuses: 'write'
jobs:
triage-issue:
if: |-
github.event_name == 'issues' ||
github.event_name == 'workflow_dispatch' ||
(
github.event_name == 'issue_comment' &&
contains(github.event.comment.body, '@gemini-cli /triage') &&
contains(fromJSON('["OWNER", "MEMBER", "COLLABORATOR"]'), github.event.comment.author_association)
)
timeout-minutes: 5
runs-on: 'ubuntu-latest'
steps:
- name: 'Checkout repository'
uses: 'actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683' # ratchet:actions/checkout@v4
- name: 'Generate GitHub App Token'
id: 'generate_token'
if: |-
${{ vars.APP_ID }}
uses: 'actions/create-github-app-token@df432ceedc7162793a195dd1713ff69aefc7379e' # ratchet:actions/create-github-app-token@v2
with:
app-id: '${{ vars.APP_ID }}'
private-key: '${{ secrets.APP_PRIVATE_KEY }}'
- name: 'Run Gemini Issue Triage'
uses: 'google-github-actions/run-gemini-cli@v0'
id: 'gemini_issue_triage'
env:
GITHUB_TOKEN: '${{ steps.generate_token.outputs.token || secrets.GITHUB_TOKEN }}'
ISSUE_TITLE: '${{ github.event.issue.title }}'
ISSUE_BODY: '${{ github.event.issue.body }}'
ISSUE_NUMBER: '${{ github.event.issue.number }}'
REPOSITORY: '${{ github.repository }}'
with:
gemini_cli_version: '${{ vars.GEMINI_CLI_VERSION }}'
gcp_workload_identity_provider: '${{ vars.GCP_WIF_PROVIDER }}'
gcp_project_id: '${{ vars.GOOGLE_CLOUD_PROJECT }}'
gcp_location: '${{ vars.GOOGLE_CLOUD_LOCATION }}'
gcp_service_account: '${{ vars.SERVICE_ACCOUNT_EMAIL }}'
gemini_api_key: '${{ secrets.GEMINI_API_KEY }}'
use_vertex_ai: '${{ vars.GOOGLE_GENAI_USE_VERTEXAI }}'
use_gemini_code_assist: '${{ vars.GOOGLE_GENAI_USE_GCA }}'
settings: |-
{
"maxSessionTurns": 25,
"coreTools": [
"run_shell_command(echo)",
"run_shell_command(gh label list)",
"run_shell_command(gh issue edit)"
],
"telemetry": {
"enabled": false,
"target": "gcp"
}
}
prompt: |-
## Role
You are an issue triage assistant. Analyze the current GitHub issue
and apply the most appropriate existing labels. Use the available
tools to gather information; do not ask for information to be
provided.
## Steps
1. Run: `gh label list` to get all available labels.
2. Review the issue title and body provided in the environment
variables: "${ISSUE_TITLE}" and "${ISSUE_BODY}".
3. Select the most relevant labels from the existing labels. If
available, set labels that follow the `kind/*`, `area/*`, and
`priority/*` patterns.
4. Apply the selected labels to this issue using:
`gh issue edit "${ISSUE_NUMBER}" --add-label "label1,label2"`
5. If the "status/needs-triage" label is present, remove it using:
`gh issue edit "${ISSUE_NUMBER}" --remove-label "status/needs-triage"`
## Guidelines
- Only use labels that already exist in the repository
- Do not add comments or modify the issue content
- Triage only the current issue
- Assign all applicable labels based on the issue content
- Reference all shell variables as "${VAR}" (with quotes and braces)
- name: 'Post Issue Triage Failure Comment'
if: |-
${{ failure() && steps.gemini_issue_triage.outcome == 'failure' }}
uses: 'actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea'
with:
github-token: '${{ steps.generate_token.outputs.token || secrets.GITHUB_TOKEN }}'
script: |-
github.rest.issues.createComment({
owner: '${{ github.repository }}'.split('/')[0],
repo: '${{ github.repository }}'.split('/')[1],
issue_number: '${{ github.event.issue.number }}',
body: 'There is a problem with the Gemini CLI issue triaging. Please check the [action logs](${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}) for details.'
})

View File

@@ -1,123 +0,0 @@
name: '📋 Gemini Scheduled Issue Triage'
on:
schedule:
- cron: '0 * * * *' # Runs every hour
workflow_dispatch:
concurrency:
group: '${{ github.workflow }}'
cancel-in-progress: true
defaults:
run:
shell: 'bash'
permissions:
contents: 'read'
id-token: 'write'
issues: 'write'
statuses: 'write'
jobs:
triage-issues:
timeout-minutes: 5
runs-on: 'ubuntu-latest'
steps:
- name: 'Checkout repository'
uses: 'actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683' # ratchet:actions/checkout@v4
- name: 'Generate GitHub App Token'
id: 'generate_token'
if: |-
${{ vars.APP_ID }}
uses: 'actions/create-github-app-token@df432ceedc7162793a195dd1713ff69aefc7379e' # ratchet:actions/create-github-app-token@v2
with:
app-id: '${{ vars.APP_ID }}'
private-key: '${{ secrets.APP_PRIVATE_KEY }}'
- name: 'Find untriaged issues'
id: 'find_issues'
env:
GITHUB_TOKEN: '${{ steps.generate_token.outputs.token || secrets.GITHUB_TOKEN }}'
GITHUB_REPOSITORY: '${{ github.repository }}'
GITHUB_OUTPUT: '${{ github.output }}'
run: |-
set -euo pipefail
echo '🔍 Finding issues without labels...'
NO_LABEL_ISSUES="$(gh issue list --repo "${GITHUB_REPOSITORY}" \
--search 'is:open is:issue no:label' --json number,title,body)"
echo '🏷️ Finding issues that need triage...'
NEED_TRIAGE_ISSUES="$(gh issue list --repo "${GITHUB_REPOSITORY}" \
--search 'is:open is:issue label:"status/needs-triage"' --json number,title,body)"
echo '🔄 Merging and deduplicating issues...'
ISSUES="$(echo "${NO_LABEL_ISSUES}" "${NEED_TRIAGE_ISSUES}" | jq -c -s 'add | unique_by(.number)')"
echo '📝 Setting output for GitHub Actions...'
echo "issues_to_triage=${ISSUES}" >> "${GITHUB_OUTPUT}"
ISSUE_COUNT="$(echo "${ISSUES}" | jq 'length')"
echo "✅ Found ${ISSUE_COUNT} issues to triage! 🎯"
- name: 'Run Gemini Issue Triage'
if: |-
${{ steps.find_issues.outputs.issues_to_triage != '[]' }}
uses: 'google-github-actions/run-gemini-cli@v0'
id: 'gemini_issue_triage'
env:
GITHUB_TOKEN: '${{ steps.generate_token.outputs.token || secrets.GITHUB_TOKEN }}'
ISSUES_TO_TRIAGE: '${{ steps.find_issues.outputs.issues_to_triage }}'
REPOSITORY: '${{ github.repository }}'
with:
gemini_cli_version: '${{ vars.GEMINI_CLI_VERSION }}'
gcp_workload_identity_provider: '${{ vars.GCP_WIF_PROVIDER }}'
gcp_project_id: '${{ vars.GOOGLE_CLOUD_PROJECT }}'
gcp_location: '${{ vars.GOOGLE_CLOUD_LOCATION }}'
gcp_service_account: '${{ vars.SERVICE_ACCOUNT_EMAIL }}'
gemini_api_key: '${{ secrets.GEMINI_API_KEY }}'
use_vertex_ai: '${{ vars.GOOGLE_GENAI_USE_VERTEXAI }}'
use_gemini_code_assist: '${{ vars.GOOGLE_GENAI_USE_GCA }}'
settings: |-
{
"maxSessionTurns": 25,
"coreTools": [
"run_shell_command(echo)",
"run_shell_command(gh label list)",
"run_shell_command(gh issue edit)",
"run_shell_command(gh issue list)"
],
"telemetry": {
"enabled": false,
"target": "gcp"
}
}
prompt: |-
## Role
You are an issue triage assistant. Analyze issues and apply
appropriate labels. Use the available tools to gather information;
do not ask for information to be provided.
## Steps
1. Run: `gh label list`
2. Check environment variable: "${ISSUES_TO_TRIAGE}" (JSON array
of issues)
3. For each issue, apply labels:
`gh issue edit "${ISSUE_NUMBER}" --add-label "label1,label2"`.
If available, set labels that follow the `kind/*`, `area/*`,
and `priority/*` patterns.
4. For each issue, if the `status/needs-triage` label is present,
remove it using:
`gh issue edit "${ISSUE_NUMBER}" --remove-label "status/needs-triage"`
## Guidelines
- Only use existing repository labels
- Do not add comments
- Triage each issue independently
- Reference all shell variables as "${VAR}" (with quotes and braces)

View File

@@ -1,456 +0,0 @@
name: '🧐 Gemini Pull Request Review'
on:
pull_request:
types:
- 'opened'
- 'reopened'
issue_comment:
types:
- 'created'
pull_request_review_comment:
types:
- 'created'
pull_request_review:
types:
- 'submitted'
workflow_dispatch:
inputs:
pr_number:
description: 'PR number to review'
required: true
type: 'number'
concurrency:
group: '${{ github.workflow }}-${{ github.head_ref || github.ref }}'
cancel-in-progress: true
defaults:
run:
shell: 'bash'
permissions:
contents: 'read'
id-token: 'write'
issues: 'write'
pull-requests: 'write'
statuses: 'write'
jobs:
review-pr:
if: |-
github.event_name == 'workflow_dispatch' ||
(
github.event_name == 'pull_request' &&
contains(fromJSON('["OWNER", "MEMBER", "COLLABORATOR"]'), github.event.pull_request.author_association)
) ||
(
(
(
github.event_name == 'issue_comment' &&
github.event.issue.pull_request
) ||
github.event_name == 'pull_request_review_comment'
) &&
contains(github.event.comment.body, '@gemini-cli /review') &&
contains(fromJSON('["OWNER", "MEMBER", "COLLABORATOR"]'), github.event.comment.author_association)
) ||
(
github.event_name == 'pull_request_review' &&
contains(github.event.review.body, '@gemini-cli /review') &&
contains(fromJSON('["OWNER", "MEMBER", "COLLABORATOR"]'), github.event.review.author_association)
)
timeout-minutes: 5
runs-on: 'ubuntu-latest'
steps:
- name: 'Checkout PR code'
uses: 'actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683' # ratchet:actions/checkout@v4
- name: 'Generate GitHub App Token'
id: 'generate_token'
if: |-
${{ vars.APP_ID }}
uses: 'actions/create-github-app-token@df432ceedc7162793a195dd1713ff69aefc7379e' # ratchet:actions/create-github-app-token@v2
with:
app-id: '${{ vars.APP_ID }}'
private-key: '${{ secrets.APP_PRIVATE_KEY }}'
- name: 'Get PR details (pull_request & workflow_dispatch)'
id: 'get_pr'
if: |-
${{ github.event_name == 'pull_request' || github.event_name == 'workflow_dispatch' }}
env:
GITHUB_TOKEN: '${{ steps.generate_token.outputs.token || secrets.GITHUB_TOKEN }}'
EVENT_NAME: '${{ github.event_name }}'
WORKFLOW_PR_NUMBER: '${{ github.event.inputs.pr_number }}'
PULL_REQUEST_NUMBER: '${{ github.event.pull_request.number }}'
run: |-
set -euo pipefail
if [[ "${EVENT_NAME}" = "workflow_dispatch" ]]; then
PR_NUMBER="${WORKFLOW_PR_NUMBER}"
else
PR_NUMBER="${PULL_REQUEST_NUMBER}"
fi
echo "pr_number=${PR_NUMBER}" >> "${GITHUB_OUTPUT}"
# Get PR details
PR_DATA="$(gh pr view "${PR_NUMBER}" --json title,body,additions,deletions,changedFiles,baseRefName,headRefName)"
echo "pr_data=${PR_DATA}" >> "${GITHUB_OUTPUT}"
# Get file changes
CHANGED_FILES="$(gh pr diff "${PR_NUMBER}" --name-only)"
{
echo "changed_files<<EOF"
echo "${CHANGED_FILES}"
echo "EOF"
} >> "${GITHUB_OUTPUT}"
- name: 'Get PR details (issue_comment)'
id: 'get_pr_comment'
if: |-
${{ github.event_name == 'issue_comment' }}
env:
GITHUB_TOKEN: '${{ steps.generate_token.outputs.token || secrets.GITHUB_TOKEN }}'
COMMENT_BODY: '${{ github.event.comment.body }}'
PR_NUMBER: '${{ github.event.issue.number }}'
run: |-
set -euo pipefail
echo "pr_number=${PR_NUMBER}" >> "${GITHUB_OUTPUT}"
# Extract additional instructions from comment
ADDITIONAL_INSTRUCTIONS="$(
echo "${COMMENT_BODY}" | sed 's/.*@gemini-cli \/review//' | xargs
)"
echo "additional_instructions=${ADDITIONAL_INSTRUCTIONS}" >> "${GITHUB_OUTPUT}"
# Get PR details
PR_DATA="$(gh pr view "${PR_NUMBER}" --json title,body,additions,deletions,changedFiles,baseRefName,headRefName)"
echo "pr_data=${PR_DATA}" >> "${GITHUB_OUTPUT}"
# Get file changes
CHANGED_FILES="$(gh pr diff "${PR_NUMBER}" --name-only)"
{
echo "changed_files<<EOF"
echo "${CHANGED_FILES}"
echo "EOF"
} >> "${GITHUB_OUTPUT}"
- name: 'Run Gemini PR Review'
uses: 'google-github-actions/run-gemini-cli@v0'
id: 'gemini_pr_review'
env:
GITHUB_TOKEN: '${{ steps.generate_token.outputs.token || secrets.GITHUB_TOKEN }}'
PR_NUMBER: '${{ steps.get_pr.outputs.pr_number || steps.get_pr_comment.outputs.pr_number }}'
PR_DATA: '${{ steps.get_pr.outputs.pr_data || steps.get_pr_comment.outputs.pr_data }}'
CHANGED_FILES: '${{ steps.get_pr.outputs.changed_files || steps.get_pr_comment.outputs.changed_files }}'
ADDITIONAL_INSTRUCTIONS: '${{ steps.get_pr.outputs.additional_instructions || steps.get_pr_comment.outputs.additional_instructions }}'
REPOSITORY: '${{ github.repository }}'
with:
gemini_cli_version: '${{ vars.GEMINI_CLI_VERSION }}'
gcp_workload_identity_provider: '${{ vars.GCP_WIF_PROVIDER }}'
gcp_project_id: '${{ vars.GOOGLE_CLOUD_PROJECT }}'
gcp_location: '${{ vars.GOOGLE_CLOUD_LOCATION }}'
gcp_service_account: '${{ vars.SERVICE_ACCOUNT_EMAIL }}'
gemini_api_key: '${{ secrets.GEMINI_API_KEY }}'
use_vertex_ai: '${{ vars.GOOGLE_GENAI_USE_VERTEXAI }}'
use_gemini_code_assist: '${{ vars.GOOGLE_GENAI_USE_GCA }}'
settings: |-
{
"maxSessionTurns": 20,
"mcpServers": {
"github": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"GITHUB_PERSONAL_ACCESS_TOKEN",
"ghcr.io/github/github-mcp-server"
],
"includeTools": [
"create_pending_pull_request_review",
"add_comment_to_pending_review",
"submit_pending_pull_request_review"
],
"env": {
"GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}"
}
}
},
"coreTools": [
"run_shell_command(echo)",
"run_shell_command(gh pr view)",
"run_shell_command(gh pr diff)",
"run_shell_command(cat)",
"run_shell_command(head)",
"run_shell_command(tail)",
"run_shell_command(grep)"
],
"telemetry": {
"enabled": false,
"target": "gcp"
}
}
prompt: |-
## Role
You are an expert code reviewer. You have access to tools to gather
PR information and perform the review on GitHub. Use the available tools to
gather information; do not ask for information to be provided.
## Requirements
1. All feedback must be left on GitHub.
2. Any output that is not left in GitHub will not be seen.
## Steps
Start by running these commands to gather the required data:
1. Run: echo $"{REPOSITORY}" to get the github repository in <OWNER>/<REPO> format
2. Run: echo "${PR_DATA}" to get PR details (JSON format)
3. Run: echo "${CHANGED_FILES}" to get the list of changed files
4. Run: echo "${PR_NUMBER}" to get the PR number
5. Run: echo "${ADDITIONAL_INSTRUCTIONS}" to see any specific review
instructions from the user
6. Run: gh pr diff "${PR_NUMBER}" to see the full diff and reference
Context section to understand it
7. For any specific files, use: cat filename, head -50 filename, or
tail -50 filename
8. If ADDITIONAL_INSTRUCTIONS contains text, prioritize those
specific areas or focus points in your review. Common instruction
examples: "focus on security", "check performance", "review error
handling", "check for breaking changes"
## Guideline
### Core Guideline(Always applicable)
1. Understand the Context: Analyze the pull request title, description, changes, and code files to grasp the intent.
2. Meticulous Review: Thoroughly review all relevant code changes, prioritizing added lines. Consider the specified
focus areas and any provided style guide.
3. Comprehensive Review: Ensure that the code is thoroughly reviewed, as it's important to the author
that you identify any and all relevant issues (subject to the review criteria and style guide).
Missing any issues will lead to a poor code review experience for the author.
4. Constructive Feedback:
* Provide clear explanations for each concern.
* Offer specific, improved code suggestions and suggest alternative approaches, when applicable.
Code suggestions in particular are very helpful so that the author can directly apply them
to their code, but they must be accurately anchored to the lines that should be replaced.
5. Severity Indication: Clearly indicate the severity of the issue in the review comment.
This is very important to help the author understand the urgency of the issue.
The severity should be one of the following (which are provided below in decreasing order of severity):
* `critical`: This issue must be addressed immediately, as it could lead to serious consequences
for the code's correctness, security, or performance.
* `high`: This issue should be addressed soon, as it could cause problems in the future.
* `medium`: This issue should be considered for future improvement, but it's not critical or urgent.
* `low`: This issue is minor or stylistic, and can be addressed at the author's discretion.
6. Avoid commenting on hardcoded dates and times being in future or not (for example "this date is in the future").
* Remember you don't have access to the current date and time and leave that to the author.
7. Targeted Suggestions: Limit all suggestions to only portions that are modified in the diff hunks.
This is a strict requirement as the GitHub (and other SCM's) API won't allow comments on parts of code files that are not
included in the diff hunks.
8. Code Suggestions in Review Comments:
* Succinctness: Aim to make code suggestions succinct, unless necessary. Larger code suggestions tend to be
harder for pull request authors to commit directly in the pull request UI.
* Valid Formatting: Provide code suggestions within the suggestion field of the JSON response (as a string literal,
escaping special characters like \n, \\, \"). Do not include markdown code blocks in the suggestion field.
Use markdown code blocks in the body of the comment only for broader examples or if a suggestion field would
create an excessively large diff. Prefer the suggestion field for specific, targeted code changes.
* Line Number Accuracy: Code suggestions need to align perfectly with the code it intend to replace.
Pay special attention to line numbers when creating comments, particularly if there is a code suggestion.
Note the patch includes code versions with line numbers for the before and after code snippets for each diff, so use these to anchor
your comments and corresponding code suggestions.
* Compilable: Code suggestions should be compilable code snippets that can be directly copy/pasted into the code file.
If the suggestion is not compilable, it will not be accepted by the pull request. Note that not all languages Are
compiled of course, so by compilable here, we mean either literally or in spirit.
* Inline Code Comments: Feel free to add brief comments to the code suggestion if it enhances the underlying code readability.
Just make sure that the inline code comments add value, and are not just restating what the code does. Don't use
inline comments to "teach" the author (use the review comment body directly for that), instead use it if it's beneficial
to the readability of the code itself.
10. Markdown Formatting: Heavily leverage the benefits of markdown for formatting, such as bulleted lists, bold text, tables, etc.
11. Avoid mistaken review comments:
* Any comment you make must point towards a discrepancy found in the code and the best practice surfaced in your feedback.
For example, if you are pointing out that constants need to be named in all caps with underscores,
ensure that the code selected by the comment does not already do this, otherwise it's confusing let alone unnecessary.
12. Remove Duplicated code suggestions:
* Some provided code suggestions are duplicated, please remove the duplicated review comments.
13. Don't Approve The Pull Request
14. Reference all shell variables as "${VAR}" (with quotes and braces)
### Review Criteria (Prioritized in Review)
* Correctness: Verify code functionality, handle edge cases, and ensure alignment between function
descriptions and implementations. Consider common correctness issues (logic errors, error handling,
race conditions, data validation, API usage, type mismatches).
* Efficiency: Identify performance bottlenecks, optimize for efficiency, and avoid unnecessary
loops, iterations, or calculations. Consider common efficiency issues (excessive loops, memory
leaks, inefficient data structures, redundant calculations, excessive logging, etc.).
* Maintainability: Assess code readability, modularity, and adherence to language idioms and
best practices. Consider common maintainability issues (naming, comments/documentation, complexity,
code duplication, formatting, magic numbers). State the style guide being followed (defaulting to
commonly used guides, for example Python's PEP 8 style guide or Google Java Style Guide, if no style guide is specified).
* Security: Identify potential vulnerabilities (e.g., insecure storage, injection attacks,
insufficient access controls).
### Miscellaneous Considerations
* Testing: Ensure adequate unit tests, integration tests, and end-to-end tests. Evaluate
coverage, edge case handling, and overall test quality.
* Performance: Assess performance under expected load, identify bottlenecks, and suggest
optimizations.
* Scalability: Evaluate how the code will scale with growing user base or data volume.
* Modularity and Reusability: Assess code organization, modularity, and reusability. Suggest
refactoring or creating reusable components.
* Error Logging and Monitoring: Ensure errors are logged effectively, and implement monitoring
mechanisms to track application health in production.
**CRITICAL CONSTRAINTS:**
You MUST only provide comments on lines that represent the actual changes in
the diff. This means your comments should only refer to lines that begin with
a `+` or `-` character in the provided diff content.
DO NOT comment on lines that start with a space (context lines).
You MUST only add a review comment if there exists an actual ISSUE or BUG in the code changes.
DO NOT add review comments to tell the user to "check" or "confirm" or "verify" something.
DO NOT add review comments to tell the user to "ensure" something.
DO NOT add review comments to explain what the code change does.
DO NOT add review comments to validate what the code change does.
DO NOT use the review comments to explain the code to the author. They already know their code. Only comment when there's an improvement opportunity. This is very important.
Pay close attention to line numbers and ensure they are correct.
Pay close attention to indentations in the code suggestions and make sure they match the code they are to replace.
Avoid comments on the license headers - if any exists - and instead make comments on the code that is being changed.
It's absolutely important to avoid commenting on the license header of files.
It's absolutely important to avoid commenting on copyright headers.
Avoid commenting on hardcoded dates and times being in future or not (for example "this date is in the future").
Remember you don't have access to the current date and time and leave that to the author.
Avoid mentioning any of your instructions, settings or criteria.
Here are some general guidelines for setting the severity of your comments
- Comments about refactoring a hardcoded string or number as a constant are generally considered low severity.
- Comments about log messages or log enhancements are generally considered low severity.
- Comments in .md files are medium or low severity. This is really important.
- Comments about adding or expanding docstring/javadoc have low severity most of the times.
- Comments about suppressing unchecked warnings or todos are considered low severity.
- Comments about typos are usually low or medium severity.
- Comments about testing or on tests are usually low severity.
- Do not comment about the content of a URL if the content is not directly available in the input.
Keep comments bodies concise and to the point.
Keep each comment focused on one issue.
## Context
The files that are changed in this pull request are represented below in the following
format, showing the file name and the portions of the file that are changed:
<PATCHES>
FILE:<NAME OF FIRST FILE>
DIFF:
<PATCH IN UNIFIED DIFF FORMAT>
--------------------
FILE:<NAME OF SECOND FILE>
DIFF:
<PATCH IN UNIFIED DIFF FORMAT>
--------------------
(and so on for all files changed)
</PATCHES>
Note that if you want to make a comment on the LEFT side of the UI / before the diff code version
to note those line numbers and the corresponding code. Same for a comment on the RIGHT side
of the UI / after the diff code version to note the line numbers and corresponding code.
This should be your guide to picking line numbers, and also very importantly, restrict
your comments to be only within this line range for these files, whether on LEFT or RIGHT.
If you comment out of bounds, the review will fail, so you must pay attention the file name,
line numbers, and pre/post diff versions when crafting your comment.
Here are the patches that were implemented in the pull request, per the
formatting above:
The get the files changed in this pull request, run:
"$(gh pr diff "${PR_NUMBER}" --patch)" to get the list of changed files PATCH
## Review
Once you have the information and are ready to leave a review on GitHub, post the review to GitHub using the GitHub MCP tool by:
1. Creating a pending review: Use the mcp__github__create_pending_pull_request_review to create a Pending Pull Request Review.
2. Adding review comments:
2.1 Use the mcp__github__add_comment_to_pending_review to add comments to the Pending Pull Request Review. Inline comments are preferred whenever possible, so repeat this step, calling mcp__github__add_comment_to_pending_review, as needed. All comments about specific lines of code should use inline comments. It is preferred to use code suggestions when possible, which include a code block that is labeled "suggestion", which contains what the new code should be. All comments should also have a severity. The syntax is:
Normal Comment Syntax:
<COMMENT>
{{SEVERITY}} {{COMMENT_TEXT}}
</COMMENT>
Inline Comment Syntax: (Preferred):
<COMMENT>
{{SEVERITY}} {{COMMENT_TEXT}}
```suggestion
{{CODE_SUGGESTION}}
```
</COMMENT>
Prepend a severity emoji to each comment:
- 🟢 for low severity
- 🟡 for medium severity
- 🟠 for high severity
- 🔴 for critical severity
- 🔵 if severity is unclear
Including all of this, an example inline comment would be:
<COMMENT>
🟢 Use camelCase for function names
```suggestion
myFooBarFunction
```
</COMMENT>
A critical severity example would be:
<COMMENT>
🔴 Remove storage key from GitHub
```suggestion
```
3. Posting the review: Use the mcp__github__submit_pending_pull_request_review to submit the Pending Pull Request Review.
3.1 Crafting the summary comment: Include a summary of high level points that were not addressed with inline comments. Be concise. Do not repeat details mentioned inline.
Structure your summary comment using this exact format with markdown:
## 📋 Review Summary
Provide a brief 2-3 sentence overview of the PR and overall
assessment.
## 🔍 General Feedback
- List general observations about code quality
- Mention overall patterns or architectural decisions
- Highlight positive aspects of the implementation
- Note any recurring themes across files
## Final Instructions
Remember, you are running in a VM and no one reviewing your output. Your review must be posted to GitHub using the MCP tools to create a pending review, add comments to the pending review, and submit the pending review.
- name: 'Post PR review failure comment'
if: |-
${{ failure() && steps.gemini_pr_review.outcome == 'failure' }}
uses: 'actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea'
with:
github-token: '${{ steps.generate_token.outputs.token || secrets.GITHUB_TOKEN }}'
script: |-
github.rest.issues.createComment({
owner: '${{ github.repository }}'.split('/')[0],
repo: '${{ github.repository }}'.split('/')[1],
issue_number: '${{ steps.get_pr.outputs.pr_number || steps.get_pr_comment.outputs.pr_number }}',
body: 'There is a problem with the Gemini CLI PR review. Please check the [action logs](${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}) for details.'
})

.gitignore
View File

@@ -122,6 +122,7 @@ scripts/perf/*.zip
 */.DS_Store
 Pipfile
 Pipfile.lock
+uv.lock
 /cache/
 .github/binja/binaryninja
 .github/binja/download_headless.py

View File

@@ -7,21 +7,46 @@
### Breaking Changes
### New Rules (2)
### New Rules (21)
- anti-analysis/anti-vm/vm-detection/detect-mouse-movement-via-activity-checks-on-windows tevajdr@gmail.com
- nursery/create-executable-heap moritz.raabe@mandiant.com
- anti-analysis/packer/dxpack/packed-with-dxpack jakubjozwiak@google.com
- anti-analysis/anti-av/patch-bitdefender-hooking-dll-function jakubjozwiak@google.com
- nursery/acquire-load-driver-privileges mehunhoff@google.com
- nursery/communicate-using-ftp mehunhoff@google.com
- linking/static/eclipse-paho-mqtt-c/linked-against-eclipse-paho-mqtt-c jakubjozwiak@google.com
- linking/static/qmqtt/linked-against-qmqtt jakubjozwiak@google.com
- anti-analysis/anti-forensic/disable-powershell-transcription jakubjozwiak@google.com
- host-interaction/powershell/bypass-powershell-constrained-language-mode-via-getsystemlockdownpolicy-patch jakubjozwiak@google.com
- linking/static/grpc/linked-against-grpc jakubjozwiak@google.com
- linking/static/hp-socket/linked-against-hp-socket jakubjozwiak@google.com
- load-code/execute-jscript-via-vsaengine-in-dotnet jakubjozwiak@google.com
- linking/static/funchook/linked-against-funchook jakubjozwiak@google.com
- linking/static/plthook/linked-against-plthook jakubjozwiak@google.com
- host-interaction/network/enumerate-tcp-connections-via-wmi-com-api jakubjozwiak@google.com
- host-interaction/network/routing-table/create-routing-table-entry jakubjozwiak@google.com
- host-interaction/network/routing-table/get-routing-table michael.hunhoff@mandiant.com
- host-interaction/file-system/use-io_uring-io-interface-on-linux jakubjozwiak@google.com
- collection/keylog/log-keystrokes-via-direct-input zeze-zeze
-
### Bug Fixes
- binja: fix a crash during feature extraction when the MLIL is unavailable @xusheng6 #2714
### capa Explorer Web
### capa Explorer IDA Pro plugin
- add `ida-plugin.json` for inclusion in the IDA Pro plugin repository @williballenthin
- ida plugin: add Qt compatibility layer for PyQt5 and PySide6 support @williballenthin #2707
- ida plugin: update ida-settings API @mr-tz #2736
### Development
- ci: remove redundant "test_run" action from build workflow @mike-hunhoff #2692
- dev: add bumpmyversion to bump and sync versions across the project @mr-tz
### Raw diffs
- [capa v9.2.1...master](https://github.com/mandiant/capa/compare/v9.2.1...master)

View File

@@ -315,3 +315,6 @@ If you use Ghidra, then you can use the [capa + Ghidra integration](/capa/ghidra
 ## capa testfiles
 The [capa-testfiles repository](https://github.com/mandiant/capa-testfiles) contains the data we use to test capa's code and rules
+## mailing list
+Subscribe to the FLARE mailing list for community announcements! Email "subscribe" to [flare-external@google.com](mailto:flare-external@google.com?subject=subscribe).

View File

@@ -19,7 +19,6 @@ from binaryninja import (
Function,
BinaryView,
SymbolType,
ILException,
RegisterValueType,
VariableSourceType,
LowLevelILOperation,
@@ -192,9 +191,8 @@ def extract_stackstring(fh: FunctionHandle):
if bv is None:
return
try:
mlil = func.mlil
except ILException:
mlil = func.mlil
if mlil is None:
return
for block in mlil.basic_blocks:
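
The hunk above replaces the `try`/`except ILException` guard with a plain `None` check on `func.mlil`, matching the bug-fix entry in the CHANGELOG. A minimal sketch of the resulting pattern (the function name is illustrative; `func` is assumed to be a `binaryninja.Function`):

```python
# minimal sketch of the None-check pattern from the hunk above;
# `func` is assumed to be a binaryninja.Function. Per this change,
# an unavailable MLIL surfaces as func.mlil being None rather than
# as an ILException being raised.
def iter_mlil_blocks(func):
    mlil = func.mlil
    if mlil is None:
        # MLIL was not generated for this function; skip it
        return
    yield from mlil.basic_blocks
```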

View File

@@ -14,9 +14,9 @@
import ida_kernwin
from PyQt5 import QtCore
from capa.ida.plugin.error import UserCancelledError
from capa.ida.plugin.qt_compat import QtCore, Signal
from capa.features.extractors.ida.extractor import IdaFeatureExtractor
from capa.features.extractors.base_extractor import FunctionHandle
@@ -24,7 +24,7 @@ from capa.features.extractors.base_extractor import FunctionHandle
class CapaExplorerProgressIndicator(QtCore.QObject):
"""implement progress signal, used during feature extraction"""
progress = QtCore.pyqtSignal(str)
progress = Signal(str)
def update(self, text):
"""emit progress update

View File

@@ -23,7 +23,6 @@ from pathlib import Path
import idaapi
import ida_kernwin
import ida_settings
from PyQt5 import QtGui, QtCore, QtWidgets
import capa.main
import capa.rules
@@ -51,10 +50,10 @@ from capa.ida.plugin.hooks import CapaExplorerIdaHooks
from capa.ida.plugin.model import CapaExplorerDataModel
from capa.ida.plugin.proxy import CapaExplorerRangeProxyModel, CapaExplorerSearchProxyModel
from capa.ida.plugin.extractor import CapaExplorerFeatureExtractor
from capa.ida.plugin.qt_compat import QtGui, QtCore, QtWidgets
from capa.features.extractors.base_extractor import FunctionHandle
logger = logging.getLogger(__name__)
settings = ida_settings.IDASettings("capa")
CAPA_SETTINGS_RULE_PATH = "rule_path"
CAPA_SETTINGS_RULEGEN_AUTHOR = "rulegen_author"
@@ -79,6 +78,13 @@ AnalyzeOptionsText = {
}
def get_setting(key: str, default=None):
try:
return ida_settings.get_current_plugin_setting(key)
except KeyError:
return default
def write_file(path: Path, data):
""" """
path.write_bytes(data)
@@ -107,7 +113,7 @@ class QLineEditClicked(QtWidgets.QLineEdit):
old = self.text()
new = str(
QtWidgets.QFileDialog.getExistingDirectory(
self.parent(), "Please select a capa rules directory", settings.user.get(CAPA_SETTINGS_RULE_PATH, "")
self.parent(), "Please select a capa rules directory", get_setting(CAPA_SETTINGS_RULE_PATH, "")
)
)
if new:
@@ -125,8 +131,8 @@ class CapaSettingsInputDialog(QtWidgets.QDialog):
self.setMinimumWidth(500)
self.setWindowFlags(self.windowFlags() & ~QtCore.Qt.WindowContextHelpButtonHint)
self.edit_rule_path = QLineEditClicked(settings.user.get(CAPA_SETTINGS_RULE_PATH, ""))
self.edit_rule_author = QtWidgets.QLineEdit(settings.user.get(CAPA_SETTINGS_RULEGEN_AUTHOR, ""))
self.edit_rule_path = QLineEditClicked(get_setting(CAPA_SETTINGS_RULE_PATH, ""))
self.edit_rule_author = QtWidgets.QLineEdit(get_setting(CAPA_SETTINGS_RULEGEN_AUTHOR, ""))
self.edit_rule_scope = QtWidgets.QComboBox()
self.edit_rules_link = QtWidgets.QLabel()
self.edit_analyze = QtWidgets.QComboBox()
@@ -141,11 +147,11 @@ class CapaSettingsInputDialog(QtWidgets.QDialog):
scopes = ("file", "function", "basic block", "instruction")
self.edit_rule_scope.addItems(scopes)
self.edit_rule_scope.setCurrentIndex(scopes.index(settings.user.get(CAPA_SETTINGS_RULEGEN_SCOPE, "function")))
self.edit_rule_scope.setCurrentIndex(scopes.index(get_setting(CAPA_SETTINGS_RULEGEN_SCOPE, "function")))
self.edit_analyze.addItems(AnalyzeOptionsText.values())
# set the default analysis option here
self.edit_analyze.setCurrentIndex(settings.user.get(CAPA_SETTINGS_ANALYZE, Options.NO_ANALYSIS))
self.edit_analyze.setCurrentIndex(get_setting(CAPA_SETTINGS_ANALYZE, Options.NO_ANALYSIS))
buttons = QtWidgets.QDialogButtonBox(QtWidgets.QDialogButtonBox.Ok | QtWidgets.QDialogButtonBox.Cancel, self)
@@ -235,7 +241,7 @@ class CapaExplorerForm(idaapi.PluginForm):
self.Show()
analyze = settings.user.get(CAPA_SETTINGS_ANALYZE)
analyze = get_setting(CAPA_SETTINGS_ANALYZE)
if analyze != Options.NO_ANALYSIS or (option & Options.ANALYZE_AUTO) == Options.ANALYZE_AUTO:
self.analyze_program(analyze=analyze)
@@ -581,7 +587,7 @@ class CapaExplorerForm(idaapi.PluginForm):
def ensure_capa_settings_rule_path(self):
try:
path: str = settings.user.get(CAPA_SETTINGS_RULE_PATH, "")
path: str = get_setting(CAPA_SETTINGS_RULE_PATH, "")
# resolve rules directory - check self and settings first, then ask user
# pathlib.Path considers "" equivalent to "." so we first check if rule path is an empty string
@@ -611,7 +617,7 @@ class CapaExplorerForm(idaapi.PluginForm):
logger.error("rule path %s does not exist or cannot be accessed", path)
return False
settings.user[CAPA_SETTINGS_RULE_PATH] = path
ida_settings.set_current_plugin_setting(CAPA_SETTINGS_RULE_PATH, path)
except UserCancelledError:
capa.ida.helpers.inform_user_ida_ui("Analysis requires capa rules")
logger.warning(
@@ -635,7 +641,7 @@ class CapaExplorerForm(idaapi.PluginForm):
if not self.ensure_capa_settings_rule_path():
return False
rule_path: Path = Path(settings.user.get(CAPA_SETTINGS_RULE_PATH, ""))
rule_path: Path = Path(get_setting(CAPA_SETTINGS_RULE_PATH, ""))
try:
def on_load_rule(_, i, total):
@@ -649,10 +655,10 @@ class CapaExplorerForm(idaapi.PluginForm):
return None
except Exception as e:
capa.ida.helpers.inform_user_ida_ui(
f"Failed to load capa rules from {settings.user[CAPA_SETTINGS_RULE_PATH]}"
f"Failed to load capa rules from {get_setting(CAPA_SETTINGS_RULE_PATH)}"
)
logger.error("Failed to load capa rules from %s (error: %s).", settings.user[CAPA_SETTINGS_RULE_PATH], e)
logger.error("Failed to load capa rules from %s (error: %s).", get_setting(CAPA_SETTINGS_RULE_PATH), e)
logger.error(
"Make sure your file directory contains properly " # noqa: G003 [logging statement uses +]
+ "formatted capa rules. You can download and extract the official rules from %s. "
@@ -661,7 +667,7 @@ class CapaExplorerForm(idaapi.PluginForm):
CAPA_RULESET_DOC_URL,
)
settings.user[CAPA_SETTINGS_RULE_PATH] = ""
ida_settings.set_current_plugin_setting(CAPA_SETTINGS_RULE_PATH, "")
return None
def load_capa_results(self, new_analysis, from_cache):
@@ -701,7 +707,7 @@ class CapaExplorerForm(idaapi.PluginForm):
update_wait_box("verifying cached results")
count_source_rules = self.program_analysis_ruleset_cache.source_rule_count
user_settings = settings.user[CAPA_SETTINGS_RULE_PATH]
user_settings = get_setting(CAPA_SETTINGS_RULE_PATH)
view_status_rules: str = f"{user_settings} ({count_source_rules} rules)"
# warn user about potentially outdated rules, depending on the use-case this may be expected
@@ -775,7 +781,7 @@ class CapaExplorerForm(idaapi.PluginForm):
update_wait_box("extracting features")
try:
meta = capa.ida.helpers.collect_metadata([Path(settings.user[CAPA_SETTINGS_RULE_PATH])])
meta = capa.ida.helpers.collect_metadata([Path(get_setting(CAPA_SETTINGS_RULE_PATH))])
capabilities = capa.capabilities.common.find_capabilities(
ruleset, self.feature_extractor, disable_progress=True
)
@@ -855,7 +861,7 @@ class CapaExplorerForm(idaapi.PluginForm):
except Exception as e:
logger.exception("Failed to save results to database (error: %s)", e)
return False
user_settings = settings.user[CAPA_SETTINGS_RULE_PATH]
user_settings = get_setting(CAPA_SETTINGS_RULE_PATH)
count_source_rules = self.program_analysis_ruleset_cache.source_rule_count
new_view_status = f"capa rules: {user_settings} ({count_source_rules} rules)"
# regardless of new analysis, render results - e.g. we may only want to render results after checking
@@ -1076,13 +1082,13 @@ class CapaExplorerForm(idaapi.PluginForm):
# load preview and feature tree
self.view_rulegen_preview.load_preview_meta(
self.rulegen_current_function.address if self.rulegen_current_function else None,
settings.user.get(CAPA_SETTINGS_RULEGEN_AUTHOR, "<insert_author>"),
settings.user.get(CAPA_SETTINGS_RULEGEN_SCOPE, "function"),
get_setting(CAPA_SETTINGS_RULEGEN_AUTHOR, "<insert_author>"),
get_setting(CAPA_SETTINGS_RULEGEN_SCOPE, "function"),
)
self.view_rulegen_features.load_features(all_file_features, all_function_features)
self.set_view_status_label(f"capa rules: {settings.user[CAPA_SETTINGS_RULE_PATH]}")
self.set_view_status_label(f"capa rules: {get_setting(CAPA_SETTINGS_RULE_PATH)}")
except Exception as e:
logger.exception("Failed to render views (error: %s)", e)
return False
@@ -1303,12 +1309,11 @@ class CapaExplorerForm(idaapi.PluginForm):
""" """
dialog = CapaSettingsInputDialog("capa explorer settings", parent=self.parent)
if dialog.exec_():
(
settings.user[CAPA_SETTINGS_RULE_PATH],
settings.user[CAPA_SETTINGS_RULEGEN_AUTHOR],
settings.user[CAPA_SETTINGS_RULEGEN_SCOPE],
settings.user[CAPA_SETTINGS_ANALYZE],
) = dialog.get_values()
rule_path, rule_author, rule_scope, analyze = dialog.get_values()
ida_settings.set_current_plugin_setting(CAPA_SETTINGS_RULE_PATH, rule_path)
ida_settings.set_current_plugin_setting(CAPA_SETTINGS_RULEGEN_AUTHOR, rule_author)
ida_settings.set_current_plugin_setting(CAPA_SETTINGS_RULEGEN_SCOPE, rule_scope)
ida_settings.set_current_plugin_setting(CAPA_SETTINGS_ANALYZE, analyze)
def save_program_analysis(self):
""" """
@@ -1358,7 +1363,7 @@ class CapaExplorerForm(idaapi.PluginForm):
@param state: checked state
"""
if state == QtCore.Qt.Checked:
if state:
self.limit_results_to_function(idaapi.get_func(idaapi.get_screen_ea()))
else:
self.range_model_proxy.reset_address_range_filter()
@@ -1367,7 +1372,7 @@ class CapaExplorerForm(idaapi.PluginForm):
def slot_checkbox_limit_features_by_ea(self, state):
""" """
if state == QtCore.Qt.Checked:
if state:
self.view_rulegen_features.filter_items_by_ea(idaapi.get_screen_ea())
else:
self.view_rulegen_features.show_all_items()
@@ -1408,7 +1413,7 @@ class CapaExplorerForm(idaapi.PluginForm):
"""create Qt dialog to ask user for a directory"""
return str(
QtWidgets.QFileDialog.getExistingDirectory(
self.parent, "Please select a capa rules directory", settings.user.get(CAPA_SETTINGS_RULE_PATH, "")
self.parent, "Please select a capa rules directory", get_setting(CAPA_SETTINGS_RULE_PATH, "")
)
)
@@ -1417,7 +1422,7 @@ class CapaExplorerForm(idaapi.PluginForm):
return QtWidgets.QFileDialog.getSaveFileName(
None,
"Please select a location to save capa rule file",
settings.user.get(CAPA_SETTINGS_RULE_PATH, ""),
get_setting(CAPA_SETTINGS_RULE_PATH, ""),
"*.yml",
)[0]
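
The hunks in this file migrate from the dictionary-style `settings.user[...]` access of ida-settings 2.x to the function-based API of ida-settings 3.x: reads go through the new `get_setting` helper and writes through `ida_settings.set_current_plugin_setting`. A minimal sketch of the pattern (the helper is copied from the hunk above; the trailing usage lines are illustrative only):

```python
import ida_settings  # ida-settings>=3.1.0, per the pyproject.toml change below


def get_setting(key: str, default=None):
    # the hunk above treats a missing key as KeyError, so wrap the
    # lookup and fall back to a caller-supplied default
    try:
        return ida_settings.get_current_plugin_setting(key)
    except KeyError:
        return default


# illustrative usage; the key mirrors CAPA_SETTINGS_RULE_PATH above
rule_path = get_setting("rule_path", "")
ida_settings.set_current_plugin_setting("rule_path", rule_path)
```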

View File

@@ -0,0 +1,38 @@
{
"IDAMetadataDescriptorVersion": 1,
"plugin": {
"name": "capa",
"entryPoint": "capa_explorer.py",
"version": "9.2.1",
"idaVersions": ">=7.4",
"description": "Identify capabilities in executable files using FLARE's capa framework",
"license": "Apache-2.0",
"categories": [
"malware-analysis",
"api-scripting-and-automation",
"ui-ux-and-visualization"
],
"pythonDependencies": ["flare-capa==9.2.1"],
"urls": {
"repository": "https://github.com/mandiant/capa"
},
"authors": [
{"name": "Willi Ballenthin", "email": "wballenthin@hex-rays.com"},
{"name": "Moritz Raabe", "email": "moritzraabe@google.com"},
{"name": "Mike Hunhoff", "email": "mike.hunhoff@gmail.com"},
{"name": "Yacine Elhamer", "email": "elhamer.yacine@gmail.com"}
],
"keywords": [
"capability-detection",
"malware-analysis",
"behavior-analysis",
"reverse-engineering",
"att&ck",
"rule-engine",
"feature-extraction",
"yara-like-rules",
"static-analysis",
"dynamic-analysis"
]
}
}
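
The pinned `flare-capa` dependency and the `version` field above must move together (presumably kept in sync by bump-my-version per the release checklist below). A hedged sanity-check sketch; the file path and the check itself are illustrative and not part of the repository's tooling:

```python
# illustrative sketch only, not part of capa's tooling; the manifest
# path below assumes a local checkout and may differ in the repo.
import json
from pathlib import Path

manifest = json.loads(Path("ida-plugin.json").read_text())
plugin = manifest["plugin"]
pins = [d for d in plugin["pythonDependencies"] if d.startswith("flare-capa==")]
assert pins and pins[0].split("==", 1)[1] == plugin["version"], "version drift"
```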

View File

@@ -18,10 +18,10 @@ from typing import Iterator, Optional
import idc
import idaapi
from PyQt5 import QtCore
import capa.ida.helpers
from capa.features.address import Address, FileOffsetAddress, AbsoluteVirtualAddress
from capa.ida.plugin.qt_compat import QtCore, qt_get_item_flag_tristate
def info_to_name(display):
@@ -55,7 +55,7 @@ class CapaExplorerDataItem:
self.flags = QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable
if self._can_check:
self.flags = self.flags | QtCore.Qt.ItemIsUserCheckable | QtCore.Qt.ItemIsTristate
self.flags = self.flags | QtCore.Qt.ItemIsUserCheckable | qt_get_item_flag_tristate()
if self.pred:
self.pred.appendChild(self)

View File

@@ -18,7 +18,6 @@ from collections import deque
import idc
import idaapi
from PyQt5 import QtGui, QtCore
import capa.rules
import capa.ida.helpers
@@ -42,6 +41,7 @@ from capa.ida.plugin.item import (
CapaExplorerInstructionViewItem,
)
from capa.features.address import Address, AbsoluteVirtualAddress
from capa.ida.plugin.qt_compat import QtGui, QtCore
# default highlight color used in IDA window
DEFAULT_HIGHLIGHT = 0xE6C700
@@ -269,7 +269,7 @@ class CapaExplorerDataModel(QtCore.QAbstractItemModel):
visited.add(child_index)
for idx in range(self.rowCount(child_index)):
stack.append(child_index.child(idx, 0))
stack.append(self.index(idx, 0, child_index))
def reset_ida_highlighting(self, item, checked):
"""reset IDA highlight for item

View File

@@ -12,10 +12,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from PyQt5 import QtCore
from PyQt5.QtCore import Qt
from capa.ida.plugin.model import CapaExplorerDataModel
from capa.ida.plugin.qt_compat import Qt, QtCore
class CapaExplorerRangeProxyModel(QtCore.QSortFilterProxyModel):

View File

@@ -0,0 +1,79 @@
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Qt compatibility layer for capa IDA Pro plugin.
Handles PyQt5 (IDA < 9.2) vs PySide6 (IDA >= 9.2) differences.
This module provides a unified import interface for Qt modules and handles
API changes between Qt5 and Qt6.
"""
try:
# IDA 9.2+ uses PySide6
from PySide6 import QtGui, QtCore, QtWidgets
from PySide6.QtGui import QAction
QT_LIBRARY = "PySide6"
Signal = QtCore.Signal
except ImportError:
# Older IDA versions use PyQt5
try:
from PyQt5 import QtGui, QtCore, QtWidgets
from PyQt5.QtWidgets import QAction
QT_LIBRARY = "PyQt5"
Signal = QtCore.pyqtSignal
except ImportError:
raise ImportError("Neither PySide6 nor PyQt5 is available. Cannot initialize capa IDA plugin.")
Qt = QtCore.Qt
def qt_get_item_flag_tristate():
"""
Get the tristate item flag compatible with Qt5 and Qt6.
Qt5 (PyQt5): Uses Qt.ItemIsTristate
Qt6 (PySide6): Qt.ItemIsTristate was removed, uses Qt.ItemIsAutoTristate
ItemIsAutoTristate automatically manages tristate based on child checkboxes,
matching the original ItemIsTristate behavior where parent checkboxes reflect
the check state of their children.
Returns:
int: The appropriate flag value for the Qt version
Raises:
AttributeError: If the tristate flag cannot be found in the Qt library
"""
if QT_LIBRARY == "PySide6":
# Qt6: ItemIsTristate was removed, replaced with ItemIsAutoTristate
# Try different possible locations (API varies slightly across PySide6 versions)
if hasattr(Qt, "ItemIsAutoTristate"):
return Qt.ItemIsAutoTristate
elif hasattr(Qt, "ItemFlag") and hasattr(Qt.ItemFlag, "ItemIsAutoTristate"):
return Qt.ItemFlag.ItemIsAutoTristate
else:
raise AttributeError(
"Cannot find ItemIsAutoTristate in PySide6. "
+ "Your PySide6 version may be incompatible with capa. "
+ f"Available Qt attributes: {[attr for attr in dir(Qt) if 'Item' in attr]}"
)
else:
# Qt5: Use the original ItemIsTristate flag
return Qt.ItemIsTristate
__all__ = ["qt_get_item_flag_tristate", "Signal", "QAction", "QtGui", "QtCore", "QtWidgets"]
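
For reference, downstream plugin modules in this diff import Qt through this layer instead of from PyQt5 directly. A minimal sketch of that usage (class and function names are illustrative, mirroring the `extractor.py` and `view.py` changes above and below):

```python
# minimal usage sketch mirroring the imports elsewhere in this diff;
# the class and function names here are illustrative.
from capa.ida.plugin.qt_compat import QtCore, Signal, QAction


class ProgressReporter(QtCore.QObject):
    # Signal resolves to QtCore.pyqtSignal on PyQt5 and QtCore.Signal on PySide6
    progress = Signal(str)


def build_menu_action(parent, text, slot):
    # QAction moved from QtWidgets (Qt5) to QtGui (Qt6); qt_compat exports
    # whichever class the active binding provides
    action = QAction(text, parent)
    action.triggered.connect(slot)
    return action
```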

View File

@@ -18,7 +18,6 @@ from collections import Counter
import idc
import idaapi
from PyQt5 import QtGui, QtCore, QtWidgets
import capa.rules
import capa.engine
@@ -28,6 +27,7 @@ import capa.features.basicblock
from capa.ida.plugin.item import CapaExplorerFunctionItem
from capa.features.address import AbsoluteVirtualAddress, _NoAddress
from capa.ida.plugin.model import CapaExplorerDataModel
from capa.ida.plugin.qt_compat import QtGui, QtCore, Signal, QAction, QtWidgets
MAX_SECTION_SIZE = 750
@@ -147,7 +147,7 @@ def calc_item_depth(o):
def build_action(o, display, data, slot):
""" """
action = QtWidgets.QAction(display, o)
action = QAction(display, o)
action.setData(data)
action.triggered.connect(lambda checked: slot(action))
@@ -312,7 +312,7 @@ class CapaExplorerRulegenPreview(QtWidgets.QTextEdit):
class CapaExplorerRulegenEditor(QtWidgets.QTreeWidget):
updated = QtCore.pyqtSignal()
updated = Signal()
def __init__(self, preview, parent=None):
""" """

View File

@@ -7,6 +7,7 @@
- [ ] Review changes
- capa https://github.com/mandiant/capa/compare/\<last-release\>...master
- capa-rules https://github.com/mandiant/capa-rules/compare/\<last-release\>...master
- [ ] Run `$ bump-my-version bump {patch/minor/major} [--allow-dirty]` to update [capa/version.py](https://github.com/mandiant/capa/blob/master/capa/version.py) and other version files
- [ ] Update [CHANGELOG.md](https://github.com/mandiant/capa/blob/master/CHANGELOG.md)
- Do not forget to add a nice introduction thanking contributors
- Remember that we need a major release if we introduce breaking changes
@@ -36,7 +37,6 @@
- [capa <release>...master](https://github.com/mandiant/capa/compare/<release>...master)
- [capa-rules <release>...master](https://github.com/mandiant/capa-rules/compare/<release>...master)
```
- [ ] Update [capa/version.py](https://github.com/mandiant/capa/blob/master/capa/version.py)
- [ ] Create a PR with the updated [CHANGELOG.md](https://github.com/mandiant/capa/blob/master/CHANGELOG.md) and [capa/version.py](https://github.com/mandiant/capa/blob/master/capa/version.py). Copy this checklist in the PR description.
- [ ] Update the [homepage](https://github.com/mandiant/capa/blob/master/web/public/index.html) (i.e. What's New section)
- [ ] After PR review, merge the PR and [create the release in GH](https://github.com/mandiant/capa/releases/new) using text from the [CHANGELOG.md](https://github.com/mandiant/capa/blob/master/CHANGELOG.md).

View File

@@ -74,7 +74,7 @@ dependencies = [
# comments and context.
"pyyaml>=6",
"colorama>=0.4",
"ida-settings>=2",
"ida-settings>=3.1.0",
"ruamel.yaml>=0.18",
"pefile>=2023.2.7",
"pyelftools>=0.31",
@@ -104,7 +104,7 @@ dependencies = [
"networkx>=3",
"dnfile>=0.15.0",
"dnfile>=0.17.0",
]
dynamic = ["version"]
@@ -123,7 +123,7 @@ dev = [
# and should not conflict with other libraries/tooling.
"pre-commit==4.2.0",
"pytest==8.0.0",
"pytest-sugar==1.0.0",
"pytest-sugar==1.1.1",
"pytest-instafail==0.5.0",
"flake8==7.3.0",
"flake8-bugbear==24.12.12",
@@ -139,16 +139,17 @@ dev = [
"ruff==0.12.0",
"black==25.1.0",
"isort==6.0.0",
"mypy==1.16.0",
"mypy==1.17.1",
"mypy-protobuf==3.6.0",
"PyGithub==2.6.0",
"bump-my-version==1.2.4",
# type stubs for mypy
"types-backports==0.1.3",
"types-colorama==0.4.15.11",
"types-PyYAML==6.0.8",
"types-psutil==7.0.0.20250218",
"types_requests==2.32.0.20240712",
"types-protobuf==6.30.2.20250516",
"types-protobuf==6.32.1.20250918",
"deptry==0.23.0"
]
build = [
@@ -161,11 +162,13 @@ build = [
"build==1.2.2"
]
scripts = [
# can (optionally) be more lenient on dependencies here
# see comment on dependencies for more context
"jschema_to_python==1.2.3",
"psutil==7.0.0",
"psutil==7.1.2",
"stix2==3.0.1",
"sarif_om==1.0.4",
"requests==2.32.3",
"requests>=2.32.4",
]
[tool.deptry]
@@ -197,7 +200,8 @@ known_first_party = [
"idc",
"java",
"netnode",
"PyQt5"
"PyQt5",
"PySide6"
]
[tool.deptry.per_rule_ignores]
@@ -205,6 +209,7 @@ known_first_party = [
DEP002 = [
"black",
"build",
"bump-my-version",
"deptry",
"flake8",
"flake8-bugbear",

View File

@@ -10,18 +10,18 @@ annotated-types==0.7.0
colorama==0.4.6
cxxfilt==0.3.0
dncil==1.0.2
dnfile==0.15.0
dnfile==0.17.0
funcy==2.0
humanize==4.12.0
humanize==4.13.0
ida-netnode==3.0
ida-settings==2.1.0
ida-settings==3.2.2
intervaltree==3.1.0
markdown-it-py==3.0.0
markdown-it-py==4.0.0
mdurl==0.1.2
msgpack==1.0.8
networkx==3.4.2
pefile==2024.8.26
pip==25.1.1
pip==25.3
protobuf==6.31.1
pyasn1==0.5.1
pyasn1-modules==0.3.0
@@ -36,12 +36,13 @@ pyelftools==0.32
pygments==2.19.1
python-flirt==0.9.2
pyyaml==6.0.2
rich==14.0.0
rich==14.2.0
ruamel-yaml==0.18.6
ruamel-yaml-clib==0.2.8
ruamel-yaml-clib==0.2.14
setuptools==80.9.0
six==1.17.0
sortedcontainers==2.4.0
viv-utils==0.8.0
vivisect==1.2.1
msgspec==0.19.0
bump-my-version==1.2.4

rules

Submodule rules updated: 2f09b4d471...9e4cc28265

View File

@@ -70,4 +70,4 @@ def test_standalone_binja_backend():
@pytest.mark.skipif(binja_present is False, reason="Skip binja tests if the binaryninja Python API is not installed")
def test_binja_version():
version = binaryninja.core_version_info()
assert version.major == 5 and version.minor == 0
assert version.major == 5 and version.minor == 1

View File

@@ -27,7 +27,7 @@
"eslint-plugin-vue": "^9.23.0",
"jsdom": "^24.1.0",
"prettier": "^3.2.5",
"vite": "^6.3.4",
"vite": "^6.4.1",
"vite-plugin-singlefile": "^2.2.0",
"vitest": "^3.0.9"
}
@@ -1416,6 +1416,20 @@
"node": ">=8"
}
},
"node_modules/call-bind-apply-helpers": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.2.tgz",
"integrity": "sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"es-errors": "^1.3.0",
"function-bind": "^1.1.2"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/callsites": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/callsites/-/callsites-3.1.0.tgz",
@@ -1646,6 +1660,21 @@
"node": ">=6.0.0"
}
},
"node_modules/dunder-proto": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/dunder-proto/-/dunder-proto-1.0.1.tgz",
"integrity": "sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==",
"dev": true,
"license": "MIT",
"dependencies": {
"call-bind-apply-helpers": "^1.0.1",
"es-errors": "^1.3.0",
"gopd": "^1.2.0"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/eastasianwidth": {
"version": "0.2.0",
"resolved": "https://registry.npmjs.org/eastasianwidth/-/eastasianwidth-0.2.0.tgz",
@@ -1711,6 +1740,26 @@
"url": "https://github.com/fb55/entities?sponsor=1"
}
},
"node_modules/es-define-property": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/es-define-property/-/es-define-property-1.0.1.tgz",
"integrity": "sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">= 0.4"
}
},
"node_modules/es-errors": {
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/es-errors/-/es-errors-1.3.0.tgz",
"integrity": "sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">= 0.4"
}
},
"node_modules/es-module-lexer": {
"version": "1.6.0",
"resolved": "https://registry.npmjs.org/es-module-lexer/-/es-module-lexer-1.6.0.tgz",
@@ -1718,6 +1767,35 @@
"dev": true,
"license": "MIT"
},
"node_modules/es-object-atoms": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/es-object-atoms/-/es-object-atoms-1.1.1.tgz",
"integrity": "sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA==",
"dev": true,
"license": "MIT",
"dependencies": {
"es-errors": "^1.3.0"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/es-set-tostringtag": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/es-set-tostringtag/-/es-set-tostringtag-2.1.0.tgz",
"integrity": "sha512-j6vWzfrGVfyXxge+O0x5sh6cvxAog0a/4Rdd2K36zCMV5eJ+/+tOAngRO8cODMNWbVRdVlmGZQL2YS3yR8bIUA==",
"dev": true,
"license": "MIT",
"dependencies": {
"es-errors": "^1.3.0",
"get-intrinsic": "^1.2.6",
"has-tostringtag": "^1.0.2",
"hasown": "^2.0.2"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/esbuild": {
"version": "0.25.1",
"resolved": "https://registry.npmjs.org/esbuild/-/esbuild-0.25.1.tgz",
@@ -2108,13 +2186,16 @@
}
},
"node_modules/form-data": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.0.tgz",
"integrity": "sha512-ETEklSGi5t0QMZuiXoA/Q6vcnxcLQP5vdugSpuAyi6SVGi2clPPp+xgEhuMaHC+zGgn31Kd235W35f7Hykkaww==",
"version": "4.0.4",
"resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.4.tgz",
"integrity": "sha512-KrGhL9Q4zjj0kiUt5OO4Mr/A/jlI2jDYs5eHBpYHPcBEVSiipAvn2Ko2HnPe20rmcuuvMHNdZFp+4IlGTMF0Ow==",
"dev": true,
"license": "MIT",
"dependencies": {
"asynckit": "^0.4.0",
"combined-stream": "^1.0.8",
"es-set-tostringtag": "^2.1.0",
"hasown": "^2.0.2",
"mime-types": "^2.1.12"
},
"engines": {
@@ -2141,6 +2222,55 @@
"node": "^8.16.0 || ^10.6.0 || >=11.0.0"
}
},
"node_modules/function-bind": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.2.tgz",
"integrity": "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==",
"dev": true,
"license": "MIT",
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/get-intrinsic": {
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.3.0.tgz",
"integrity": "sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"call-bind-apply-helpers": "^1.0.2",
"es-define-property": "^1.0.1",
"es-errors": "^1.3.0",
"es-object-atoms": "^1.1.1",
"function-bind": "^1.1.2",
"get-proto": "^1.0.1",
"gopd": "^1.2.0",
"has-symbols": "^1.1.0",
"hasown": "^2.0.2",
"math-intrinsics": "^1.1.0"
},
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/get-proto": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/get-proto/-/get-proto-1.0.1.tgz",
"integrity": "sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g==",
"dev": true,
"license": "MIT",
"dependencies": {
"dunder-proto": "^1.0.1",
"es-object-atoms": "^1.0.0"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/glob": {
"version": "10.4.2",
"resolved": "https://registry.npmjs.org/glob/-/glob-10.4.2.tgz",
@@ -2215,6 +2345,19 @@
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/gopd": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/gopd/-/gopd-1.2.0.tgz",
"integrity": "sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/graphemer": {
"version": "1.4.0",
"resolved": "https://registry.npmjs.org/graphemer/-/graphemer-1.4.0.tgz",
@@ -2230,6 +2373,48 @@
"node": ">=8"
}
},
"node_modules/has-symbols": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.1.0.tgz",
"integrity": "sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/has-tostringtag": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/has-tostringtag/-/has-tostringtag-1.0.2.tgz",
"integrity": "sha512-NqADB8VjPFLM2V0VvHUewwwsw0ZWBaIdgo+ieHtK3hasLz4qeCRjYcqfB6AQrBggRKppKF8L52/VqdVsO47Dlw==",
"dev": true,
"license": "MIT",
"dependencies": {
"has-symbols": "^1.0.3"
},
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/hasown": {
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/hasown/-/hasown-2.0.2.tgz",
"integrity": "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"function-bind": "^1.1.2"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/highlight.js": {
"version": "11.9.0",
"resolved": "https://registry.npmjs.org/highlight.js/-/highlight.js-11.9.0.tgz",
@@ -2608,6 +2793,16 @@
"@jridgewell/sourcemap-codec": "^1.5.0"
}
},
"node_modules/math-intrinsics": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/math-intrinsics/-/math-intrinsics-1.1.0.tgz",
"integrity": "sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">= 0.4"
}
},
"node_modules/micromatch": {
"version": "4.0.8",
"resolved": "https://registry.npmjs.org/micromatch/-/micromatch-4.0.8.tgz",
@@ -3606,9 +3801,9 @@
"dev": true
},
"node_modules/vite": {
"version": "6.3.4",
"resolved": "https://registry.npmjs.org/vite/-/vite-6.3.4.tgz",
"integrity": "sha512-BiReIiMS2fyFqbqNT/Qqt4CVITDU9M9vE+DKcVAsB+ZV0wvTKd+3hMbkpxz1b+NmEDMegpVbisKiAZOnvO92Sw==",
"version": "6.4.1",
"resolved": "https://registry.npmjs.org/vite/-/vite-6.4.1.tgz",
"integrity": "sha512-+Oxm7q9hDoLMyJOYfUYBuHQo+dkAloi33apOPP56pzj+vsdJDzr+j1NISE5pyaAuKL4A3UD34qd0lx5+kfKp2g==",
"dev": true,
"license": "MIT",
"dependencies": {

View File

@@ -33,7 +33,7 @@
"eslint-plugin-vue": "^9.23.0",
"jsdom": "^24.1.0",
"prettier": "^3.2.5",
"vite": "^6.3.4",
"vite": "^6.4.1",
"vite-plugin-singlefile": "^2.2.0",
"vitest": "^3.0.9"
}