diff --git a/examples/anthos-cicd-with-gitlab/docs/3-set-up-anthos-config-management.md b/examples/anthos-cicd-with-gitlab/docs/3-set-up-anthos-config-management.md index 4b5a618318a..5d6504dc986 100644 --- a/examples/anthos-cicd-with-gitlab/docs/3-set-up-anthos-config-management.md +++ b/examples/anthos-cicd-with-gitlab/docs/3-set-up-anthos-config-management.md @@ -49,7 +49,7 @@ The [Config Sync Operator](https://cloud.google.com/kubernetes-engine/docs/add-o cd ~/$GROUP_NAME/acm mkdir setup cd setup/ -gsutil cp gs://config-management-release/released/latest/config-management-operator.yaml config-management-operator.yaml +gcloud storage cp gs://config-management-release/released/latest/config-management-operator.yaml config-management-operator.yaml for i in "dev" "prod"; do gcloud container clusters get-credentials ${i} --zone=$ZONE diff --git a/examples/cloud-composer-cicd/cloudbuild.yaml b/examples/cloud-composer-cicd/cloudbuild.yaml index c6b07cef7c2..b591642e778 100644 --- a/examples/cloud-composer-cicd/cloudbuild.yaml +++ b/examples/cloud-composer-cicd/cloudbuild.yaml @@ -65,15 +65,15 @@ steps: find /workspace/airflow/logs/ -type f -name *.log | sort | xargs cat # Fail if the backfill failed. exit $ret -- name: gcr.io/cloud-builders/gsutil +- name: gcr.io/cloud-builders/gcloud # Deploy the DAGs to your composer environment DAGs GCS folder id: Deploy DAGs args: - - -m + - storage - rsync - - -r - - -c - - -x + - --recursive + - --checksums-only + - --exclude - .*\.pyc|airflow_monitoring.py - /workspace/${_DIRECTORY}/dags/ - ${_DEPLOY_DAGS_LOCATION} diff --git a/examples/cloud-composer-examples/composer_dataflow_examples/README.md b/examples/cloud-composer-examples/composer_dataflow_examples/README.md index 4097eaa2045..e89f214fbf3 100644 --- a/examples/cloud-composer-examples/composer_dataflow_examples/README.md +++ b/examples/cloud-composer-examples/composer_dataflow_examples/README.md @@ -98,6 +98,6 @@ The following high-level steps describe the setup needed to run this example: The workflow is automatically triggered by Cloud Function that gets invoked when a new file is uploaded into the *input-gcs-bucket* For this example workflow, the [usa_names.csv](resources/usa_names.csv) file can be uploaded into the *input-gcs-bucket* -`gsutil cp resources/usa_names.csv gs://` **_input-gcs-bucket_** +`gcloud storage cp resources/usa_names.csv gs://` **_input-gcs-bucket_** *** diff --git a/examples/cloud-composer-examples/composer_http_post_example/README.md b/examples/cloud-composer-examples/composer_http_post_example/README.md index b921d34deb8..07fdc5c3630 100644 --- a/examples/cloud-composer-examples/composer_http_post_example/README.md +++ b/examples/cloud-composer-examples/composer_http_post_example/README.md @@ -84,7 +84,7 @@ gcloud config set project $PROJECT ``` 1. Create a Cloud Storage (GCS) bucket for receiving input files (*input-gcs-bucket*). ```bash -gsutil mb -c regional -l us-central1 gs://$PROJECT +gcloud storage buckets create --default-storage-class=regional --location=us-central1 gs://$PROJECT ``` 2. Export the public BigQuery Table to a new dataset. ```bash @@ -131,13 +131,13 @@ gcloud composer environments run demo-ephemeral-dataproc \ 7. Upload the PySpark code [spark_avg_speed.py](composer_http_examples/spark_avg_speed.py) into a *spark-jobs* folder in GCS. 
```bash -gsutil cp ~/professional-services/examples/cloud-composer-examples/composer_http_post_example/spark_avg_speed.py gs://$PROJECT/spark-jobs/ +gcloud storage cp ~/professional-services/examples/cloud-composer-examples/composer_http_post_example/spark_avg_speed.py gs://$PROJECT/spark-jobs/ ``` 8. The DAG folder is essentially a Cloud Storage bucket. Upload the [ephemeral_dataproc_spark_dag.py](composer_http_examples/ephemeral_dataproc_spark_dag.py) file into the folder: ```bash -gsutil cp ~/professional-services/examples/cloud-composer-examples/composer_http_post_example/ephemeral_dataproc_spark_dag.py gs:///dags +gcloud storage cp ~/professional-services/examples/cloud-composer-examples/composer_http_post_example/ephemeral_dataproc_spark_dag.py gs:///dags ``` *** diff --git a/examples/cloud-composer-examples/composer_http_post_example/ephemeral_dataproc_spark_dag.py b/examples/cloud-composer-examples/composer_http_post_example/ephemeral_dataproc_spark_dag.py index 94febdab623..073f726a1da 100644 --- a/examples/cloud-composer-examples/composer_http_post_example/ephemeral_dataproc_spark_dag.py +++ b/examples/cloud-composer-examples/composer_http_post_example/ephemeral_dataproc_spark_dag.py @@ -126,14 +126,14 @@ # Delete gcs files in the timestamped transformed folder. delete_transformed_files = BashOperator( task_id='delete_transformed_files', - bash_command="gsutil -m rm -r gs://{{ var.value.gcs_bucket }}" + + bash_command="gcloud storage rm --recursive gs://{{ var.value.gcs_bucket }}" + "/{{ dag_run.conf['transformed_path'] }}/") # If the spark job or BQ Load fails we rename the timestamped raw path to # a timestamped failed path. move_failed_files = BashOperator( task_id='move_failed_files', - bash_command="gsutil mv gs://{{ var.value.gcs_bucket }}" + + bash_command="gcloud storage mv gs://{{ var.value.gcs_bucket }}" + "/{{ dag_run.conf['raw_path'] }}/ " + "gs://{{ var.value.gcs_bucket}}" + "/{{ dag_run.conf['failed_path'] }}/", diff --git a/examples/cloudml-bee-health-detection/README.md b/examples/cloudml-bee-health-detection/README.md index e510512c763..6d75bd3a09d 100644 --- a/examples/cloudml-bee-health-detection/README.md +++ b/examples/cloudml-bee-health-detection/README.md @@ -9,7 +9,7 @@ The code leverages pre-trained TF Hub image modules and uses Google Cloud Machin JOB_NAME = ml_job$(date +%Y%m%d%H%M%S) JOB_FOLDER = MLEngine/${JOB_NAME} BUCKET_NAME = bee-health -MODEL_PATH = $(gsutil ls gs://${BUCKET_NAME}/${JOB_FOLDER}/export/estimator/ | tail -1) +MODEL_PATH = $(gcloud storage ls gs://${BUCKET_NAME}/${JOB_FOLDER}/export/estimator/ | tail -1) MODEL_NAME = prediction_model MODEL_VERSION = version_1 TEST_DATA = data/test.csv diff --git a/examples/cloudml-churn-prediction/README.md b/examples/cloudml-churn-prediction/README.md index 9eddc64ede7..e4e086ba1dd 100644 --- a/examples/cloudml-churn-prediction/README.md +++ b/examples/cloudml-churn-prediction/README.md @@ -152,7 +152,7 @@ The SavedModel was saved in a timestamped subdirectory of model_dir. 
```shell MODEL_NAME="survival_model" VERSION_NAME="demo_version" -SAVED_MODEL_DIR=$(gsutil ls $MODEL_DIR/export/export | tail -1) +SAVED_MODEL_DIR=$(gcloud storage ls $MODEL_DIR/export/export | tail -1) gcloud ai-platform models create $MODEL_NAME \ --regions us-east1 diff --git a/examples/cloudml-collaborative-filtering/bin/run.serve.cloud.sh b/examples/cloudml-collaborative-filtering/bin/run.serve.cloud.sh index c5a4c682485..fa0c4eae5ea 100755 --- a/examples/cloudml-collaborative-filtering/bin/run.serve.cloud.sh +++ b/examples/cloudml-collaborative-filtering/bin/run.serve.cloud.sh @@ -34,7 +34,7 @@ PROJECT_ID="$(get_project_id)" VERSION_NAME="v${MODEL_OUTPUTS_DIR}_${TRIAL}" INPUT_BUCKET="gs://${PROJECT_ID}-bucket" MODEL_OUTPUTS_PATH="${INPUT_BUCKET}/${USER}/${MODEL_DIR}/${MODEL_OUTPUTS_DIR}/${TRIAL}/export/export" -MODEL_PATH="$(gsutil ls ${MODEL_OUTPUTS_PATH} | tail -n1)" +MODEL_PATH="$(gcloud storage ls ${MODEL_OUTPUTS_PATH} | tail -n1)" gcloud ai-platform models create "${MODEL_NAME}" \ --regions us-east1 \ diff --git a/examples/cloudml-energy-price-forecasting/README.md b/examples/cloudml-energy-price-forecasting/README.md index 2ec8dd2e5b1..9061796b809 100644 --- a/examples/cloudml-energy-price-forecasting/README.md +++ b/examples/cloudml-energy-price-forecasting/README.md @@ -30,7 +30,7 @@ The code takes in raw data from BigQuery, transforms and prepares the data, uses JOB_NAME = ml_job$(date +%Y%m%d%H%M%S) JOB_FOLDER = MLEngine/${JOB_NAME} BUCKET_NAME = energyforecast -MODEL_PATH = $(gsutil ls gs://${BUCKET_NAME}/${JOB_FOLDER}/export/estimator/ | tail -1) +MODEL_PATH = $(gcloud storage ls gs://${BUCKET_NAME}/${JOB_FOLDER}/export/estimator/ | tail -1) MODEL_NAME = forecaster_model MODEL_VERSION = version_1 TEST_DATA = data/csv/DataTest.csv diff --git a/examples/cloudml-fraud-detection/README.MD b/examples/cloudml-fraud-detection/README.MD index 86c7f6ca015..d0471465f04 100644 --- a/examples/cloudml-fraud-detection/README.MD +++ b/examples/cloudml-fraud-detection/README.MD @@ -131,7 +131,7 @@ Different versions of a same model can be stored in the ML-engine. The ML-engine MODEL_NAME=fraud_detection MODEL_VERSION=v_$(date +"%Y%m%d_%H%M%S") TRIAL_NUMBER=1 -MODEL_SAVED_NAME=$(gsutil ls ${TRAINING_OUTPUT_DIR}/trials/${TRIAL_NUMBER}/export/exporter/ | tail -1) +MODEL_SAVED_NAME=$(gcloud storage ls ${TRAINING_OUTPUT_DIR}/trials/${TRIAL_NUMBER}/export/exporter/ | tail -1) gcloud ml-engine models create $MODEL_NAME \ --regions us-central1 gcloud ml-engine versions create $MODEL_VERSION \ @@ -162,11 +162,11 @@ Assess model's performances on out-of-sample data. Compute precision-recall curv ``` ANALYSIS_OUTPUT_PATH=. 
mkdir ${ANALYSIS_OUTPUT_PATH}/labels -gsutil cp gs://${BUCKET_ID}/${DATAFLOW_OUTPUT_DIR}split_data/split_data_TEST_labels.txt* labels/ +gcloud storage cp gs://${BUCKET_ID}/${DATAFLOW_OUTPUT_DIR}split_data/split_data_TEST_labels.txt* labels/ cat ${ANALYSIS_OUTPUT_PATH}/labels/* > ${ANALYSIS_OUTPUT_PATH}/labels.txt mkdir ${ANALYSIS_OUTPUT_PATH}/predictions -gsutil cp ${PREDICTIONS_OUTPUT_PATH}/prediction.results* ./predictions/ +gcloud storage cp ${PREDICTIONS_OUTPUT_PATH}/prediction.results* ./predictions/ cat ${ANALYSIS_OUTPUT_PATH}/predictions/* > ${ANALYSIS_OUTPUT_PATH}/predictions.txt ``` diff --git a/examples/cloudml-sentiment-analysis/README.md b/examples/cloudml-sentiment-analysis/README.md index cf5fe82b36a..2c63e439b46 100644 --- a/examples/cloudml-sentiment-analysis/README.md +++ b/examples/cloudml-sentiment-analysis/README.md @@ -49,7 +49,7 @@ gcloud config set project $PROJECT_ID ### Move data to GCP. ```sh -gsutil -m cp -r $DATA_PATH/aclImdb $BUCKET_PATH +gcloud storage cp --recursive $DATA_PATH/aclImdb $BUCKET_PATH GCP_INPUT_DATA=$BUCKET_PATH/aclImdb/train ``` @@ -121,12 +121,12 @@ tensorboard --logdir=$TRAINING_OUTPUT_DIR **With HP tuning:** ```sh TRIAL_NUMBER='' -MODEL_SAVED_NAME=$(gsutil ls ${TRAINING_OUTPUT_DIR}/${TRIAL_NUMBER}/export/exporter/ | tail -1) +MODEL_SAVED_NAME=$(gcloud storage ls ${TRAINING_OUTPUT_DIR}/${TRIAL_NUMBER}/export/exporter/ | tail -1) ``` **Without HP tuning:** ```sh -MODEL_SAVED_NAME=$(gsutil ls ${TRAINING_OUTPUT_DIR}/export/exporter/ | tail -1) +MODEL_SAVED_NAME=$(gcloud storage ls ${TRAINING_OUTPUT_DIR}/export/exporter/ | tail -1) ``` ```sh @@ -159,7 +159,7 @@ gcloud ml-engine predict \ ```sh PREDICTION_DATA_PATH=${BUCKET_PATH}/prediction_data -gsutil -m cp -r ${DATA_PATH}/aclImdb/test/ $PREDICTION_DATA_PATH +gcloud storage cp --recursive ${DATA_PATH}/aclImdb/test/ $PREDICTION_DATA_PATH ``` ### Make batch predictions with GCP. diff --git a/tools/agile-machine-learning-api/update.sh b/tools/agile-machine-learning-api/update.sh index a56b0f0e0db..bf9681297ba 100644 --- a/tools/agile-machine-learning-api/update.sh +++ b/tools/agile-machine-learning-api/update.sh @@ -22,6 +22,6 @@ TRAINER_PACKAGE='trainer-0.0.tar.gz' cd codes/ python setup.py sdist export GOOGLE_APPLICATION_CREDENTIALS=$service_account_json_key -gsutil cp -r dist/$TRAINER_PACKAGE $bucket_name/$TRAINER_PACKAGE +gcloud storage cp --recursive dist/$TRAINER_PACKAGE $bucket_name/$TRAINER_PACKAGE echo "INFO: Please make sure that train.yaml and config.yaml have same name for trainer file" diff --git a/tools/airpiler/README.md b/tools/airpiler/README.md index 2654ccdd6b0..ba3e79cc143 100644 --- a/tools/airpiler/README.md +++ b/tools/airpiler/README.md @@ -128,7 +128,7 @@ example_dag: Now let's copy that to our gcs bucket into the **data** folder: ```bash -> gsutil cp dag-factory.yaml gs://us-central1-test-692672b8-bucket/data +> gcloud storage cp dag-factory.yaml gs://us-central1-test-692672b8-bucket/data Copying file://dag-factory.yaml [Content-Type=application/octet-stream]... / [1 files][ 386.0 B/ 386.0 B] Operation completed over 1 objects/386.0 B. 
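As a quick sanity check (a suggested step, not part of the upstream README), the upload can be verified with the same bucket used in the example above:

```bash
# List the uploaded YAML in the environment's data folder.
gcloud storage ls gs://us-central1-test-692672b8-bucket/data/dag-factory.yaml
```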
@@ -191,7 +191,7 @@ Run the following to get your GCS Bucket gcloud composer environments describe --location us-central1 --format="get(config.dagGcsPrefix)" Run the following to upload the dag-factory yaml file to the bucket: -gsutil cp use-case2.yaml gs:///data +gcloud storage cp use-case2.yaml gs:///data Then run the following to upload the airflow dag python script to your composer environment: gcloud composer environments storage dags import --environment --location us-central1 --source use-case2-dag.py @@ -205,7 +205,7 @@ Then visit the URL and trigger your DAG Then following the instructions we can run the following to upload the files: ```bash -gsutil cp use-case2.yaml gs://us-central1-test-692672b8-bucket/data +gcloud storage cp use-case2.yaml gs://us-central1-test-692672b8-bucket/data gcloud composer environments storage dags import --environment test --location us-central1 --source use-case2-dag.py ``` @@ -269,7 +269,7 @@ digraph USE_CASE2_TG_DAG { All the logs are written to the GCS bucket and you can check them out by putting all the above information together ([Log folder directory structure](https://cloud.google.com/composer/docs/concepts/logs#log_folder_directory_structure) describes the format): ```bash -> gsutil cat gs://us-central1-test-692672b8-bucket/logs/example_dag/task_3/2021-05-12T15:19:58+00:00/1.log +> gcloud storage cat gs://us-central1-test-692672b8-bucket/logs/example_dag/task_3/2021-05-12T15:19:58+00:00/1.log [2021-05-12 15:21:01,602] {taskinstance.py:671} INFO - Dependencies all met for @-@{"workflow": "example_dag", "task-id": "task_3", "execution-date": "2021-05-12T15:19:58+00:00"} [2021-05-12 15:21:01,733] {taskinstance.py:671} INFO - Dependencies all met for @-@{"workflow": "example_dag", "task-id": "task_3", "execution-date": "2021-05-12T15:19:58+00:00"} [2021-05-12 15:21:01,734] {taskinstance.py:881} INFO - @@ -311,4 +311,3 @@ https://tddbc3f0ad77184ffp-tp.appspot.com ``` Upon visiting the above page and authenticating using IAP you will see a list of the available DAGS and also check out the logs as well. - diff --git a/tools/airpiler/airpiler.py b/tools/airpiler/airpiler.py index 3e9620117aa..8a27ae8ffe4 100644 --- a/tools/airpiler/airpiler.py +++ b/tools/airpiler/airpiler.py @@ -315,8 +315,8 @@ def parse_jil(input_file): gcloud_gcs_command = ( f"gcloud composer environments describe {ENV_TEMPL}" f" --location us-central1 --format=\"get(config.dagGcsPrefix)\"") - gsutil_cp_command = ( - f"gsutil cp {dag_factory_yaml_file} gs://{ENV_TEMPL}/data") + gcloud_storage_cp_command = ( + f"gcloud storage cp {dag_factory_yaml_file} gs://{ENV_TEMPL}/data") gcloud_upload_command = ( f"gcloud composer environments storage dags import --environment" @@ -327,7 +327,7 @@ def parse_jil(input_file): f"Run the following to get your GCS Bucket \n" f"{gcloud_gcs_command}\n\n" f"Run the following to upload the dag-factory yaml file to the " - f"bucket:\n{gsutil_cp_command}\n\n" + f"bucket:\n{gcloud_storage_cp_command}\n\n" f"Then run the following to upload the airflow dag python" f" script to your composer environment: \n" f"{gcloud_upload_command}\n\n" diff --git a/tools/anthosbm-ansible-module/roles/bmctl-tool/tasks/main.yml b/tools/anthosbm-ansible-module/roles/bmctl-tool/tasks/main.yml index 59bcb6b13f8..831eef43480 100644 --- a/tools/anthosbm-ansible-module/roles/bmctl-tool/tasks/main.yml +++ b/tools/anthosbm-ansible-module/roles/bmctl-tool/tasks/main.yml @@ -34,7 +34,7 @@ - name: Download bmctl shell: - cmd: gsutil cp {{ bmctl_download_url }} . 
+ cmd: gcloud storage cp {{ bmctl_download_url }} . - name: Make bmctl executable ansible.builtin.file: diff --git a/tools/anthosvmware-ansible-module/README.md b/tools/anthosvmware-ansible-module/README.md index 54d80d73c1a..6ad576ec03a 100644 --- a/tools/anthosvmware-ansible-module/README.md +++ b/tools/anthosvmware-ansible-module/README.md @@ -34,7 +34,7 @@ Consider these assumptions when you wonder how certain tasks are implemented or ## Prerequisites - Ansible -- Authenticate `gcloud` on jumphost with `gcloud auth login` so that Ansible can run the `gsutil` command on the jumphost +- Authenticate `gcloud` on jumphost with `gcloud auth login` so that Ansible can run the `gcloud storage` command on the jumphost - vSphere: Create VM-Folder for Anthos VMs - vSphere: Create folder on vSAN for Anthos Admin Cluster (if using vSAN). Consider using value of Ansible variable `{{ ac_name }}` as the vSAN folder name to be consistent. diff --git a/tools/anthosvmware-ansible-module/roles/adminws/tasks/install.yml b/tools/anthosvmware-ansible-module/roles/adminws/tasks/install.yml index 6063ef558d9..3d4be11af5e 100644 --- a/tools/anthosvmware-ansible-module/roles/adminws/tasks/install.yml +++ b/tools/anthosvmware-ansible-module/roles/adminws/tasks/install.yml @@ -44,7 +44,8 @@ ansible.builtin.command: # noqa 204 301 chdir: "{{ yamldestpath }}" argv: - - gsutil + - gcloud + - storage - cp - gs://gke-on-prem-release/gkeadm/{{ glb_anthos_version }}/linux/gkeadm - "{{ yamldestpath }}/gkeadm-{{ glb_anthos_version }}" @@ -63,7 +64,8 @@ ansible.builtin.command: # noqa 204 301 chdir: "{{ yamldestpath }}" argv: - - gsutil + - gcloud + - storage - cp - gs://gke-on-prem-release/gkeadm/{{ glb_anthos_version }}/linux/gkeadm.1.sig - "{{ yamldestpath }}/gkeadm-{{ glb_anthos_version }}.1.sig" diff --git a/tools/anthosvmware-ansible-module/roles/ais/tasks/upload.yml b/tools/anthosvmware-ansible-module/roles/ais/tasks/upload.yml index 122e53995e5..a7cccb0aab1 100644 --- a/tools/anthosvmware-ansible-module/roles/ais/tasks/upload.yml +++ b/tools/anthosvmware-ansible-module/roles/ais/tasks/upload.yml @@ -32,7 +32,8 @@ - name: "[ais] Upload login config file" ansible.builtin.command: argv: - - gsutil + - gcloud + - storage - cp - "{{ [yamldestpath, ais_login_config_file] | join('/') }}" - "{{ ais_gcsbucket }}/{{ uc_name if uc_name is defined else ac_name }}/{{ ais_login_config_file }}" diff --git a/tools/anthosvmware-ansible-module/roles/upload_artifactory/tasks/install.yml b/tools/anthosvmware-ansible-module/roles/upload_artifactory/tasks/install.yml index 4dfbfcfa5bf..4bd23476772 100644 --- a/tools/anthosvmware-ansible-module/roles/upload_artifactory/tasks/install.yml +++ b/tools/anthosvmware-ansible-module/roles/upload_artifactory/tasks/install.yml @@ -141,7 +141,8 @@ - name: "[upload_artifactory] Download files from GCS" ansible.builtin.command: # noqa 305 301 no-changed-when argv: - - gsutil + - gcloud + - storage - cp - "{{ item.item.src }}" - "{{ workdir }}/{{ item.item.file }}" diff --git a/tools/anthosvmware-ansible-module/roles/usercluster/README.md b/tools/anthosvmware-ansible-module/roles/usercluster/README.md index 2f4a0e0a05b..f20f8938ee4 100644 --- a/tools/anthosvmware-ansible-module/roles/usercluster/README.md +++ b/tools/anthosvmware-ansible-module/roles/usercluster/README.md @@ -164,12 +164,12 @@ ASM Service Mesh version, revision, and network ID information. 
The available `asmcli` versions for use can be found by using the below command: ``` -gsutil ls gs://csm-artifacts/asm/ +gcloud storage ls gs://csm-artifacts/asm/ ``` You can filter for a specific revision with `grep`. For example: ``` -gsutil ls gs://csm-artifacts/asm/ | grep 1.14 +gcloud storage ls gs://csm-artifacts/asm/ | grep 1.14 ``` > **Note:** `asm_network_id` is used for configuring a multi-cluster mesh. It *must be unique* for proper diff --git a/tools/asset-inventory/README.md b/tools/asset-inventory/README.md index 25bb65d3fb4..952703c937c 100644 --- a/tools/asset-inventory/README.md +++ b/tools/asset-inventory/README.md @@ -81,7 +81,7 @@ It's suggested to create a new project to hold the asset inventory resources. Es ``` export BUCKET=gs://${ORGANIZATION_ID}-assets - gsutil mb $BUCKET + gcloud storage buckets create $BUCKET ``` 1. Create the dataset to hold the resource tables in BigQuey. diff --git a/tools/asset-inventory/asset_inventory/export.py b/tools/asset-inventory/asset_inventory/export.py index 2cca929546a..48cdb924453 100755 --- a/tools/asset-inventory/asset_inventory/export.py +++ b/tools/asset-inventory/asset_inventory/export.py @@ -139,8 +139,8 @@ def add_argparse_args(ap, required=False): ' the project that you enabled the API on, then you must also grant the' ' "service-@gcp-sa-cloudasset.iam.gserviceaccount.com" account' ' objectAdmin privileges to the bucket:\n' - 'gsutil iam ch serviceAccount:service-@gcp-sa-cloudasset.iam.gserviceaccount.com:objectAdmin ' - 'gs://\n' + 'gcloud storage buckets add-iam-policy-binding gs:// ' + '--member=serviceAccount:service-@gcp-sa-cloudasset.iam.gserviceaccount.com --role=roles/storage.objectAdmin\n' '\n\n') ap.add_argument( '--parent', diff --git a/tools/bigdata-generator/README.md b/tools/bigdata-generator/README.md index 162e9e1d3c3..9b827a902a0 100644 --- a/tools/bigdata-generator/README.md +++ b/tools/bigdata-generator/README.md @@ -82,7 +82,7 @@ When running the program using Dataflow, the config file needs to be stored in G Upload the config file to GCS, for example: ``` CONFIG_FILE_PATH=gs://${TMP_BUCKET}/config.json -gsutil cp config_file_samples/sales_sample_bigquery.json $CONFIG_FILE_PATH +gcloud storage cp config_file_samples/sales_sample_bigquery.json $CONFIG_FILE_PATH ``` submitting the Dataflow job @@ -107,4 +107,3 @@ This project was developed using a GCP sandbox that has policies that make the c Given these restrictions, a custom Dataflow container is being used (defined by the [Dockerfile](Dockerfile)) that installs the dependencies. The Dataflow job is submitted to run inside a VPC with no public IP address. Feel free to run the data generator process as best fits your needs. 
- diff --git a/tools/bigquery-s3tobq/README.md b/tools/bigquery-s3tobq/README.md index 31be7e977be..4f872058d61 100644 --- a/tools/bigquery-s3tobq/README.md +++ b/tools/bigquery-s3tobq/README.md @@ -81,7 +81,7 @@ $gcloud projects add-iam-policy-binding [PROJECT_ID] \ --member=serviceAccount:service-[PROJECT_NUMBER]@gcp-sa-pubsub.iam.gserviceaccount.com \ --role=roles/iam.serviceAccountTokenCreator -SERVICE_ACCOUNT="$(gsutil kms serviceaccount -p [PROJECT_ID])" +SERVICE_ACCOUNT="$(gcloud storage service-agent --project [PROJECT_ID])" $gcloud projects add-iam-policy-binding [PROJECT_ID] \ --member="serviceAccount:${SERVICE_ACCOUNT}" \ diff --git a/tools/cloud-composer-backup-restore/composer_br/app.py b/tools/cloud-composer-backup-restore/composer_br/app.py index e5e02818f59..dfb76206e75 100644 --- a/tools/cloud-composer-backup-restore/composer_br/app.py +++ b/tools/cloud-composer-backup-restore/composer_br/app.py @@ -60,7 +60,7 @@ def _upload_blob(bucket_name: str, source_file_name: str, def _copy_gcs_folder(bucket_from: str, bucket_to: str) -> None: - command_utils.sh(['gsutil', '-m', 'rsync', '-r', bucket_from, bucket_to]) + command_utils.sh(['gcloud', 'storage', 'rsync', '--recursive', bucket_from, bucket_to]) def _check_cli_depdendencies() -> None: diff --git a/tools/cloud-composer-backup-restore/composer_br/db_utils.py b/tools/cloud-composer-backup-restore/composer_br/db_utils.py index 9416645f985..3518739ee39 100644 --- a/tools/cloud-composer-backup-restore/composer_br/db_utils.py +++ b/tools/cloud-composer-backup-restore/composer_br/db_utils.py @@ -78,7 +78,7 @@ def import_db(username: str, password: str, host: str, port: str, database: str, """ Extract a SQL filefrom a GCS path and imports it into postgres """ - command_utils.sh(['gsutil', 'cp', gcs_sql_file_path, '/tmp/']) + command_utils.sh(['gcloud', 'storage', 'cp', gcs_sql_file_path, '/tmp/']) split_path = gcs_sql_file_path.split('/') diff --git a/tools/cloud-composer-dag-validation/cloudbuild.yaml b/tools/cloud-composer-dag-validation/cloudbuild.yaml index 5da8765b283..f2ce06c332a 100644 --- a/tools/cloud-composer-dag-validation/cloudbuild.yaml +++ b/tools/cloud-composer-dag-validation/cloudbuild.yaml @@ -24,11 +24,11 @@ steps: dir: /workspace - id: deploy-dags dir: tools/cloud-composer-dag-validation - name: 'gcr.io/cloud-builders/gsutil' + name: 'gcr.io/cloud-builders/gcloud' entrypoint: bash args: - '-c' - | - gsutil -m cp -r 'dags/' '${_COMPOSER_BUCKET}/dags_export' + gcloud storage cp --recursive 'dags/' '${_COMPOSER_BUCKET}/dags_export' options: logging: CLOUD_LOGGING_ONLY \ No newline at end of file diff --git a/tools/cloud-composer-environment-rotator/rotate-composer.sh b/tools/cloud-composer-environment-rotator/rotate-composer.sh index 9b313b0cd91..1269d0e8825 100755 --- a/tools/cloud-composer-environment-rotator/rotate-composer.sh +++ b/tools/cloud-composer-environment-rotator/rotate-composer.sh @@ -91,7 +91,7 @@ echo "-------------------------------" echo "... Uploading pause_all DAG ..." echo "-------------------------------" -gsutil cp dags/pause_all_dags.py "${OLD_DAG_FOLDER}" +gcloud storage cp dags/pause_all_dags.py "${OLD_DAG_FOLDER}" echo "---------------------------------------------------" echo "... Waiting for DAG to be synced to environment ..." @@ -122,7 +122,7 @@ echo "-------------------------------" echo "... Removing pause_all DAG ..." 
echo "-------------------------------" -gsutil rm "${OLD_DAG_FOLDER}/pause_all_dags.py" +gcloud storage rm "${OLD_DAG_FOLDER}/pause_all_dags.py" echo "------------------------------------------------------" echo "... Loading snapshot into new Composer environment ..." diff --git a/tools/cloud-composer-migration-complexity-assessment/dags/migration_assessment.py b/tools/cloud-composer-migration-complexity-assessment/dags/migration_assessment.py index 5128aa6b754..8b805997680 100644 --- a/tools/cloud-composer-migration-complexity-assessment/dags/migration_assessment.py +++ b/tools/cloud-composer-migration-complexity-assessment/dags/migration_assessment.py @@ -300,7 +300,7 @@ def full_migration_complexity(**context): rm -rf upgrade-check mkdir -p upgrade-check airflow upgrade_check > upgrade-check/results - gsutil cp -r upgrade-check gs://{bucket}/{root_path}/ + gcloud storage cp --recursive upgrade-check gs://{bucket}/{root_path}/ """.format( bucket=GCS_BUCKET, root_path=GCS_ROOT_PATH ) @@ -319,8 +319,8 @@ def full_migration_complexity(**context): mkdir -p v1-to-v2-report cp {root_dir}/airflow-v1-to-v2-migration/migration_rules/rules.csv v1-to-v2-report/rules.csv python3 {root_dir}/airflow-v1-to-v2-migration/run_mig.py --input_dag_folder={root_dir} --output_dag_folder=v1-to-v2-report --rules_file={root_dir}/airflow-v1-to-v2-migration/migration_rules/rules.csv - gsutil cp -r v1-to-v2-report gs://{gcs_bucket}/{root_path}/ - gsutil rm gs://{gcs_bucket}/{root_path}/v1-to-v2-report/*.py + gcloud storage cp --recursive v1-to-v2-report gs://{gcs_bucket}/{root_path}/ + gcloud storage rm gs://{gcs_bucket}/{root_path}/v1-to-v2-report/*.py """.format( root_dir=AIRFLOW_HOME_DIR, gcs_bucket=GCS_BUCKET, root_path=GCS_ROOT_PATH ) diff --git a/tools/cloud-composer-migration-terraform-generator/terraform-generate.sh b/tools/cloud-composer-migration-terraform-generator/terraform-generate.sh index 243f48df039..db0292653e4 100755 --- a/tools/cloud-composer-migration-terraform-generator/terraform-generate.sh +++ b/tools/cloud-composer-migration-terraform-generator/terraform-generate.sh @@ -53,8 +53,8 @@ airflow_config=$(gcloud composer environments describe "$environment_name" --loc # Determine Min/Max Workers dag_gcs_prefix=$(echo "$airflow_config" | jq .config.dagGcsPrefix | tr -d '"' | cut -d "/" -f3) -gsutil cp gs://"$dag_gcs_prefix"/airflow.cfg . -worker_concurrency=$(gsutil cat gs://"$dag_gcs_prefix"/airflow.cfg | grep worker_concurrency | cut -d "=" -f2) +gcloud storage cp gs://"$dag_gcs_prefix"/airflow.cfg . 
+worker_concurrency=$(gcloud storage cat gs://"$dag_gcs_prefix"/airflow.cfg | grep worker_concurrency | cut -d "=" -f2) min_workers=$((total/worker_concurrency)) echo "min_workers=$((total/worker_concurrency))" >> $existing_config diff --git a/tools/cloud-composer-stress-testing/cloud-composer-dag-generator/dag_generator/__main__.py b/tools/cloud-composer-stress-testing/cloud-composer-dag-generator/dag_generator/__main__.py index e94bc0d1855..6e0d76085ae 100644 --- a/tools/cloud-composer-stress-testing/cloud-composer-dag-generator/dag_generator/__main__.py +++ b/tools/cloud-composer-stress-testing/cloud-composer-dag-generator/dag_generator/__main__.py @@ -20,7 +20,7 @@ a) modify config.json file to adjust size and type of tasks for the workload b) run $python main.py c) move dags folder generated to the dag buckets in composer like: - gsutil cp -r out gs://BUCKET_NAME/dags + gcloud storage cp --recursive out gs://BUCKET_NAME/dags NOTE: the "number_of_operators_defined" variable in the configuration file (config.json) allows to create up to 5 differents kind of task, none has complex functionallity: diff --git a/tools/cloud-composer-stress-testing/cloud-composer-workload-simulator/local_utils/helper_functions.py b/tools/cloud-composer-stress-testing/cloud-composer-workload-simulator/local_utils/helper_functions.py index d7db68b2e39..1580f09a0f5 100644 --- a/tools/cloud-composer-stress-testing/cloud-composer-workload-simulator/local_utils/helper_functions.py +++ b/tools/cloud-composer-stress-testing/cloud-composer-workload-simulator/local_utils/helper_functions.py @@ -50,7 +50,7 @@ def upload_directory(source_folder, target_gcs_path): Defaults to None (root of the bucket). """ - command = ["gsutil", "-m", "cp", "-r", source_folder, target_gcs_path] + command = ["gcloud", "storage", "cp", "--recursive", source_folder, target_gcs_path] process = subprocess.Popen( command, diff --git a/tools/cuds-prioritized-attribution/README.md b/tools/cuds-prioritized-attribution/README.md index 55151d12a41..ea242c4d265 100644 --- a/tools/cuds-prioritized-attribution/README.md +++ b/tools/cuds-prioritized-attribution/README.md @@ -206,7 +206,7 @@ message) 1. Create a GCS bucket. This is a temporary file store for Composer. ``` - gsutil mb -l [LOCATION] gs://$PROJECT-cud-correction-commitment-data + gcloud storage buckets create --location [LOCATION] gs://$PROJECT-cud-correction-commitment-data ``` where `[LOCATION]` is the region of your billing export data, either us, eu, or asia. @@ -354,7 +354,7 @@ success message. 1. `BigQuery fails to execute the query due to different dataset regions` All of the datasets used in this solution must reside in the same region. Your exported billing data most likely resides in your company's location (if you are a European company, it is probably in the EU. If you are in the US, your dataset is probably in the US.) To resolve this, verify that the dataset containing the `commitments_table` and the `cud_corrected_dataset` are in the same region as your billing data. You can update the location of a dataset following these [instructions](https://cloud.google.com/bigquery/docs/locations#moving-data). -If your billing datasets are all in the same region, ensure that your GCS buckets are also in the same region. You specified this on creation using the `l` flag when executing `gsutil mb`. +If your billing datasets are all in the same region, ensure that your GCS buckets are also in the same region. 
You specified this on creation using the `l` flag when executing `gcloud storage buckets create`. 1. `Other "Invalid Argument"` diff --git a/tools/custom-organization-policy-library/build/custom-constraints/cloudkms/cloudkmsAllowedAlgorithms.yaml b/tools/custom-organization-policy-library/build/custom-constraints/cloudkms/cloudkmsAllowedAlgorithms.yaml index d6ffc655410..924c5eeed3a 100755 --- a/tools/custom-organization-policy-library/build/custom-constraints/cloudkms/cloudkmsAllowedAlgorithms.yaml +++ b/tools/custom-organization-policy-library/build/custom-constraints/cloudkms/cloudkmsAllowedAlgorithms.yaml @@ -2,7 +2,20 @@ #@ constraint = build_constraint("cloudkmsAllowedAlgorithms") #@ def condition(algorithms): -#@ return 'resource.versionTemplate.algorithm in ' + str(algorithms) + " == false" +#@ prefix = "" +#@ if "GOOGLE_SYMMETRIC_ENCRYPTION" in algorithms: +#@ prefix = "has(resource.versionTemplate.algorithm) && " +#@ end +#@ lines = [prefix + "resource.versionTemplate.algorithm in ["] +#@ for i in range(len(algorithms)): +#@ line = " '" + algorithms[i] + "'" +#@ if i < len(algorithms) - 1: +#@ line += "," +#@ end +#@ lines.append(line) +#@ end +#@ lines.append("] == false") +#@ return "\n".join(lines) #@ end #@ if constraint.to_generate(): diff --git a/tools/custom-organization-policy-library/samples/gcloud/constraints/cloudkms/cloudkmsAllowedAlgorithms.yaml b/tools/custom-organization-policy-library/samples/gcloud/constraints/cloudkms/cloudkmsAllowedAlgorithms.yaml index 540cad1a46a..e2d0aa8c965 100755 --- a/tools/custom-organization-policy-library/samples/gcloud/constraints/cloudkms/cloudkmsAllowedAlgorithms.yaml +++ b/tools/custom-organization-policy-library/samples/gcloud/constraints/cloudkms/cloudkmsAllowedAlgorithms.yaml @@ -4,7 +4,17 @@ resourceTypes: methodTypes: - CREATE - UPDATE -condition: resource.versionTemplate.algorithm in ["GOOGLE_SYMMETRIC_ENCRYPTION"] == false +condition: |- + has(resource.versionTemplate.algorithm) && resource.versionTemplate.algorithm in [ + 'GOOGLE_SYMMETRIC_ENCRYPTION', + 'RSA_SIGN_PSS_2048_SHA256', + 'RSA_SIGN_PSS_3072_SHA256', + 'RSA_SIGN_PSS_4096_SHA256', + 'RSA_DECRYPT_OAEP_2048_SHA256', + 'RSA_DECRYPT_OAEP_4096_SHA256', + 'RSA_DECRYPT_OAEP_2048_SHA1', + 'RSA_DECRYPT_OAEP_4096_SHA1' + ] == false actionType: DENY displayName: Require Cloud KMS keys algorithm to be configured correctly description: Ensure the algorithm for Cloud KMS keys is configured correctly diff --git a/tools/custom-organization-policy-library/samples/tf/custom-constraints/custom.cloudkmsAllowedAlgorithms.yaml b/tools/custom-organization-policy-library/samples/tf/custom-constraints/custom.cloudkmsAllowedAlgorithms.yaml index 13ced1cb86e..ca007427d98 100644 --- a/tools/custom-organization-policy-library/samples/tf/custom-constraints/custom.cloudkmsAllowedAlgorithms.yaml +++ b/tools/custom-organization-policy-library/samples/tf/custom-constraints/custom.cloudkmsAllowedAlgorithms.yaml @@ -1,7 +1,16 @@ custom.cloudkmsAllowedAlgorithms: action_type: DENY condition: |- - resource.versionTemplate.algorithm in ["GOOGLE_SYMMETRIC_ENCRYPTION"] == false + has(resource.versionTemplate.algorithm) && resource.versionTemplate.algorithm in [ + 'GOOGLE_SYMMETRIC_ENCRYPTION', + 'RSA_SIGN_PSS_2048_SHA256', + 'RSA_SIGN_PSS_3072_SHA256', + 'RSA_SIGN_PSS_4096_SHA256', + 'RSA_DECRYPT_OAEP_2048_SHA256', + 'RSA_DECRYPT_OAEP_4096_SHA256', + 'RSA_DECRYPT_OAEP_2048_SHA1', + 'RSA_DECRYPT_OAEP_4096_SHA1' + ] == false description: Ensure the algorithm for Cloud KMS keys is configured correctly 
display_name: Require Cloud KMS keys algorithm to be configured correctly method_types: diff --git a/tools/custom-organization-policy-library/values.yaml b/tools/custom-organization-policy-library/values.yaml index 3cebe264726..28e2aa7c5fd 100644 --- a/tools/custom-organization-policy-library/values.yaml +++ b/tools/custom-organization-policy-library/values.yaml @@ -39,6 +39,13 @@ cloudkms: params: algorithms: - "GOOGLE_SYMMETRIC_ENCRYPTION" + - "RSA_SIGN_PSS_2048_SHA256" + - "RSA_SIGN_PSS_3072_SHA256" + - "RSA_SIGN_PSS_4096_SHA256" + - "RSA_DECRYPT_OAEP_2048_SHA256" + - "RSA_DECRYPT_OAEP_4096_SHA256" + - "RSA_DECRYPT_OAEP_2048_SHA1" + - "RSA_DECRYPT_OAEP_4096_SHA1" cloudkmsAllowedProtectionLevel: params: protection_levels: diff --git a/tools/dataproc-edge-node/1_create-image.sh b/tools/dataproc-edge-node/1_create-image.sh index 03d01abfb2f..fb9c2ba7459 100755 --- a/tools/dataproc-edge-node/1_create-image.sh +++ b/tools/dataproc-edge-node/1_create-image.sh @@ -38,15 +38,15 @@ set -e . image_env -gsutil ls "gs://${BUCKET}" >/dev/null 2>&1 +gcloud storage ls "gs://${BUCKET}" >/dev/null 2>&1 e=$? if [ $e -ne 0 ]; then - gsutil mb -p "${PROJECT}" -c "${STORAGE_CLASS}" "gs://${BUCKET}" + gcloud storage buckets create "gs://${BUCKET}" --project "${PROJECT}" --default-storage-class "${STORAGE_CLASS}" fi FILES=$(ls util) for f in $FILES; do - gsutil -m cp "util/${f}" "gs://${BUCKET}/" + gcloud storage cp "util/${f}" "gs://${BUCKET}/" done if [ ! -z "$DATAPROC_VERSION" ]; then diff --git a/tools/dataproc-edge-node/util/edge-node-startup-script.sh b/tools/dataproc-edge-node/util/edge-node-startup-script.sh index aaf29206a5d..5f5d60aa7ed 100644 --- a/tools/dataproc-edge-node/util/edge-node-startup-script.sh +++ b/tools/dataproc-edge-node/util/edge-node-startup-script.sh @@ -14,5 +14,5 @@ # limitations under the License. BUCKET=$(/usr/share/google/get_metadata_value attributes/config-bucket) LOG=/var/log/edgenode-startup.log -gsutil cp "gs://${BUCKET}/setup_edge_node.sh" "/usr/share/google/" +gcloud storage cp "gs://${BUCKET}/setup_edge_node.sh" "/usr/share/google/" nohup bash /usr/share/google/setup_edge_node.sh >"${LOG}" 2>&1 & diff --git a/tools/dataproc-edge-node/util/setup_edge_node.sh b/tools/dataproc-edge-node/util/setup_edge_node.sh index a550d237c52..d41d9acce46 100755 --- a/tools/dataproc-edge-node/util/setup_edge_node.sh +++ b/tools/dataproc-edge-node/util/setup_edge_node.sh @@ -53,13 +53,13 @@ cd /usr/share/google BUCKET=$(/usr/share/google/get_metadata_value attributes/config-bucket) FILES="config-files configure.sh create-templates.sh disable-services.sh" for f in $FILES; do - gsutil cp "gs://${BUCKET}/${f}" . + gcloud storage cp "gs://${BUCKET}/${f}" . done chmod +x configure.sh create-templates.sh disable-services.sh # Install configuration startup service SERVICE_URI="gs://${BUCKET}/google-edgenode-configure.service" -gsutil cp "${SERVICE_URI}" /etc/systemd/system/ +gcloud storage cp "${SERVICE_URI}" /etc/systemd/system/ systemctl daemon-reload systemctl enable google-edgenode-configure diff --git a/tools/gcpviz/gitlab-ci.yml b/tools/gcpviz/gitlab-ci.yml index ae10a8c07d6..376fb2989c1 100644 --- a/tools/gcpviz/gitlab-ci.yml +++ b/tools/gcpviz/gitlab-ci.yml @@ -30,7 +30,7 @@ export_cai: stage: export_cai script: - /gcpviz/wait_for_export.sh gcloud asset export --output-path=gs://$CAI_BUCKET_NAME/resource_inventory.json --content-type=resource --organization=$ORGANIZATION_ID - - gsutil cp://$CAI_BUCKET_NAME/resource_inventory.json . 
+    - gcloud storage cp gs://$CAI_BUCKET_NAME/resource_inventory.json .
 
 generate_graphs:
   stage: generate_graphs
diff --git a/tools/gcs2bq/run.sh b/tools/gcs2bq/run.sh
index db453628814..a0ef3f441c7 100644
--- a/tools/gcs2bq/run.sh
+++ b/tools/gcs2bq/run.sh
@@ -50,15 +50,15 @@ fi
 # shellcheck disable=SC2086
 ./gcs2bq $GCS2BQ_FLAGS || error "Export failed!" 2
 
-gsutil mb -p "${GCS2BQ_PROJECT}" -c standard -l "${GCS2BQ_LOCATION}" -b on "gs://${GCS2BQ_BUCKET}" || echo "Info: Storage bucket already exists: ${GCS2BQ_BUCKET}"
+gcloud storage buckets create "gs://${GCS2BQ_BUCKET}" --project="${GCS2BQ_PROJECT}" --default-storage-class=standard --location="${GCS2BQ_LOCATION}" --uniform-bucket-level-access || echo "Info: Storage bucket already exists: ${GCS2BQ_BUCKET}"
 
-gsutil cp "${GCS2BQ_FILE}" "gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME}" || error "Failed copying ${GCS2BQ_FILE} to gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME}!" 3
+gcloud storage cp "${GCS2BQ_FILE}" "gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME}" || error "Failed copying ${GCS2BQ_FILE} to gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME}!" 3
 
 bq mk --project_id="${GCS2BQ_PROJECT}" --location="${GCS2BQ_LOCATION}" "${GCS2BQ_DATASET}" || echo "Info: BigQuery dataset already exists: ${GCS2BQ_DATASET}"
 
 bq load --project_id="${GCS2BQ_PROJECT}" --location="${GCS2BQ_LOCATION}" --schema bigquery.schema --source_format=AVRO --use_avro_logical_types --replace=true "${GCS2BQ_DATASET}.${GCS2BQ_TABLE}" "gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME}" || \
   error "Failed to load gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME} to BigQuery table ${GCS2BQ_DATASET}.${GCS2BQ_TABLE}!" 4
 
-gsutil rm "gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME}" || error "Failed deleting gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME}!" 5
+gcloud storage rm "gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME}" || error "Failed deleting gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME}!" 5
 
 rm -f "${GCS2BQ_FILE}"
diff --git a/tools/hive-bigquery/README.md b/tools/hive-bigquery/README.md
index 841109aa314..ed3de28495f 100755
--- a/tools/hive-bigquery/README.md
+++ b/tools/hive-bigquery/README.md
@@ -50,7 +50,7 @@ gcloud kms encrypt \
 4. Upload the encrypted file, `password.txt.enc`, to the GCS bucket. Note this file location, which will be provided later as an input to the migration tool.
 
 ```
-gsutil cp password.txt.enc gs:///
+gcloud storage cp password.txt.enc gs:///
 ```
 
 5. Delete the plaintext `password.txt` file from the local machine.
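A minimal sketch of step 5, assuming the commands run from the same working directory as above; `<your-bucket>` is a placeholder for the bucket path left blank in step 4:

```
# Remove the local plaintext secret; only the encrypted copy should remain.
rm password.txt
# Optional check that the encrypted file reached GCS (<your-bucket> is a placeholder).
gcloud storage ls gs://<your-bucket>/password.txt.enc
```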
# Usage diff --git a/tools/monitoring-alert-library/build/alerts/binaryAuthorizationPolicyChanges.yaml b/tools/monitoring-alert-library/build/alerts/binaryAuthorizationPolicyChanges.yaml index c1e91715f29..10dc6ebdbeb 100644 --- a/tools/monitoring-alert-library/build/alerts/binaryAuthorizationPolicyChanges.yaml +++ b/tools/monitoring-alert-library/build/alerts/binaryAuthorizationPolicyChanges.yaml @@ -24,7 +24,7 @@ documentation: content: #@ documentation_content() mimeType: text/markdown conditions: -- displayName: 'Log match condition: Cloud SQL instance configuration changes' +- displayName: "Log match condition: Binary Authorization policy configuration changes" conditionThreshold: filter: >- resource.type = "logging_bucket" AND metric.type = "logging.googleapis.com/user/binaryAuthorizationPolicyChanges" diff --git a/tools/monitoring-alert-library/samples/gcloud/alerts/binaryAuthorizationPolicyChanges.yaml b/tools/monitoring-alert-library/samples/gcloud/alerts/binaryAuthorizationPolicyChanges.yaml index 022dc31f9a2..6a812e9d8aa 100755 --- a/tools/monitoring-alert-library/samples/gcloud/alerts/binaryAuthorizationPolicyChanges.yaml +++ b/tools/monitoring-alert-library/samples/gcloud/alerts/binaryAuthorizationPolicyChanges.yaml @@ -12,7 +12,7 @@ documentation: ``` mimeType: text/markdown conditions: - - displayName: "Log match condition: Cloud SQL instance configuration changes" + - displayName: "Log match condition: Binary Authorization policy configuration changes" conditionThreshold: filter: resource.type = "logging_bucket" AND metric.type = diff --git a/tools/monitoring-alert-library/samples/tf/binaryAuthorizationPolicyChanges.yaml b/tools/monitoring-alert-library/samples/tf/binaryAuthorizationPolicyChanges.yaml index bff491e0303..2baf562c8a1 100644 --- a/tools/monitoring-alert-library/samples/tf/binaryAuthorizationPolicyChanges.yaml +++ b/tools/monitoring-alert-library/samples/tf/binaryAuthorizationPolicyChanges.yaml @@ -19,7 +19,7 @@ alerts: threshold_value: 0 trigger: count: 1 - display_name: "Log match condition: Cloud SQL instance configuration changes" + display_name: "Log match condition: Binary Authorization policy configuration changes" display_name: Binary Authorization Policy Changes documentation: content: |- diff --git a/tools/policy-tags-engine/README.md b/tools/policy-tags-engine/README.md index 6f7a1e5431e..a3a73303b87 100644 --- a/tools/policy-tags-engine/README.md +++ b/tools/policy-tags-engine/README.md @@ -95,7 +95,7 @@ Now, deploy the Python application code. Upload the test file to the GCS bucket created by Terraform (`dev-informatica-metadata`). ```bash - gsutil cp taxonomy_example.json gs://dev-informatica-metadata/ + gcloud storage cp taxonomy_example.json gs://dev-informatica-metadata/ ``` 4. **Verify the Results:** diff --git a/tools/quota-monitoring-alerting/java/README.md b/tools/quota-monitoring-alerting/java/README.md index 84680287875..dbb10e47a90 100644 --- a/tools/quota-monitoring-alerting/java/README.md +++ b/tools/quota-monitoring-alerting/java/README.md @@ -254,9 +254,9 @@ gcloud iam service-accounts keys create CREDENTIALS_FILE.json \ ``` mkdir terraform cd terraform -gsutil cp gs://quota-monitoring-solution-source/v4.2/main.tf . -gsutil cp gs://quota-monitoring-solution-source/v4.2/variables.tf . -gsutil cp gs://quota-monitoring-solution-source/v4.2/terraform.tfvars . +gcloud storage cp gs://quota-monitoring-solution-source/v4.2/main.tf . +gcloud storage cp gs://quota-monitoring-solution-source/v4.2/variables.tf . 
+gcloud storage cp gs://quota-monitoring-solution-source/v4.2/terraform.tfvars . ``` 2. Verify that you have these 4 files in your local directory: - CREDENTIALS_FILE.json diff --git a/tools/ranger-to-bigquery-biglake-assessment/README.md b/tools/ranger-to-bigquery-biglake-assessment/README.md index 225df2fd17f..e80f9b485e8 100644 --- a/tools/ranger-to-bigquery-biglake-assessment/README.md +++ b/tools/ranger-to-bigquery-biglake-assessment/README.md @@ -153,7 +153,7 @@ cat .json | jq -c '.policies[]' > ranger_policies.jsonl 4. copy the policy export to GCS: ```sh export PROJECT_ID= - gsutil cp ranger_policies.jsonl gs://$PROJECT_ID-ranger-assessment/ + gcloud storage cp ranger_policies.jsonl gs://$PROJECT_ID-ranger-assessment/ ``` 5. load into BQ: ```sh diff --git a/tools/run-tool-using-cloud-shell/sdf/tutorial.md b/tools/run-tool-using-cloud-shell/sdf/tutorial.md index 61c2cce66b4..cb69fb4d687 100644 --- a/tools/run-tool-using-cloud-shell/sdf/tutorial.md +++ b/tools/run-tool-using-cloud-shell/sdf/tutorial.md @@ -18,7 +18,7 @@ gcloud organizations list To lunch the SDF run the below command. ```bash -gsutil cp gs://atc-artifacts/SDF/docker-compose.yaml ./; sudo docker-compose up -d +gcloud storage cp gs://atc-artifacts/SDF/docker-compose.yaml ./; sudo docker-compose up -d ``` Let's open SDF page. Run the following to get web preview URL and click on the output URL to open SDF. @@ -26,4 +26,3 @@ Let's open SDF page. Run the following to get web preview URL and click on the o ```bash cloudshell get-web-preview-url -p 8080 ``` - diff --git a/tools/spiffe-gcp-proxy/README.md b/tools/spiffe-gcp-proxy/README.md index 792ca85bdb5..a8f1c376af7 100644 --- a/tools/spiffe-gcp-proxy/README.md +++ b/tools/spiffe-gcp-proxy/README.md @@ -48,4 +48,3 @@ In order to run the proxy will need to checkout the sources and use `go build ma | providerId | | Provider ID of the Workload Identity provider to use (required) | | poolId | | Pool ID of the Workload Identity Pool to use (required) | | scope | https://www.googleapis.com/auth/cloud-platform | Scope to request from GCP, e.g. https://www.googleapis.com/auth/cloud-platform | -
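For reference, the substitutions applied throughout this patch follow one consistent gsutil-to-`gcloud storage` mapping. The sketch below restates the pairs as they appear in the hunks above; SRC, DST, URL, PATTERN, PROJECT, CLASS, LOCATION, SA, and BUCKET are placeholders:

```bash
# gsutil cp SRC DST                      ->  gcloud storage cp SRC DST
# gsutil -m cp -r SRC DST                ->  gcloud storage cp --recursive SRC DST   (parallelism is on by default)
# gsutil ls|cat|mv URL                   ->  gcloud storage ls|cat|mv URL
# gsutil -m rm -r URL                    ->  gcloud storage rm --recursive URL
# gsutil -m rsync -r -c -x PATTERN SRC DST
#   ->  gcloud storage rsync --recursive --checksums-only --exclude PATTERN SRC DST
# gsutil mb -p PROJECT -c CLASS -l LOCATION -b on gs://BUCKET
#   ->  gcloud storage buckets create gs://BUCKET --project=PROJECT \
#         --default-storage-class=CLASS --location=LOCATION --uniform-bucket-level-access
# gsutil kms serviceaccount -p PROJECT   ->  gcloud storage service-agent --project=PROJECT
# gsutil iam ch serviceAccount:SA:objectAdmin gs://BUCKET
#   ->  gcloud storage buckets add-iam-policy-binding gs://BUCKET \
#         --member=serviceAccount:SA --role=roles/storage.objectAdmin
```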