10 changes: 5 additions & 5 deletions examples/dbt-on-cloud-composer/basic/dbt-project/cloudbuild.yaml
@@ -47,14 +47,14 @@ steps:
dbt docs generate --vars '{"project_id": "pso-dbt-airflow-demo","bigquery_location": "us","impersonate_service_account": ${_DBT_SERVICE_ACCOUNT},"execution_date": "1970-01-01","source_data_project": "bigquery-public-data"}' --profiles-dir .dbt --target cloud-build

# _GCS_BUCKET is the GCS Bucket that will store the dbt documentation
- name: gcr.io/cloud-builders/gsutil
- name: gcr.io/cloud-builders/gcloud
id: Copy the target to GCS
args:
- -m
- storage
- rsync
- -r
- -c
- -x
- --recursive
- --checksums-only
- --exclude
- .dockerignore|key|logs|models|tests|.dockerignore|.gitignore|cloudbuild.yaml|Dockerfile|README.md|.git
- .
- gs://${_GCS_BUCKET}/data/dbt-docs-basic/
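For readers porting similar build steps, the flag mapping is: gsutil's `-m` (parallel transfers) has no direct replacement because `gcloud storage` parallelizes by default, `-r` becomes `--recursive`, `-c` becomes `--checksums-only`, and `-x` becomes `--exclude`. A standalone before/after sketch, with a placeholder bucket and exclusion pattern:

```
# Before (gsutil): parallel (-m), recursive (-r), checksum comparison (-c), exclusion regex (-x)
gsutil -m rsync -r -c -x 'logs|\.git' . gs://my-bucket/docs/

# After (gcloud storage): parallelism is on by default, so -m simply drops out
gcloud storage rsync --recursive --checksums-only --exclude='logs|\.git' . gs://my-bucket/docs/
```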
@@ -47,14 +47,14 @@ steps:
dbt docs generate --vars '{"project_id": "pso-dbt-airflow-demo","bigquery_location": "us","impersonate_service_account": ${_DBT_SERVICE_ACCOUNT},"execution_date": "1970-01-01","source_data_project": "bigquery-public-data"}' --profiles-dir .dbt --target cloud-build

# _GCS_BUCKET is the GCS Bucket that will store the dbt documentation
- name: gcr.io/cloud-builders/gsutil
- name: gcr.io/cloud-builders/gcloud
id: Copy the target to GCS
args:
- -m
- storage
- rsync
- -r
- -c
- -x
- --recursive
- --checksums-only
- --exclude
- .dockerignore|key|logs|models|tests|.dockerignore|.gitignore|cloudbuild.yaml|Dockerfile|README.md|.git
- .
- gs://${_GCS_BUCKET}/data/dbt-docs-optimized/
8 changes: 4 additions & 4 deletions examples/direct-upload-to-gcs/README.md
@@ -56,13 +56,13 @@ DISTRIBUTION_BUCKET="<DISTRIBUTION BUCKET NAME>"
LIFECYCLE_POLICY_FILE="./lifecycle.json"

# Creates the uploadable bucket
gsutil mb -p $PROJECT_ID -l $REGION --retention 900s gs://$UPLOADABLE_BUCKET
gcloud storage buckets create gs://$UPLOADABLE_BUCKET --project=$PROJECT_ID --location=$REGION --retention-period=900s
# Creates the bucket for distribution
gsutil mb -p $PROJECT_ID -l $REGION gs://$DISTRIBUTION_BUCKET
gcloud storage buckets create gs://$DISTRIBUTION_BUCKET --project=$PROJECT_ID --location=$REGION
# Set lifecycle for the uploadable bucket
gsutil lifecycle set $LIFECYCLE_POLICY_FILE gs://$UPLOADABLE_BUCKET
gcloud storage buckets update gs://$UPLOADABLE_BUCKET --lifecycle-file=$LIFECYCLE_POLICY_FILE
# Publish all objects to all users
gsutil iam ch allUsers:objectViewer gs://$DISTRIBUTION_BUCKET
gcloud storage buckets add-iam-policy-binding gs://$DISTRIBUTION_BUCKET --member=allUsers --role=roles/storage.objectViewer
```
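The `lifecycle.json` file itself is not shown in this diff; a minimal sketch of what such a policy can look like (the one-day delete rule is an assumption, not the example's actual policy):

```
# Hypothetical lifecycle.json -- the real policy file ships with the example
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "Delete"}, "condition": {"age": 1}}
  ]
}
EOF
```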

### Step.2 Deploy to App Engine Standard
5 changes: 2 additions & 3 deletions examples/e2e-home-appliance-status-monitoring/README.md
@@ -32,7 +32,7 @@ GOOGLE_APPLICATION_CREDENTIALS=${PWD}"/e2e_demo_credential.json"

# create a new GCS bucket if you don't have one
BUCKET_NAME=[your-bucket-name]
gsutil mb -p ${GOOGLE_PROJECT_ID} gs://${BUCKET_NAME}/
gcloud storage buckets create gs://${BUCKET_NAME} --project=${GOOGLE_PROJECT_ID}
```

You also need to enable the following APIs in the APIs & Services menu.
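They can also be enabled from the CLI; the service names below are placeholders for whichever APIs the menu list expands to:

```
# Hypothetical service names -- substitute the APIs this demo actually lists
gcloud services enable pubsub.googleapis.com bigquery.googleapis.com
```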
@@ -50,7 +50,7 @@ If you are using our trained model:
tar jxvf data/model.tar.bz2

# upload the model to your bucket
gsutil cp -r model gs://${BUCKET_NAME}
gcloud storage cp --recursive model gs://${BUCKET_NAME}
```
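To confirm the upload, the copied files can be listed back:

```
# Recursively list the uploaded model directory
gcloud storage ls --recursive gs://${BUCKET_NAME}/model
```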

If you want to train your own model:
@@ -186,4 +186,3 @@ jupyter notebook

![Demo system sample output](./img/demo03.gif)


16 changes: 8 additions & 8 deletions examples/gcs-to-bq-serverless-services/Readme.md
@@ -10,20 +10,20 @@ In this solution, we build an approch to ingestion flat files (in GCS) to BigQue
```
PROJECT_ID=<<project_id>>
GCS_BUCKET_NAME=<<Bucket name>>
gsutil mb gs://${GCS_BUCKET_NAME}
gsutil notification create \
-t projects/${PROJECT_ID}/topics/create_notification_${GCS_BUCKET_NAME} \
-e OBJECT_FINALIZE \
-f json gs://${GCS_BUCKET_NAME}
gcloud storage buckets create gs://${GCS_BUCKET_NAME}
gcloud storage buckets notifications create gs://${GCS_BUCKET_NAME} \
--topic=projects/${PROJECT_ID}/topics/create_notification_${GCS_BUCKET_NAME} \
--event-types=OBJECT_FINALIZE \
--payload-format=json
```
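To verify the wiring, the notification configuration can be listed back:

```
# Confirm the OBJECT_FINALIZE notification is attached to the bucket
gcloud storage buckets notifications list gs://${GCS_BUCKET_NAME}
```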
- **Step 2:** Build and copy the jar to a GCS bucket (create a GCS bucket to store the jar if you don't have one). There are a number of Dataproc templates available to [use](https://github.com/GoogleCloudPlatform/dataproc-templates).

```
GCS_ARTIFACT_REPO=<<artifact repo name>>
gsutil mb gs://${GCS_ARTIFACT_REPO}
gcloud storage buckets create gs://${GCS_ARTIFACT_REPO}
cd gcs2bq-spark
mvn clean install
gsutil cp target/GCS2BQWithSpark-1.0-SNAPSHOT.jar gs://${GCS_ARTIFACT_REPO}/
gcloud storage cp target/GCS2BQWithSpark-1.0-SNAPSHOT.jar gs://${GCS_ARTIFACT_REPO}/
```
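With the jar staged, a Dataproc Serverless batch can be submitted along these lines; `--region` and `--class` here are assumptions, since the real values depend on your network setup (Step 3) and the job's main class:

```
# Sketch only: the region and main class are placeholders, not values from this guide
gcloud dataproc batches submit spark \
    --region=us-central1 \
    --jars=gs://${GCS_ARTIFACT_REPO}/GCS2BQWithSpark-1.0-SNAPSHOT.jar \
    --class=com.example.GCS2BQ
```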

- **Step 3:** [The page](https://cloud.google.com/dataproc-serverless/docs/concepts/network) describes the network configuration required to run serverless Spark
@@ -76,7 +76,7 @@ In this solution, we build an approch to ingestion flat files (in GCS) to BigQue
- **Create BQ temp bucket** GCS to BigQuery requires a temporary bucket. Let's create one:
```
GCS_TEMP_BUCKET=<<temp_bucket>>
gsutil mb gs://${GCS_TEMP_BUCKET}
gcloud storage buckets create gs://${GCS_TEMP_BUCKET}
```
- **Create Deadletter Topic and Subscription** Let's create a dead-letter topic and subscription.

10 changes: 5 additions & 5 deletions examples/gcs-to-bq/data-ingestion/cloudbuild.yaml
@@ -1,14 +1,14 @@
steps:
# _COMPOSER_BUCKET_NAME is the GCS Bucket that will store the dags
- name: gcr.io/cloud-builders/gsutil
- name: gcr.io/cloud-builders/gcloud
id: Copy dag definition (with its dependencies) to the Composer folder (dags and plugins folders included)
dir: 'data-ingestion'
args:
- -m
- storage
- rsync
- -r
- -c
- -x
- --recursive
- --checksums-only
- --exclude
- .dockerignore|.gitignore|cloudbuild.yaml|README.md|.git|imgs|tests|deps|deploy|dbt|config|sql|terraform
- .
- gs://${_COMPOSER_BUCKET_NAME}
7 changes: 3 additions & 4 deletions examples/iot-nirvana/README.md
@@ -111,12 +111,12 @@ Copy the JAR package containing the client binaries to Google Cloud Storage in
the bucket previously created. Run the following command in the `/client`
folder:

`gsutil cp target/google-cloud-demo-iot-nirvana-client-jar-with-dependencies.jar gs://$BUCKET_NAME/client/`
`gcloud storage cp target/google-cloud-demo-iot-nirvana-client-jar-with-dependencies.jar gs://$BUCKET_NAME/client/`

Check that the JAR file has been correctly copied to the Google Cloud Storage
bucket with the following command:

`gsutil ls gs://$BUCKET_NAME/client/google-cloud-demo-iot-nirvana-client-jar-with-dependencies.jar`
`gcloud storage ls gs://$BUCKET_NAME/client/google-cloud-demo-iot-nirvana-client-jar-with-dependencies.jar`

## AppEngine Web frontend

Expand All @@ -136,7 +136,7 @@ from the temperature sensors:
bootstrapping script
2. Copy the `startup.sh` file in the Google Cloud Storage bucket by running the
following command in the `/app-engine` folder:
`gsutil cp src/main/webapp/startup.sh gs://$BUCKET_NAME/`
`gcloud storage cp src/main/webapp/startup.sh gs://$BUCKET_NAME/`
3. Modify the `/pom.xml` file in the `/app-engine` folder:
* Update the `<app.id/>` node with the **[PROJECT_ID]** of your GCP project
* Update the `<app.version/>` with the desired version of the application
@@ -194,4 +194,3 @@ following:

To stop the simulation click on the **Stop** button at the bottom right of the
page `https://[YOUR_PROJECT_ID].appspot.com/index.html`.

2 changes: 1 addition & 1 deletion examples/iot-nirvana/app-engine/src/main/webapp/startup.sh
@@ -32,7 +32,7 @@ INDEX_START=$[$INSTANCE_NUMBER*10]
# Create a temporary folder and copy the client
echo "Creating temporary folder and downloading the client"
mkdir ${TMP_FOLDER}
/usr/bin/gsutil cp \
/usr/bin/gcloud storage cp \
gs://${BUCKET_NAME}/client/${CLIENT_JAR} \
${TMP_FOLDER}/${CLIENT_JAR} 1>${TMP_FOLDER}/startup_log.txt 2>&1

20 changes: 10 additions & 10 deletions examples/iot-nirvana/setup_gcp_environment.sh
@@ -67,17 +67,17 @@ echo "Executing gcloud config set project ${PROJECT_ID}"
gcloud config set project ${PROJECT_ID}

# create a bucket with the name of the project-id
echo "Executing gsutil mb gs://${PROJECT_ID}"
gsutil mb gs://${PROJECT_ID}
echo "Executing gcloud storage buckets create gs://${PROJECT_ID}"
gcloud storage buckets create gs://${PROJECT_ID}

#create DataFlow folders
touch delete.me
echo "Executing gsutil cp delete.me gs://${PROJECT_ID}/dataflow/"
gsutil cp delete.me gs://${PROJECT_ID}/dataflow/
echo "Executing gsutil cp delete.me gs://${PROJECT_ID}/dataflow/temp/"
gsutil cp delete.me gs://${PROJECT_ID}/dataflow/temp/
echo "Executing gsutil cp delete.me gs://${PROJECT_ID}/dataflow/staging/"
gsutil cp delete.me gs://${PROJECT_ID}/dataflow/staging/
echo "Executing gcloud storage cp delete.me gs://${PROJECT_ID}/dataflow/"
gcloud storage cp delete.me gs://${PROJECT_ID}/dataflow/
echo "Executing gcloud storage cp delete.me gs://${PROJECT_ID}/dataflow/temp/"
gcloud storage cp delete.me gs://${PROJECT_ID}/dataflow/temp/
echo "Executing gcloud storage cp delete.me gs://${PROJECT_ID}/dataflow/staging/"
gcloud storage cp delete.me gs://${PROJECT_ID}/dataflow/staging/

# create PubSub topic
echo "Executing gcloud beta pubsub topics create ${PUBSUB_TOPIC}"
@@ -96,8 +96,8 @@ echo "Executing bq --location=US mk --dataset ${PROJECT_ID}:${BIGQUERY_DATASET}"
bq --location=US mk --dataset ${PROJECT_ID}:${BIGQUERY_DATASET}

# copy VM startup-script into Google Cloud Storage
echo "Executing gsutil cp startup_install_java8.sh gs://${PROJECT_ID}"
gsutil cp startup_install_java8.sh gs://${PROJECT_ID}
echo "Executing gcloud storage cp startup_install_java8.sh gs://${PROJECT_ID}"
gcloud storage cp startup_install_java8.sh gs://${PROJECT_ID}

# generate a temporary VM that will be used to generate custom image
echo "Executing gcloud compute instances create debian9-java8-img --zone ${ZONE} --image-family debian-9 --image-project debian-cloud --metadata startup-script-url=gs://$1/startup_install_java8.sh"
@@ -39,7 +39,7 @@ function log_and_fail() {

# GCS copy wrapper
function g_rm() {
CMD="gsutil rm ${1}"
CMD="gcloud storage rm ${1}"
${CMD} || log_and_fail "Unable to execute ${CMD}"
}

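These init-action wrappers all follow one pattern: build the command as a string, run it, and abort through `log_and_fail` if it fails. A usage sketch with a placeholder URI:

```
# Remove a staged object; the bootstrap aborts if the delete fails
g_rm "gs://my-bucket/tmp/stale-marker"
```

Since `CMD` is a flat string, arguments containing spaces would be word-split; the scripts only ever pass plain GCS URIs, so the pattern holds here.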
@@ -84,7 +84,7 @@ function log_and_fail() {

# gcs copy
function g_cp() {
CMD="gsutil cp ${1} ${2}"
CMD="gcloud storage cp ${1} ${2}"
${CMD} || log_and_fail "Unable to execute ${CMD}"
}

@@ -49,7 +49,7 @@ function log_and_fail() {

# gcs copy
function g_cp_r() {
CMD="gsutil -m cp -r ${1} ${2}"
CMD="gcloud storage cp --recursive ${1} ${2}"
${CMD} || log_and_fail "Unable to execute ${CMD}"
}

@@ -49,7 +49,7 @@ function decrypt_file_with_kms_key() {
local encrypted_file_uri=$1
local kms_key_uri=$2

gsutil cat "${encrypted_file_uri}" | gcloud kms decrypt \
gcloud storage cat "${encrypted_file_uri}" | gcloud kms decrypt \
--key "${kms_key_uri}" \
--ciphertext-file - --plaintext-file -
}
@@ -68,7 +68,7 @@ function set_property_hive_site() {
}

function g_download() {
gsutil cp "$1" "$2" || log_and_fail "Unable to download $1"
gcloud storage cp "$1" "$2" || log_and_fail "Unable to download $1"
}

# hive config for remote hive metastore
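A usage sketch for the KMS helper above, with placeholder URIs; it streams the ciphertext from GCS and writes the decrypted plaintext to stdout:

```
# Hypothetical bucket and key names -- substitute your own
decrypt_file_with_kms_key \
    "gs://my-bucket/secrets/hive-password.enc" \
    "projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key"
```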
@@ -126,7 +126,7 @@ function log_and_fail() {

# gcs copy
function g_cp() {
CMD="gsutil cp ${1} ${2}"
CMD="gcloud storage cp ${1} ${2}"
${CMD} || log_and_fail "Unable to execute ${CMD}"
}

@@ -34,7 +34,7 @@ function log_and_fail() {
}

function g_download() {
gsutil cp "$1" "$2" || log_and_fail "Unable to download $1"
gcloud storage cp "$1" "$2" || log_and_fail "Unable to download $1"
}

function set_env_helpers() {
@@ -65,7 +65,7 @@ function encrypt_to_file_with_kms_key() {

# GCS copy wrapper
function g_cp() {
CMD="gsutil cp ${1} ${2}"
CMD="gcloud storage cp ${1} ${2}"
${CMD} || log_and_fail "Unable to execute ${CMD}"
}

14 changes: 7 additions & 7 deletions examples/ml-audio-content-profiling/README.md
@@ -21,7 +21,7 @@ your file into an accepted encoding type.
The solution involves creating five GCS buckets using default configuration settings. Because of
this, no [object lifecycle management](https://cloud.google.com/storage/docs/lifecycle) policies are
configured. If you would like to specify different retention policies you can [enable](https://cloud.google.com/storage/docs/managing-lifecycles#enable)
this using `gsutil` while following the deployment process.
this using `gcloud storage` while following the deployment process.

During processing, audio files are moved between buckets as they progress
through various stages of the pipeline. Specifically, the audio file should first be moved to the
@@ -189,28 +189,28 @@ export STATIC_UUID=$(echo $(uuidgen | tr '[:upper:]' '[:lower:]') | cut -c1-20)
````
export staging_audio_bucket=staging-audio-files-$STATIC_UUID

gsutil mb gs://$staging_audio_bucket
gcloud storage buckets create gs://$staging_audio_bucket
````

````
export processed_audio_bucket=processed-audio-files-$STATIC_UUID
gsutil mb gs://$processed_audio_bucket
gcloud storage buckets create gs://$processed_audio_bucket
````

````
export error_audio_bucket=error-audio-files-$STATIC_UUID
gsutil mb gs://$error_audio_bucket
gcloud storage buckets create gs://$error_audio_bucket
````


````
export transcription_bucket=transcription-files-$STATIC_UUID
gsutil mb gs://$transcription_bucket
gcloud storage buckets create gs://$transcription_bucket
````

````
export output_bucket=output-files-$STATIC_UUID
gsutil mb gs://$output_bucket
gcloud storage buckets create gs://$output_bucket
````
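Since the five buckets differ only by prefix, an equivalent loop (same names; a sketch for anyone who doesn't need the intermediate environment variables) is:

````
# Equivalent to the five create commands above
for prefix in staging-audio-files processed-audio-files error-audio-files \
    transcription-files output-files; do
  gcloud storage buckets create "gs://${prefix}-${STATIC_UUID}"
done
````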


@@ -342,7 +342,7 @@ All of the resources should be deployed.
### View Results
<h3>Test it out</h3>

1. You can start by trying to upload an audio file in GCS. You can do this using `gsutil` or in the
1. You can start by trying to upload an audio file in GCS. You can do this using `gcloud storage` or in the
UI under the <b>staging bucket</b>. This will trigger `send_stt_api_function`. This submits the
request to the Speech API and publishes the job id to PubSub.
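For example, with a placeholder file name:

```
# sample.flac is hypothetical; use any audio encoding the Speech API accepts
gcloud storage cp sample.flac gs://$staging_audio_bucket/
```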
