
Commit c8e4179

Add custom routes for directpath to net-vpc module (GoogleCloudPlatform#2966)
* add custom routes for directpath to net-vpc module
* blueprint tests
* blueprint tests
* blueprint tests
* fast tests
* tfdoc
* module examples
Parent: 73022a7. Commit: c8e4179.

File tree: 79 files changed, +356 -205 lines


blueprints/apigee/apigee-x-foundations/README.md (3 additions, 3 deletions)

@@ -218,7 +218,7 @@ module "apigee-x-foundations" {
 }
 }
 }
-# tftest modules=8 resources=62
+# tftest modules=8 resources=63
 ```
 
 ### Apigee X in service project with peering disabled and exposed using Global LB
@@ -376,7 +376,7 @@ module "apigee-x-foundations" {
 }
 }
 }
-# tftest modules=6 resources=48
+# tftest modules=6 resources=49
 ```
 
 ### Apigee X in standalone project with peering disabled and exposed using Global External Application LB
@@ -453,7 +453,7 @@ module "apigee-x-foundations" {
 }
 enable_monitoring = true
 }
-# tftest modules=6 resources=63
+# tftest modules=6 resources=64
 ```
 
 <!-- TFDOC OPTS files:1 show_extra:1 -->

blueprints/apigee/bigquery-analytics/README.md (3 additions, 3 deletions)

@@ -11,8 +11,8 @@ In addition to this it also creates the setup depicted in the diagram below to e
 Find below a description on how the analytics export to BigQuery works:
 
 1. A Cloud Scheduler Job runs daily at a selected time, publishing a message to a Pub/Sub topic.
-2. The message published triggers the execution of a function that makes a call to the Apigee Analytics Export API to export the analytical data available for the previous day.
-3. The export function is passed the Apigee organization, environments, datastore name as environment variables. The service account used to run the function needs to be granted the Apigee Admin role on the project. The Apigee Analytics engine asynchronously exports the analytical data to a GCS bucket. This requires the _Apigee Service Agent_ service account to be granted the _Storage Admin_ role on the project.
+2. The message published triggers the execution of a function that makes a call to the Apigee Analytics Export API to export the analytical data available for the previous day.
+3. The export function is passed the Apigee organization, environments, datastore name as environment variables. The service account used to run the function needs to be granted the Apigee Admin role on the project. The Apigee Analytics engine asynchronously exports the analytical data to a GCS bucket. This requires the _Apigee Service Agent_ service account to be granted the _Storage Admin_ role on the project.
 4. A notification of the files created on GCS is received in a Pub/Sub topic that triggers the execution of the cloud function in charge of loading the data from GCS to the right BigQuery table partition. This function is passed the name of the BigQuery dataset, its location and the name of the table inside that dataset as environment variables. The service account used to run the function needs to be granted the _Storage Object Viewer_ role on the GCS bucket, the _BigQuery Job User_ role on the project and the _BigQuery Data Editor_ role on the table.
 
 Note: This setup only works if you are not using custom analytics.
@@ -103,5 +103,5 @@ module "test" {
 europe-west1 = "10.0.0.0/28"
 }
 }
-# tftest modules=10 resources=72
+# tftest modules=10 resources=73
 ```
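The daily export flow described in the diff above (Cloud Scheduler → Pub/Sub → function → Apigee Analytics Export API) can be sketched as follows. This is a minimal illustration, not the blueprint's actual function: the helper names are assumptions, and only the request construction for the Export API is shown.

```python
# Sketch of the export step (step 2 above): build the request body that a
# Pub/Sub-triggered function would POST to the Apigee Analytics Export API
# to export the previous day's data. Helper names are illustrative.
import datetime


def build_export_request(datastore_name: str, for_date: datetime.date) -> dict:
    """Return the JSON body for a daily analytics export covering one day."""
    day = for_date.isoformat()
    return {
        "name": f"daily-export-{day}",
        "outputFormat": "json",
        "datastoreName": datastore_name,
        "dateRange": {"start": day, "end": day},
    }


def export_url(organization: str, environment: str) -> str:
    """Apigee Analytics Export API endpoint for one environment."""
    return (f"https://apigee.googleapis.com/v1/organizations/{organization}"
            f"/environments/{environment}/analytics/exports")
```

In the blueprint the organization, environments and datastore name arrive as environment variables on the function (the exact variable names are not shown in this diff); the function would POST one such body per environment, typically for `date.today() - timedelta(days=1)`.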

blueprints/apigee/hybrid-gke/README.md (1 addition, 1 deletion)

@@ -78,5 +78,5 @@ module "test" {
 project_id = "my-project"
 hostname = "test.myorg.org"
 }
-# tftest modules=18 resources=67
+# tftest modules=18 resources=68
 ```

blueprints/apigee/network-patterns/nb-glb-psc-neg-sb-psc-ilbl7-hybrid-neg/README.md (1 addition, 1 deletion)

@@ -79,5 +79,5 @@ module "test" {
 onprem_project_id = "my-onprem-project"
 hostname = "test.myorg.org"
 }
-# tftest modules=14 resources=84
+# tftest modules=14 resources=86
 ```

blueprints/cloud-operations/adfs/README.md (2 additions, 2 deletions)

@@ -42,7 +42,7 @@ Once the resources have been created, do the following:
 
 https://adfs.my-domain.org/adfs/ls/IdpInitiatedSignOn.aspx
 
-2. Enter the username and password of one of the users provisioned. The username has to be in the format: username@my-domain.org
+2. Enter the username and password of one of the users provisioned. The username has to be in the format: <username@my-domain.org>
 3. Verify that you have successfully signed in.
 
 Once done testing, you can clean up resources by running `terraform destroy`.
@@ -89,5 +89,5 @@ module "test" {
 ad_dns_domain_name = "example.com"
 adfs_dns_domain_name = "adfs.example.com"
 }
-# tftest modules=5 resources=25
+# tftest modules=5 resources=26
 ```

blueprints/cloud-operations/asset-inventory-feed-remediation/README.md (1 addition, 2 deletions)

@@ -26,7 +26,6 @@ The resources created in this blueprint are shown in the high level diagram belo
 
 <img src="diagram.png" width="640px">
 
-
 ## Running the blueprint
 
 Clone this repository or [open it in cloud shell](https://ssh.cloud.google.com/cloudshell/editor?cloudshell_git_repo=https%3A%2F%2Fgithub.com%2Fterraform-google-modules%2Fcloud-foundation-fabric&cloudshell_print=cloud-shell-readme.txt&cloudshell_working_dir=blueprints%2Fcloud-operations%2Fasset-inventory-feed-remediation), then go through the following steps to create resources:
@@ -82,5 +81,5 @@ module "test" {
 project_id = "project-1"
 }
 
-# tftest modules=7 resources=28
+# tftest modules=7 resources=29
 ```

blueprints/cloud-operations/dns-fine-grained-iam/README.md (1 addition, 1 deletion)

@@ -128,5 +128,5 @@ module "test1" {
 project_create = true
 project_id = "test"
 }
-# tftest modules=9 resources=32
+# tftest modules=9 resources=33
 ```

blueprints/cloud-operations/dns-shared-vpc/README.md (1 addition, 1 deletion)

@@ -51,5 +51,5 @@ module "test" {
 shared_vpc_link = "https://www.googleapis.com/compute/v1/projects/test-dns/global/networks/default"
 teams = ["team1", "team2"]
 }
-# tftest modules=9 resources=22
+# tftest modules=9 resources=24
 ```

blueprints/cloud-operations/packer-image-builder/README.md (1 addition, 1 deletion)

@@ -115,5 +115,5 @@ module "test" {
 packer_account_users = ["user:john@example.com"]
 create_packer_vars = true
 }
-# tftest modules=7 resources=20 files=pkrvars
+# tftest modules=7 resources=21 files=pkrvars
 ```

blueprints/cloud-operations/unmanaged-instances-healthcheck/README.md (14 additions, 7 deletions)

@@ -8,29 +8,28 @@ The blueprint contains the following components:
 
 - [Cloud Scheduler](https://cloud.google.com/scheduler) to initiate a healthcheck on a schedule.
 - [Serverless VPC Connector](https://cloud.google.com/vpc/docs/configure-serverless-vpc-access) to allow Cloud Functions TCP level access to private GCE instances.
-- **Healthchecker** Cloud Function to perform TCP checks against GCE instances.
+- **Healthchecker** Cloud Function to perform TCP checks against GCE instances.
 - **Restarter** PubSub topic to keep track of instances which are to be restarted.
 - **Restarter** Cloud Function to perform GCE instance reset for instances which are failing TCP healthcheck.
 
-
 The resources created in this blueprint are shown in the high level diagram below:
 
 <img src="diagram.png" width="640px">
 
 ### Healthchecker configuration
+
 Healthchecker cloud function has the following configuration options:
 
 - `FILTER` to filter list of GCE instances the health check will be targeted to. For instance `(name = nginx-*) AND (labels.env = dev)`
-- `GRACE_PERIOD` time period to prevent instance check of newly created instanced allowing services to start on the instance.
+- `GRACE_PERIOD` time period to prevent instance check of newly created instanced allowing services to start on the instance.
 - `MAX_PARALLELISM` - max amount of healthchecks performed in parallel, be aware that every check requires an open TCP connection which is limited.
-- `PUBSUB_TOPIC` topic to publish the message with instance metadata.
-- `RECHECK_INTERVAL` time period for performing recheck, when a check is failed it will be rechecked before marking as unhealthy.
+- `PUBSUB_TOPIC` topic to publish the message with instance metadata.
+- `RECHECK_INTERVAL` time period for performing recheck, when a check is failed it will be rechecked before marking as unhealthy.
 - `TCP_PORT` port used for health checking
 - `TIMEOUT` the timeout time of a TCP probe.
 
 **_NOTE:_** In the current example `healthchecker` is used along with the `restarter` cloud function, but restarter can be replaced with another function like [Pubsub2Inbox](https://github.com/GoogleCloudPlatform/professional-services/tree/main/tools/pubsub2inbox) for email notifications.
 
-
 ## Running the blueprint
 
 Clone this repository or [open it in cloud shell](https://ssh.cloud.google.com/cloudshell/editor?cloudshell_git_repo=https%3A%2F%2Fgithub.com%2Fterraform-google-modules%2Fcloud-foundation-fabric&cloudshell_print=cloud-shell-readme.txt&cloudshell_working_dir=blueprints%2Fcloud-operations%2Funmanaged-instances-healthcheck), then go through the following steps to create resources:
@@ -41,17 +40,21 @@ Clone this repository or [open it in cloud shell](https://ssh.cloud.google.com/c
 Once done testing, you can clean up resources by running `terraform destroy`. To persist state, check out the `backend.tf.sample` file.
 
 ## Testing the blueprint
+
 Configure `gcloud` with the project used for the deployment
+
 ```bash
 gcloud config set project <MY-PROJECT-ID>
 ```
 
 Wait until cloud scheduler executes the healthchecker function
+
 ```bash
 gcloud scheduler jobs describe healthchecker-schedule
 ```
 
 Check the healthchecker function logs to ensure instance is checked and healthy
+
 ```bash
 gcloud functions logs read cf-healthchecker --region=europe-west1
 
@@ -61,11 +64,13 @@ gcloud functions logs read cf-healthchecker --region=europe-west1
 ```
 
 Stop `nginx` service on the test instance
+
 ```
 gcloud compute ssh --zone europe-west1-b nginx-test -- 'sudo systemctl stop nginx'
 ```
 
 Wait a few minutes to allow scheduler to execute another healthcheck and examine the function logs
+
 ```bash
 gcloud functions logs read cf-healthchecker --region=europe-west1
 
@@ -78,6 +83,7 @@ gcloud functions logs read cf-healthchecker --region=europe-west1
 ```
 
 Examine `cf-restarter` function logs
+
 ```bash
 gcloud functions logs read cf-restarter --region=europe-west1
 
@@ -88,6 +94,7 @@ gcloud functions logs read cf-restarter --region=europe-west1
 ```
 
 Verify that `nginx` service is running again and uptime shows that instance has been reset
+
 ```bash
 gcloud compute ssh --zone europe-west1-b nginx-test -- 'sudo systemctl status nginx'
 gcloud compute ssh --zone europe-west1-b nginx-test -- 'uptime'
@@ -128,5 +135,5 @@ module "test" {
 billing_account = "123456-123456-123456"
 project_create = true
 }
-# tftest modules=11 resources=46
+# tftest modules=11 resources=47
 ```
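As context for the healthchecker options listed in the diff above (`TCP_PORT`, `TIMEOUT`, `MAX_PARALLELISM`), the core TCP probe can be sketched roughly like this. Function names and default values are illustrative assumptions, not the blueprint's actual code.

```python
# Minimal sketch of a TCP healthcheck with bounded parallelism: probe each
# instance's port within a timeout, running at most MAX_PARALLELISM checks
# at once (each in-flight check holds an open TCP connection).
import socket
from concurrent.futures import ThreadPoolExecutor

TCP_PORT = 80          # port used for health checking (illustrative default)
TIMEOUT = 1.0          # timeout of a single TCP probe, in seconds
MAX_PARALLELISM = 10   # max healthchecks performed in parallel


def tcp_check(host: str, port: int = TCP_PORT, timeout: float = TIMEOUT) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def check_instances(hosts: list) -> dict:
    """Probe all hosts, at most MAX_PARALLELISM at a time; map host -> healthy."""
    with ThreadPoolExecutor(max_workers=MAX_PARALLELISM) as pool:
        return dict(zip(hosts, pool.map(tcp_check, hosts)))
```

In the blueprint, unhealthy hosts would additionally be rechecked after `RECHECK_INTERVAL` before being published to the restarter's Pub/Sub topic; that logic is omitted here.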
