This repository was archived by the owner on Jan 3, 2023. It is now read-only.

Commit adb38ec (parent: 66bfb17)

Incorporate Jennifer's suggestions. Add section on deploying their own model.

File tree: 1 file changed (+47, -15 lines)

Notebooks/MLADS_Spring_2019.ipynb

Lines changed: 47 additions & 15 deletions
@@ -93,7 +93,7 @@
 "\n",
 "Notice that currently in `my_api/runserver.py`, there are two endpoints defined, marked by the `@ai4e_service.api_async_func` and the `@ai4e_service.api_sync_func` decorators. \n",
 "\n",
-"For a more detaile dexplanation of the input/output patterns, see this [section](https://github.com/microsoft/AIforEarth-API-Development/blob/master/Quickstart.md#inputoutput-patterns) in our Quickstart.\n",
+"For a more detailed explanation of the input/output patterns, see this [section](https://github.com/microsoft/AIforEarth-API-Development/blob/master/Quickstart.md#inputoutput-patterns) in our Quickstart.\n",
 "\n",
 "##### Async endpoint\n",
 "\n",
@@ -136,6 +136,8 @@
 "```\n",
 "The port mapping specified using `-p` maps localhost:6002 to port 1212 in the Docker container, which you exposed in the Dockerfile. \n",
 "\n",
+"If you're on Windows and run into an error `standard_init_linux.go:207: exec user process caused \"no such file or directory\"`, see this [section](https://github.com/microsoft/AIforEarth-API-Development/blob/master/Quickstart.md#run-your-image-locally) in our Quickstart for how to fix it.\n",
+"\n",
 "\n",
 "### 4.3 Test the synchronous endpoint\n",
 "You can now make an API call to\n",
@@ -148,7 +150,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 38,
+"execution_count": 1,
 "metadata": {
 "collapsed": false
 },
@@ -183,7 +185,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 46,
+"execution_count": 2,
 "metadata": {
 "collapsed": false
 },
@@ -192,24 +194,38 @@
 "name": "stdout",
 "output_type": "stream",
 "text": [
-"TaskId: 4033\n"
+"TaskId: 9365\n"
 ]
 }
 ],
 "source": [
+"import json\n",
+"\n",
 "async_endpoint = 'example'\n",
 "\n",
 "url = base_url + async_endpoint\n",
 "\n",
-"payload = {'key':'value'}\n",
+"payload = {'key': 'value'}\n",
+"payload = json.dumps(payload) # serialize the json payload\n",
 "\n",
 "r = requests.post(url, data=payload)\n",
 "print(r.text)"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 47,
+"execution_count": 3,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"task_id = r.text.split('TaskId: ')[1]"
+]
+},
+{
+"cell_type": "code",
+"execution_count": 4,
 "metadata": {
 "collapsed": false
 },
@@ -218,17 +234,18 @@
 "name": "stdout",
 "output_type": "stream",
 "text": [
-"{\"uuid\": 4033, \"status\": \"Task failed - Body was empty or could not be parsed.\", \"timestamp\": \"2019-06-04 23:56:06\", \"endpoint\": \"uri\"}\n",
+"{\"uuid\": 9365, \"status\": \"running model\", \"timestamp\": \"2019-06-07 04:44:46\", \"endpoint\": \"uri\"}\n",
 "\n"
 ]
 }
 ],
 "source": [
 "# check the status using the TaskID returned\n",
-"task_id = r.text.split('TaskId: ')[1]\n",
-"\n",
 "r = requests.get(base_url + 'task/' + task_id)\n",
-"print(r.text)"
+"print(r.text)\n",
+"\n",
+"# the example async API sleeps for 10 seconds. Check status again after 10 seconds and you should\n",
+"# see that the \"status\" is now \"completed\"."
 ]
 },
 {
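The submit-then-poll pattern in the notebook cells above can be sketched end to end as one helper. This is a sketch under the notebook's assumptions: a `base_url` pointing at the running container, the `example` async endpoint, and a `task/<id>` status route; the `parse_task_id` and `submit_and_poll` names are ours, not part of the framework.

```python
import json
import time

import requests  # same HTTP library the notebook cells use


def parse_task_id(response_text):
    # The example async endpoint replies with text like "TaskId: 9365"
    return response_text.split('TaskId: ')[1].strip()


def submit_and_poll(base_url, endpoint='example', payload=None,
                    interval=2, timeout=60):
    """POST a JSON payload to the async endpoint, then poll task/<id>
    until the reported status is "completed" (the example API sleeps
    ~10 s before finishing)."""
    body = json.dumps(payload or {'key': 'value'})
    r = requests.post(base_url + endpoint, data=body)
    task_id = parse_task_id(r.text)
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = json.loads(requests.get(base_url + 'task/' + task_id).text)
        if status.get('status') == 'completed':
            return status
        time.sleep(interval)
    raise TimeoutError('task %s did not finish within %s s' % (task_id, timeout))
```

The polling interval and timeout are arbitrary; tune them to your model's actual runtime.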
@@ -288,7 +305,7 @@
 "source": [
 "## 6. Deploy on a VM\n",
 "\n",
-"One way to deploy this API for people in your team or a small group of users to call is to serve it from an Azure VM.\n",
+"One way to deploy this API for people in your team or a small group of users to call is to serve it from an Azure Linux VM.\n",
 "\n",
 "This involves starting a Docker container based on your Docker image in a [tmux session](https://hackernoon.com/a-gentle-introduction-to-tmux-8d784c404340) (or running in the background) on the VM. The tmux session allows your process to run after you've left the ssh session.\n",
 "\n",
@@ -298,6 +315,8 @@
 "docker pull yasiyu.azurecr.io/my-api/1.0-example-api:1\n",
 "```\n",
 "\n",
+"(It seems that you need your ACR name in all lower case...)\n",
+"\n",
 "And start a container based on that image:\n",
 "\n",
 "```\n",
@@ -334,7 +353,7 @@
 "```\n",
 "az container create --resource-group yasiyu_rg --name example-container1 --image yasiyu.azurecr.io/my-api/1.0-example-api:1 --dns-name-label yasiyu-api1 --ports 1212 --registry-username <username> --registry-password <password>\n",
 "```\n",
-"- You can look up the `registry-username` and `registry-password` fields in the Azure Portal page for your registry, in the \"Access keys\" section.\n",
+"- You can look up the `registry-username` and `registry-password` fields in the Azure Portal page for your registry, in the \"Access keys\" section under \"Settings\".\n",
 "\n",
 "- Note that the `ports` argument should be `1212` since that is the port we specified to expose in the Dockerfile.\n",
 "\n",
@@ -353,7 +372,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 45,
+"execution_count": 5,
 "metadata": {
 "collapsed": false
 },
@@ -394,7 +413,20 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## 8. Deploy on our scalable platform\n",
+"## 8. Deploy your own model\n",
+"\n",
+"Time to plug in your useful model! If you don't have a model that you'd like to try this with right now, we have sample code in [Examples](Examples) for PyTorch and TensorFlow (in addition, `animal-detector-api` is a model built with the TensorFlow Object Detection API) and instructions for downloading the required model files and sample data. \n",
+"\n",
+"You can now copy the `base-py` folder that we've built this example image with to your own repo, drop in your model file in `my-api`, and place your input handling and model inference code in `runserver.py` or another file it imports from. \n",
+"\n",
+"If you decide to change the name of the folder `my-api` or `runserver.py`, you also need to change the path to this entry point script in `supervisord.conf`."
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## 9. Deploy on our scalable platform\n",
 "\n",
 "The Docker image you have created in this process, once integrated with your model files and inference code, can be deployed on our hosting platform with no additional changes. The local packages for task management and telemetry will be swapped with distributed versions.\n",
 "\n",
@@ -405,7 +437,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## 9. Resource cleanup\n",
+"## 10. Resource cleanup\n",
 "\n",
 "Don't forget to delete all the resources that you've set up to complete this lab afterwards, most importantly any VM or Azure Container Instances (ACI) instances, but also the Azure Container Registry (you can delete individual images stored there). You can do this in the Azure Portal or through the CLI."
 ]
