This repository was archived by the owner on Jan 3, 2023. It is now read-only.

Commit de95e94

Readme changes (#70)

* Correct formatting issues in readme.
* Clarify custom base image building instruction.

1 parent 0dde9cd commit de95e94

File tree: 3 files changed (+19, −17 lines)

.gitignore

Lines changed: 4 additions & 1 deletion

```diff
@@ -335,4 +335,7 @@ Notebooks/.ipynb_checkpoints/template-demo-checkpoint.ipynb
 .DS_Store
 
 # Jupyter notebook
-.ipynb_checkpoints/
+.ipynb_checkpoints/
+
+# IDE
+.idea
```

Examples/base-py/runserver.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -10,7 +10,7 @@
 print('Creating Application')
 app = Flask(__name__)
 
-# Use the AI4EAppInsights library to send log messages. NOT REQURIED
+# Use the AI4EAppInsights library to send log messages. NOT REQUIRED
 log = AI4EAppInsights()
 
 # Use the APIService to executes your functions within a logging trace, supports long-running/async functions,
```

README.md

Lines changed: 14 additions & 15 deletions

```diff
@@ -7,8 +7,8 @@ These images and examples are meant to illustrate how to build containers for us
 - 1.14-cuda-9.0 - nvidia/cuda:9.0-runtime-ubuntu16.04
 - 1.14-cuda-9.0-devel - nvidia/cuda:9.0-devel-ubuntu16.04
 - The base-py image can be built using any Ubuntu image of your choice by building with the optional BASE_IMAGE build argument.
-  - Example of how to build with the CUDA 9.0 devel image:
-  - docker build . -f base-py/Dockerfile -t base-py:1.13-cuda-9.0-devel --build-arg BASE_IMAGE=nvidia/cuda:9.0-devel-ubuntu16.04
+  - Example of how to build with the CUDA 9.0 devel image (inside [Containers](./Containers)):
+  - `docker build . -f base-py/Dockerfile -t base-py:1.13-cuda-9.0-devel --build-arg BASE_IMAGE=nvidia/cuda:9.0-devel-ubuntu16.04`
 
 - [mcr.microsoft.com/aiforearth/blob-py](https://hub.docker.com/_/microsoft-aiforearth-blob-python)
   - [Available Tags](https://mcr.microsoft.com/v2/aiforearth/blob-python/tags/list)
```
```diff
@@ -38,8 +38,7 @@ To view the license for cuDNN included in the cuda base image, click [here](http
 
 ## Contents
 1. [Repo Layout](#repo-layout)
-2. [Quickstart](#Quickstart)
-3. [Quickstart Tutorial](#Quickstart-Tutorial)
+2. [Quickstart Tutorial](#Quickstart-Tutorial)
    1. [Choose a base image or example](#Choose-a-base-image-or-example)
    2. [Insert code to call your model](#Insert-code-to-call-your-model)
    3. [Input handling](#Input-handling)
```
```diff
@@ -53,7 +52,7 @@ To view the license for cuDNN included in the cuda base image, click [here](http
    11. [Publish to Azure Container Registry](#Publish-to-Azure-Container-Registry)
    12. [Run your container in ACI](#Run-your-container-in-ACI)
    13. [FAQs](#FAQs)
-4. [Contributing](#Contributing)
+3. [Contributing](#Contributing)
 
 ## Repo Layout
 - Containers
```
```diff
@@ -118,18 +117,19 @@ AI for Earth APIs are all built from an AI for Earth base image. You may use a
 In general, if you're using Python, you will want to use an image or example with the base-py or blob-py images. If you are using R, you will want to use an image or example with the base-r or blob-r images. The difference between them: the blob-* image contains everything that the cooresponding base-* image contains, plus additional support for mounting [Azure blob storage](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction). This may be useful if you need to process (for example) a batch of images all at once; you can upload them all to Azure blob storage, the container in which your model is running can mount that storage, and access it like it is local storage.
 
 ## Asynchronous (async) vs. Synchronous (sync) Endpoint
-In addition to your language choice, you should think about whether your API call should be synchronous or asynchronous. A synchronous API call will invoke your model, get results, and return immediately. This is a good paradigm to use if you want to perform classification with your model on a single image, for example. An asynchronous API call should be used for long-running tasks, like processing a whole folder of images, performing object detection on each image with your model, and storing the results.
+In addition to your language choice, think about whether your API call should be synchronous or asynchronous.
+- A synchronous API call will invoke your model, get results, and return immediately. This is a good paradigm to use if you want to perform classification with your model on a single image, for example.
+- An asynchronous API call should be used for long-running tasks, like processing a whole folder of images using your model and storing the results, or constructing a forecasting model from historical data that the user provides.
 
 ### Asynchronous Implementation Examples
 The following examples demonstrate async endpoints:
-- [base-py](./Examples/base-py/runserver.py)'s / endpoint
+- [base-py](./Examples/base-py/runserver.py)'s `example` endpoint
 - [base-r](./Examples/base-r/my_api/api_example.R)
 - [tensorflow](./Examples/tensorflow/tf_iNat_api/runserver.py)
 
 ### Synchronous Implementation Examples
 The following examples demonstrate sync endpoints:
-- [base-py](./Examples/base-py/runserver.py)'s echo endpoint
-- [customvision-sample](./Examples/customvision-sample/custom_vision_api/runserver.py)
+- [base-py](./Examples/base-py/runserver.py)'s `echo` endpoint
 - [pytorch](./Examples/pytorch/pytorch_api/runserver.py)
 
 ## Input/Output Patterns
```
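The sync/async distinction this hunk describes can be sketched with plain Flask. This is a minimal illustration only: the route names, the in-memory task store, and the raw `threading` usage are assumptions for the sketch, not the AI4E wrapper API used in the actual examples.

```python
# Sketch: a synchronous endpoint returns results immediately,
# an asynchronous endpoint returns a taskId and works in the background.
# Route names and the task store are illustrative, not the AI4E API.
import threading
import uuid

from flask import Flask, jsonify

app = Flask(__name__)
tasks = {}  # in-memory task store; for illustration only

@app.route("/classify", methods=["POST"])
def classify():
    # Synchronous: invoke the model, get results, return at once.
    result = {"label": "example", "score": 0.9}  # stand-in for a model call
    return jsonify(result)

@app.route("/process-batch", methods=["POST"])
def process_batch():
    # Asynchronous: hand back a taskId immediately, run the job in background.
    task_id = str(uuid.uuid4())
    tasks[task_id] = {"status": "running"}

    def work():
        # Long-running processing would go here.
        tasks[task_id] = {"status": "completed"}

    threading.Thread(target=work).start()
    return jsonify({"taskId": task_id}), 202

@app.route("/task/<task_id>")
def task_status(task_id):
    # Callers poll this endpoint to track their request.
    return jsonify(tasks.get(task_id, {"status": "not found"}))
```

A caller of the sync endpoint blocks until the model result arrives; a caller of the async endpoint gets a 202 with a taskId and polls `/task/<taskId>` for progress.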
```diff
@@ -141,18 +141,18 @@ While input patterns can be used for sync or async designs, your output design i
 
 #### Binary Input
 Many applications of AI apply models to image/binary inputs. Here are some approaches:
-- Send the image directly via request data. See the [tensorflow](./examples/tensorflow/tf_iNat_api/runserver.py) example to see how it is accomplished.
+- Send the image directly via request data. See the [tensorflow](./Examples/tensorflow/tf_iNat_api/runserver.py) example to see how it is accomplished.
 - Upload your binary input to an Azure Blob, create a [SAS key](https://docs.microsoft.com/en-us/azure/storage/common/storage-dotnet-shared-access-signature-part-1), and add a JSON field for it.
 - If you would like users to use your own Azure blob storage, we provide tools to [mount blobs as local drives](https://github.com/Azure/azure-storage-fuse) within your service. You may then use this virtual file system, locally.
 - Serializing your payload is a very efficient method for transmission. [BSON](http://bsonspec.org/) is an open standard, binary-encoded serialization for such purposes.
 
 ### Asynchronous Pattern
-The preferred way of handling asynchronous API calls is to provide a task status endpoint to your users. When a request is submitted, a new taskId is immediately returned to the caller to track the status of their request as it is processed.
+The preferred way of handling asynchronous API calls is to provide a task status endpoint to your users. When a request is submitted, a new `taskId` is immediately returned to the caller to track the status of their request as it is processed.
 
 We have several tools to help with task tracking that you can use for local development and testing. These tools create a database within the service instance and are not recommended for production use.
 
 Once a task is completed, the user needs to retrieve the result of their service call. This can be accomplished in several ways:
-- Return a SAS-keyed URL to an Azure Blob Container via a call to the task endpoint.
+- Return a SAS-keyed URL to an Azure Blob Container via a call to the `task` endpoint.
 - Request that a writable SAS-keyed URL is provided as input to your API call. Indicate completion via the task interface and write the output to that URL.
 - If you would like users to use your own Azure blob storage, you can write directly to a virtually-mounted drive.
```
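The task lifecycle described in this hunk (a new taskId on submission, status updates while the job runs, result retrieval on completion) can be sketched as a small in-memory tracker. This is illustrative only; the AI4E tools provide their own task-tracking implementation, and an in-memory store is not suitable for production.

```python
# Sketch of the taskId lifecycle: submit -> poll status -> fetch result.
# Class and method names are assumptions for illustration.
import uuid

class TaskTracker:
    """Minimal in-memory task tracker (illustration, not for production)."""

    def __init__(self):
        self._tasks = {}

    def submit(self):
        # Caller gets a taskId back immediately, before any work is done.
        task_id = str(uuid.uuid4())
        self._tasks[task_id] = {"status": "created", "result": None}
        return task_id

    def update(self, task_id, status, result=None):
        # The service updates the task as the long-running job progresses.
        task = self._tasks[task_id]
        task["status"] = status
        if result is not None:
            task["result"] = result

    def get_status(self, task_id):
        # Callers poll this; the result field could hold e.g. a SAS-keyed URL.
        return self._tasks.get(task_id, {"status": "not found"})

# Typical lifecycle:
tracker = TaskTracker()
tid = tracker.submit()          # returned to the caller at once
tracker.update(tid, "running")  # long-running work begins
tracker.update(tid, "completed", result="<SAS-keyed output URL>")
```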

```diff
@@ -291,7 +291,7 @@ Each decorator contains the following parameters:
 - ```maximum_concurrent_requests = 5```: If the number of requests exceed this limit, a 503 is returned to the caller.
 - ```content_types = ['application/json']```: An array of accepted content types. If the requested type is not found in the array, a 503 will be returned.
 - ```content_max_length = 1000```: The maximum length of the request data (in bytes) permitted. If the length of the data exceeds this setting, a 503 will be returned.
--```trace_name = 'post:my_long_running_funct'```: A trace name to associate with this function. This allows you to search logs and metrics for this particular function.
+- ```trace_name = 'post:my_long_running_funct'```: A trace name to associate with this function. This allows you to search logs and metrics for this particular function.
 
 ## Create AppInsights instrumentation keys
 [Application Insights](https://docs.microsoft.com/en-us/azure/application-insights/app-insights-overview) is an Azure service for application performance management. We have integrated with Application Insights to provide advanced monitoring capabilities. You will need to generate both an Instrumentation key and an API key to use in your application.
```
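The guard behavior those decorator parameters describe (503 on too many concurrent requests, an unaccepted content type, or an oversized body) can be sketched as a plain Python decorator. This is an illustration of the semantics only, not the AI4E wrapper's actual implementation.

```python
# Sketch of a request guard with the parameters listed in the hunk above.
# The decorator name and the (content_type, data) calling convention are
# assumptions for illustration; the AI4E library's decorators differ.
import functools
import threading

def request_guard(maximum_concurrent_requests=5,
                  content_types=('application/json',),
                  content_max_length=1000):
    # Bounded semaphore caps how many requests run the function at once.
    semaphore = threading.BoundedSemaphore(maximum_concurrent_requests)

    def decorator(func):
        @functools.wraps(func)
        def wrapper(content_type, data, *args, **kwargs):
            if content_type not in content_types:
                return 503, "unsupported content type"
            if len(data) > content_max_length:
                return 503, "request body too large"
            if not semaphore.acquire(blocking=False):
                return 503, "too many concurrent requests"
            try:
                return 200, func(content_type, data, *args, **kwargs)
            finally:
                semaphore.release()
        return wrapper
    return decorator

@request_guard(content_max_length=16)
def echo(content_type, data):
    # Stand-in endpoint body: just echo the request data back.
    return data
```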
````diff
@@ -324,7 +324,6 @@ Now, let's look at the Dockerfile in your code. Update the Dockerfile to instal
 ```Dockerfile
 RUN /usr/local/envs/ai4e_py_api/bin/pip install grpcio opencensus
 ```
-```
 
 - apt-get
 ```Dockerfile
````
````diff
@@ -410,7 +409,7 @@ In the above command, the -p switch designates the local port mapping to the con
 ```Dockerfile
 EXPOSE 80
 ```
-TIP: Depending on your git settings and your operating system, the "docker run" command may fail with the error 'standard_init_linux.go:190: exec user process caused "no such file or directory"'. If this happens, you need to change the end-of-line characters in startup.sh to LF. One way to do this is using VS Code; open the startup.sh file and click on CRLF in the bottom right corner in the blue bar and select LF instead, then save.
+TIP: Depending on your git settings and your operating system, the "docker run" command may fail with the error `standard_init_linux.go:190: exec user process caused "no such file or directory"`. If this happens, you need to change the end-of-line characters in startup.sh to LF. One way to do this is using VS Code; open the startup.sh file and click on CRLF in the bottom right corner in the blue bar and select LF instead, then save.
 
 If you find that there are errors and you need to go back and rebuild your docker container, run the following commands:
 ```Bash
````
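As an alternative to the VS Code step described in the TIP in that hunk, the CRLF-to-LF conversion can be scripted. A minimal sketch; the file name is whatever script fails in your container (here assumed to be `startup.sh`).

```python
# Convert Windows CRLF line endings to Unix LF in place.
# Equivalent to the VS Code "CRLF -> LF, then save" step in the TIP.
from pathlib import Path

def crlf_to_lf(path):
    p = Path(path)
    data = p.read_bytes().replace(b"\r\n", b"\n")
    p.write_bytes(data)
    return data

# Example: crlf_to_lf("startup.sh") before rebuilding the image.
```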

0 commit comments
