At the time of creating this issue, there is only a single way of deploying models and handling model deployment reconciliation: the flexserve library's TapisPods model deployment client. That client handles downloading the model itself. In the future, however, it will be MLHub's responsibility to furnish the model artifact, or at the very least to provide a path or URL from which to fetch it.

Once TapisPods and the flexserve model deployment client (or some other, as-yet-undeveloped client) are ready to hand over responsibility for model provisioning to MLHub, add back/uncomment the logic related to that task. All lines of code related to this issue will be annotated with a link to this GitHub issue.