Add Op (instance_norm) | feat(torchlib) #1284
Merged
Conversation
BowenBao added a commit that referenced this pull request Feb 29, 2024
BowenBao added a commit to pytorch/pytorch that referenced this pull request Feb 29, 2024
Otherwise, instance_norm is decomposed into batch_norm with training set to True, and the downstream exporter has no way to tell that training is not actually needed. ONNX does define an InstanceNormalization operator, but because of the decomposition the model unnecessarily exports as batch norm plus glue code. Depends on microsoft/onnxscript#1284 [ghstack-poisoned]
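The equivalence this decomposition relies on can be illustrated outside the exporter: instance norm over an (N, C, H, W) input is the same as training-mode batch norm applied to the input reshaped to (1, N*C, H, W). A minimal NumPy sketch of that identity (function names and the epsilon value are illustrative, not from this PR):

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    # Normalize each (n, c) slice over its own spatial dims.
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def batch_norm_train(x, eps=1e-5):
    # Training-mode batch norm: per-channel stats over batch and spatial dims.
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.rand(2, 3, 4, 5).astype(np.float32)
# Reshaping to (1, N*C, H, W) turns each (n, c) slice into its own
# "channel", so batch-norm statistics become per-instance statistics.
y_bn = batch_norm_train(x.reshape(1, -1, 4, 5)).reshape(x.shape)
assert np.allclose(instance_norm(x), y_bn, atol=1e-5)
```

This is why the decomposed form still computes the right values; the cost is that the exported graph contains training-mode batch norm plus reshape glue instead of a single InstanceNormalization node.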
Test Results: 24 files ±0, 24 suites ±0, 1h 35m 3s ⏱️ (-9m 18s). For more details on these failures, see this check. Results for commit c4e1912; comparison against base commit 457e52e.
titaiwangms approved these changes Feb 29, 2024
BowenBao added a commit that referenced this pull request Feb 29, 2024
pytorchmergebot pushed a commit to pytorch/pytorch that referenced this pull request Mar 1, 2024
Otherwise, instance_norm is decomposed into batch_norm with training set to True, and the downstream exporter has no way to tell that training is not actually needed. ONNX does define an InstanceNormalization operator, but because of the decomposition the model unnecessarily exports as batch norm plus glue code. Depends on microsoft/onnxscript#1284. Pull Request resolved: #120866. Approved by: https://github.com/thiagocrepaldi, https://github.com/titaiwangms
justinchuby reviewed Mar 5, 2024
batch_size = op.Shape(input, start=0, end=1)
bn_input = op.Reshape(input, op.Concat([1, -1], op.Shape(input, start=2), axis=0))
weight = op.Tile(weight, batch_size)
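The three lines above can be read as the following NumPy sketch (a loose translation of the ONNX ops, not the actual onnxscript code; the sample shapes and weight values are made up):

```python
import numpy as np

x = np.random.rand(2, 3, 4, 5).astype(np.float32)          # (N, C, H, W)
weight = np.array([0.5, 1.0, 2.0], dtype=np.float32)       # per-channel scale, shape (C,)

batch_size = x.shape[0]                    # op.Shape(input, start=0, end=1)
bn_input = x.reshape(1, -1, *x.shape[2:])  # op.Reshape to (1, N*C, H, W)
bn_weight = np.tile(weight, batch_size)    # op.Tile(weight, batch_size): shape (N*C,)
assert bn_input.shape == (1, 6, 4, 5) and bn_weight.shape == (6,)
```

Each instance becomes its own batch-norm "channel", so the per-channel weight has to be repeated once per batch element, which is why Tile (not Expand) is needed here.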
Collaborator
Curious: When should we use Tile vs Expand? Is there a difference here?
Contributor (Author)
Expand only works when the source dimension size is either 1 or equal to the target dimension size. Tile, on the other hand, is like repeat: it duplicates the data along each axis. Tile and Expand are equivalent only when the source dimension size is 1.
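The distinction can be demonstrated with the NumPy analogues of these ops, np.tile (ONNX Tile) and np.broadcast_to (ONNX Expand):

```python
import numpy as np

w = np.array([1.0, 2.0, 3.0])

# Tile repeats the data: a length-3 vector tiled twice becomes length 6.
tiled = np.tile(w, 2)                # [1. 2. 3. 1. 2. 3.]

# Expand/broadcast only stretches size-1 dimensions; it cannot turn 3 into 6.
try:
    np.broadcast_to(w, (6,))
except ValueError:
    pass  # broadcasting 3 -> 6 is invalid

# The two agree only when the source dimension size is 1.
one = np.array([7.0])
assert np.array_equal(np.tile(one, 4), np.broadcast_to(one, (4,)))
```

In the snippet under review the weight has shape (C,) with C generally greater than 1, so Expand cannot produce the required (N*C,) tensor and Tile is the right choice.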