Closed
Changes from 1 commit
77 commits
c57a2e7
Merge pull request #3 from tensorflow/master
JimClarke5 Oct 8, 2020
09fc07e
Merge pull request #4 from tensorflow/master
JimClarke5 Oct 27, 2020
a99dcb4
Merge pull request #5 from tensorflow/master
JimClarke5 Nov 17, 2020
ba294ea
Merge pull request #6 from tensorflow/master
JimClarke5 Nov 19, 2020
04f419a
Merge pull request #7 from tensorflow/master
JimClarke5 Dec 30, 2020
02e7ebf
Merge pull request #8 from tensorflow/master
JimClarke5 Jan 29, 2021
e0c9ed8
Merge pull request #9 from tensorflow/master
JimClarke5 Feb 1, 2021
5b0374b
Merge pull request #10 from tensorflow/master
JimClarke5 Feb 11, 2021
e038bbd
Merge pull request #11 from tensorflow/master
JimClarke5 Feb 23, 2021
def3051
Merge pull request #13 from tensorflow/master
JimClarke5 Mar 3, 2021
11748ae
Merge pull request #15 from tensorflow/master
JimClarke5 Mar 21, 2021
dc94953
Moved high level tf.nn ops to framework.
JimClarke5 Mar 26, 2021
1878b60
Added FrameworkOps analogous to Ops.
JimClarke5 Mar 26, 2021
9225a48
Added FrameworkOps analogous to Ops.
JimClarke5 Mar 27, 2021
caab79b
Move l2Normalize to MathOps
JimClarke5 Mar 27, 2021
bd072f4
Reformat code, fix javadocs
JimClarke5 Mar 27, 2021
a9412ea
Merge pull request #16 from tensorflow/master
JimClarke5 Apr 9, 2021
d29262b
Add confusionMatrix() method. add Unit test
JimClarke5 Apr 16, 2021
2ff8dfe
Merge pull request #17 from tensorflow/master
JimClarke5 Apr 22, 2021
ee5e38a
Merge pull request #18 from tensorflow/master
JimClarke5 May 1, 2021
26394d6
Merge pull request #19 from tensorflow/master
JimClarke5 May 2, 2021
e0a4a26
Moved high level tf.nn ops to framework.
JimClarke5 Mar 26, 2021
28db4df
Added FrameworkOps analogous to Ops.
JimClarke5 Mar 26, 2021
ba24371
Added FrameworkOps analogous to Ops.
JimClarke5 Mar 27, 2021
4d3f17c
Move l2Normalize to MathOps
JimClarke5 Mar 27, 2021
9e07483
Reformat code, fix javadocs
JimClarke5 Mar 27, 2021
790bf35
Add confusionMatrix() method. add Unit test
JimClarke5 Apr 16, 2021
b4ca97a
Added linalg methods for matmul
JimClarke5 May 2, 2021
e83d26b
add nn ops for sigmoidCrossEntropyWithLogits, softmaxCrossEntropyWith…
JimClarke5 May 2, 2021
e4e65f2
Moved SetOps to FrameworkOps
JimClarke5 May 2, 2021
a2ed723
Added tensordot and reduceLogSumExp
JimClarke5 May 2, 2021
be1fe66
Added frameworkOps for nn and linalg
JimClarke5 May 2, 2021
7b51e7f
Modified to use FrameworkOps
JimClarke5 May 2, 2021
f1c63c0
move nn.raw classes to nn in core, remove nn.raw
JimClarke5 May 2, 2021
f4b75b9
Merge remote-tracking branch 'origin/Framework_Ops' into Framework_Ops
JimClarke5 May 2, 2021
043654b
Update FrameworkOps.java
JimClarke5 May 2, 2021
06c28df
Fix unusual regression error in confustion matrix. Needed to reduceA…
JimClarke5 May 3, 2021
8f33d21
javadoc fixes
JimClarke5 May 3, 2021
a24b8ca
Setting all the optimizers to have useLocking = True (#310)
Craigacp May 4, 2021
94f5b15
Load TF library before computing TString size (#322)
karllessard May 17, 2021
743475d
Update README.md
karllessard May 19, 2021
3648a96
Fix sometimes generating Javadoc for scope param in Ops (#291)
rnett May 21, 2021
ceae489
Use spotless plugin for formating (#308)
rnett May 23, 2021
0f7274e
Quick fix for spotless (#324)
rnett May 24, 2021
ace917b
Temporarily disabling Linux MKL-GPU
karllessard May 26, 2021
3b4533c
Fix Scope name collisions (#248)
rnett May 28, 2021
daeb257
Native functions v2 (#233)
rnett May 31, 2021
19e1c8d
Spotless updates (#331)
rnett Jun 1, 2021
23d6f0b
activations, constraints, initializers, losses, regularizers: move Op…
JimClarke5 Jun 2, 2021
7b5a1ca
Skip tests in check-format job
karllessard Jun 8, 2021
cea76cd
Upgrade for TensorFlow 2.5.0 (#303)
saudet Jun 10, 2021
caed0e8
Skip implementation-less TF_InitKernel
karllessard Jun 11, 2021
b997f12
Upgrade TF version in current snapshots
karllessard Jun 11, 2021
031a0c1
SavedModelBundle leak fix (#335)
Craigacp Jun 11, 2021
b38cc04
Use OP_NAME constant instead of hard coding (#328)
rnett Jun 16, 2021
4d8d24f
Moved high level tf.nn ops to framework.
JimClarke5 Mar 26, 2021
e483792
Added FrameworkOps analogous to Ops.
JimClarke5 Mar 26, 2021
9480126
Added FrameworkOps analogous to Ops.
JimClarke5 Mar 27, 2021
074794b
Move l2Normalize to MathOps
JimClarke5 Mar 27, 2021
7526b7e
Reformat code, fix javadocs
JimClarke5 Mar 27, 2021
0a163c6
Add confusionMatrix() method. add Unit test
JimClarke5 Apr 16, 2021
c234b9a
Added linalg methods for matmul
JimClarke5 May 2, 2021
e024f4b
add nn ops for sigmoidCrossEntropyWithLogits, softmaxCrossEntropyWith…
JimClarke5 May 2, 2021
b108b06
Moved SetOps to FrameworkOps
JimClarke5 May 2, 2021
13b6f0f
Added tensordot and reduceLogSumExp
JimClarke5 May 2, 2021
f1dbb01
Added frameworkOps for nn and linalg
JimClarke5 May 2, 2021
6174a32
Modified to use FrameworkOps
JimClarke5 May 2, 2021
5523896
move nn.raw classes to nn in core, remove nn.raw
JimClarke5 May 2, 2021
b750dd2
Moved high level tf.nn ops to framework.
JimClarke5 Mar 26, 2021
4468be2
Added FrameworkOps analogous to Ops.
JimClarke5 Mar 26, 2021
eb64cd0
Move l2Normalize to MathOps
JimClarke5 Mar 27, 2021
134a11d
Reformat code, fix javadocs
JimClarke5 Mar 27, 2021
1f9626c
Update FrameworkOps.java
JimClarke5 May 2, 2021
7860a71
Fix unusual regression error in confustion matrix. Needed to reduceA…
JimClarke5 May 3, 2021
d967a99
javadoc fixes
JimClarke5 May 3, 2021
e84981f
Rebase with latest master
JimClarke5 Jun 17, 2021
f69e17e
Merge branch 'Framework_Ops' of https://github.com/JimClarke5/java in…
JimClarke5 Jun 17, 2021
add nn ops for sigmoidCrossEntropyWithLogits, softmaxCrossEntropyWithLogits and sparseSoftmaxCrossEntropyWithLogits
JimClarke5 committed May 2, 2021
commit e83d26b6cb7e4d44616efb1df249a310cabaebe2
@@ -1811,14 +1811,14 @@ public <T extends TNumber> Softmax<T> softmax(Operand<T> logits) {

/**
* Computes softmax cross entropy cost and gradients to backpropagate.
* <p>
* Inputs are the logits, not probabilities.
*
* @param <T> data type for {@code loss()} output
* @param <T> data type for {@code loss} output
* @param features batch_size x num_classes matrix
* @param labels batch_size x num_classes matrix
* The caller must ensure that each batch of labels represents a valid
* probability distribution.
* @param <T> data type for {@code SoftmaxCrossEntropyWithLogits} output and operands
* @return a new instance of SoftmaxCrossEntropyWithLogits
*/
public <T extends TNumber> SoftmaxCrossEntropyWithLogits<T> softmaxCrossEntropyWithLogits(
@@ -2011,18 +2011,17 @@ public <T extends TType> SpaceToDepth<T> spaceToDepth(Operand<T> input, Long blo

/**
* Computes softmax cross entropy cost and gradients to backpropagate.
* <p>
* Unlike `SoftmaxCrossEntropyWithLogits`, this operation does not accept
* Unlike {@code SoftmaxCrossEntropyWithLogits}, this operation does not accept
* a matrix of label probabilities, but rather a single label per row
* of features. This label is considered to have probability 1.0 for the
* given row.
* <p>
* Inputs are the logits, not probabilities.
* <p>Inputs are the logits, not probabilities.
*
* @param <T> data type for {@code loss()} output
* @param <T> data type for {@code loss} output
* @param features batch_size x num_classes matrix
* @param labels batch_size vector with values in [0, num_classes).
* This is the label for the given minibatch entry.
* @param <T> data type for {@code SparseSoftmaxCrossEntropyWithLogits} output and operands
* @return a new instance of SparseSoftmaxCrossEntropyWithLogits
*/
public <T extends TNumber> SparseSoftmaxCrossEntropyWithLogits<T> sparseSoftmaxCrossEntropyWithLogits(
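These generated endpoints live under the nn group of Ops, taking features first and labels second, as in the raw op factory. A minimal eager-mode sketch of a dense-label call (class name and sample values below are illustrative only, not part of this change):

import org.tensorflow.EagerSession;
import org.tensorflow.Operand;
import org.tensorflow.op.Ops;
import org.tensorflow.op.nn.SoftmaxCrossEntropyWithLogits;
import org.tensorflow.types.TFloat32;

public class SoftmaxXentExample {
  public static void main(String[] args) {
    try (EagerSession session = EagerSession.create()) {
      Ops tf = Ops.create(session);

      // batch_size x num_classes logits (unnormalized scores), not probabilities
      Operand<TFloat32> features = tf.constant(new float[][] {{2.0f, 1.0f, 0.1f},
                                                              {0.5f, 2.5f, 0.2f}});
      // each row of labels must be a valid probability distribution
      Operand<TFloat32> labels = tf.constant(new float[][] {{1.0f, 0.0f, 0.0f},
                                                            {0.0f, 1.0f, 0.0f}});

      SoftmaxCrossEntropyWithLogits<TFloat32> xent =
          tf.nn.softmaxCrossEntropyWithLogits(features, labels);

      Operand<TFloat32> loss = xent.loss();         // per-example loss, shape [2]
      Operand<TFloat32> backprop = xent.backprop(); // gradients, shape [2, 3]
    }
  }
}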
@@ -29,57 +29,68 @@

/**
* Computes softmax cross entropy cost and gradients to backpropagate.
* <p>
* Inputs are the logits, not probabilities.
*
* @param <T> data type for {@code loss()} output
*
* @param <T> data type for {@code loss} output
*/
@Operator(group = "nn")
@Operator(
group = "nn"
)
public final class SoftmaxCrossEntropyWithLogits<T extends TNumber> extends RawOp {

/**
* The name of this op, as known by TensorFlow core engine
*/
public static final String OP_NAME = "SoftmaxCrossEntropyWithLogits";

private Output<T> loss;

private Output<T> backprop;

private SoftmaxCrossEntropyWithLogits(Operation operation) {
super(operation);
int outputIdx = 0;
loss = operation.output(outputIdx++);
backprop = operation.output(outputIdx++);
}

/**
* Factory method to create a class wrapping a new SoftmaxCrossEntropyWithLogits operation.
*
*
* @param scope current scope
* @param features batch_size x num_classes matrix
* @param labels batch_size x num_classes matrix
* The caller must ensure that each batch of labels represents a valid
* probability distribution.
* @param <T> data type for {@code SoftmaxCrossEntropyWithLogits} output and operands
* @return a new instance of SoftmaxCrossEntropyWithLogits
*/
@Endpoint(describeByClass = true)
public static <T extends TNumber> SoftmaxCrossEntropyWithLogits<T> create(Scope scope, Operand<T> features, Operand<T> labels) {
@Endpoint(
describeByClass = true
)
public static <T extends TNumber> SoftmaxCrossEntropyWithLogits<T> create(Scope scope,
Operand<T> features, Operand<T> labels) {
OperationBuilder opBuilder = scope.env().opBuilder("SoftmaxCrossEntropyWithLogits", scope.makeOpName("SoftmaxCrossEntropyWithLogits"));
opBuilder.addInput(features.asOutput());
opBuilder.addInput(labels.asOutput());
opBuilder = scope.apply(opBuilder);
return new SoftmaxCrossEntropyWithLogits<>(opBuilder.build());
}

/**
* Gets loss.
* Per example loss (batch_size vector).
* @return loss.
*/
public Output<T> loss() {
return loss;
}

/**
* Gets backprop.
* backpropagated gradients (batch_size x num_classes matrix).
* @return backprop.
*/
public Output<T> backprop() {
return backprop;
}

/** The name of this op, as known by TensorFlow core engine */
public static final String OP_NAME = "SoftmaxCrossEntropyWithLogits";

private Output<T> loss;
private Output<T> backprop;

private SoftmaxCrossEntropyWithLogits(Operation operation) {
super(operation);
int outputIdx = 0;
loss = operation.output(outputIdx++);
backprop = operation.output(outputIdx);
}
}
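The regenerated class keeps the same public surface (OP_NAME, create, loss, backprop); only member order, annotations and formatting change. The tf.nn endpoint shown in the previous file is a thin wrapper around this create factory, so calling the factory directly with an explicit Scope is equivalent. A rough graph-mode sketch (class name and values are illustrative):

import org.tensorflow.Graph;
import org.tensorflow.Operand;
import org.tensorflow.Output;
import org.tensorflow.op.Ops;
import org.tensorflow.op.nn.SoftmaxCrossEntropyWithLogits;
import org.tensorflow.types.TFloat32;

public class RawSoftmaxXentExample {
  public static void main(String[] args) {
    try (Graph g = new Graph()) {
      Ops tf = Ops.create(g);
      Operand<TFloat32> features = tf.constant(new float[][] {{0.3f, 0.7f}, {0.9f, 0.1f}});
      Operand<TFloat32> labels   = tf.constant(new float[][] {{0.0f, 1.0f}, {1.0f, 0.0f}});

      // Same wiring as tf.nn.softmaxCrossEntropyWithLogits(features, labels),
      // but with the Scope passed explicitly.
      SoftmaxCrossEntropyWithLogits<TFloat32> op =
          SoftmaxCrossEntropyWithLogits.create(tf.scope(), features, labels);

      Output<TFloat32> loss = op.loss();          // per-example loss, shape [2]
      Output<TFloat32> backprop = op.backprop();  // gradients, shape [2, 2]
    }
  }
}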
@@ -29,61 +29,71 @@

/**
* Computes softmax cross entropy cost and gradients to backpropagate.
* <p>
* Unlike `SoftmaxCrossEntropyWithLogits`, this operation does not accept
* Unlike {@code SoftmaxCrossEntropyWithLogits}, this operation does not accept
* a matrix of label probabilities, but rather a single label per row
* of features. This label is considered to have probability 1.0 for the
* given row.
* <p>
* Inputs are the logits, not probabilities.
*
* @param <T> data type for {@code loss()} output
* <p>Inputs are the logits, not probabilities.
*
* @param <T> data type for {@code loss} output
*/
@Operator(group = "nn")
@Operator(
group = "nn"
)
public final class SparseSoftmaxCrossEntropyWithLogits<T extends TNumber> extends RawOp {

/**
* The name of this op, as known by TensorFlow core engine
*/
public static final String OP_NAME = "SparseSoftmaxCrossEntropyWithLogits";

private Output<T> loss;

private Output<T> backprop;

private SparseSoftmaxCrossEntropyWithLogits(Operation operation) {
super(operation);
int outputIdx = 0;
loss = operation.output(outputIdx++);
backprop = operation.output(outputIdx++);
}

/**
* Factory method to create a class wrapping a new SparseSoftmaxCrossEntropyWithLogits operation.
*
*
* @param scope current scope
* @param features batch_size x num_classes matrix
* @param labels batch_size vector with values in [0, num_classes).
* This is the label for the given minibatch entry.
* @param <T> data type for {@code SparseSoftmaxCrossEntropyWithLogits} output and operands
* @return a new instance of SparseSoftmaxCrossEntropyWithLogits
*/
@Endpoint(describeByClass = true)
public static <T extends TNumber> SparseSoftmaxCrossEntropyWithLogits<T> create(Scope scope, Operand<T> features, Operand<? extends TNumber> labels) {
@Endpoint(
describeByClass = true
)
public static <T extends TNumber> SparseSoftmaxCrossEntropyWithLogits<T> create(Scope scope,
Operand<T> features, Operand<? extends TNumber> labels) {
OperationBuilder opBuilder = scope.env().opBuilder("SparseSoftmaxCrossEntropyWithLogits", scope.makeOpName("SparseSoftmaxCrossEntropyWithLogits"));
opBuilder.addInput(features.asOutput());
opBuilder.addInput(labels.asOutput());
opBuilder = scope.apply(opBuilder);
return new SparseSoftmaxCrossEntropyWithLogits<>(opBuilder.build());
}

/**
* Gets loss.
* Per example loss (batch_size vector).
* @return loss.
*/
public Output<T> loss() {
return loss;
}

/**
* Gets backprop.
* backpropagated gradients (batch_size x num_classes matrix).
* @return backprop.
*/
public Output<T> backprop() {
return backprop;
}

/** The name of this op, as known by TensorFlow core engine */
public static final String OP_NAME = "SparseSoftmaxCrossEntropyWithLogits";

private Output<T> loss;
private Output<T> backprop;

private SparseSoftmaxCrossEntropyWithLogits(Operation operation) {
super(operation);
int outputIdx = 0;
loss = operation.output(outputIdx++);
backprop = operation.output(outputIdx);
}
}
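The sparse variant differs from the dense op only in the label encoding: a single class index per row, each in [0, num_classes), rather than a probability row. A hypothetical eager-mode sketch of the generated endpoint (class name and values are illustrative):

import org.tensorflow.EagerSession;
import org.tensorflow.Operand;
import org.tensorflow.op.Ops;
import org.tensorflow.op.nn.SparseSoftmaxCrossEntropyWithLogits;
import org.tensorflow.types.TFloat32;
import org.tensorflow.types.TInt32;

public class SparseXentExample {
  public static void main(String[] args) {
    try (EagerSession session = EagerSession.create()) {
      Ops tf = Ops.create(session);

      // batch_size x num_classes matrix of unnormalized log probabilities
      Operand<TFloat32> features = tf.constant(new float[][] {{2.0f, 0.5f, 0.3f},
                                                              {0.1f, 0.2f, 3.0f}});
      // one class index per example, each in [0, num_classes)
      Operand<TInt32> labels = tf.constant(new int[] {0, 2});

      SparseSoftmaxCrossEntropyWithLogits<TFloat32> xent =
          tf.nn.sparseSoftmaxCrossEntropyWithLogits(features, labels);

      Operand<TFloat32> loss = xent.loss();         // per-example loss, shape [2]
      Operand<TFloat32> backprop = xent.backprop(); // gradients, shape [2, 3]
    }
  }
}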
@@ -87,7 +87,7 @@ public class NnOps {
* @param logits the logits of type float32 or float64
* @param <T> the type of labels and logits
* @return the component-wise logistic losses.
* @throws IllegalArgumentException if logits' and labels' do not have the same shape
* @throws IllegalArgumentException if logits and labels do not have the same shape
*/
public <T extends TNumber> Operand<T> sigmoidCrossEntropyWithLogits(
Operand<T> labels, Operand<T> logits) {
@@ -139,7 +139,6 @@ public <T extends TNumber> Operand<T> sigmoidCrossEntropyWithLogits(
* @return the softmax cross entropy loss. Its type is the same as {@code logits} and its shape is
* the same as {@code labels} except that it does not have the last dimension of {@code
* labels}.
*
*/
public <T extends TNumber, U extends TNumber> Operand<T> softmaxCrossEntropyWithLogits(
Operand<U> labels, Operand<T> logits, int axis) {
@@ -181,14 +180,14 @@ public <T extends TNumber, U extends TNumber> Operand<T> softmaxCrossEntropyWith
* @param logits Per-label activations (typically a linear output) of shape {@code [d_0, d_1, ...,
* d_{r-1}, numClasses]} and dataType of {@code TFloat16}, {@code TFloat32}, or {@code
* TFloat64}. These activation energies are interpreted as unnormalized log probabilities.
* @param <T> The data type for the labels
* @param <U> The data type for the logits and loss
* @param <U> the data type for the labels
* @param <T> the data type for the loss and logits
* @return the loss
* @throws IllegalArgumentException If logits are scalars (need to have {@code rank >= 1}) or if the rank
* of the labels is not equal to the rank of the logits minus one.
* @throws IllegalArgumentException If logits are scalars (need to have {@code rank >= 1}) or if
* the rank of the labels is not equal to the rank of the logits minus one.
*/
public <T extends TNumber, U extends TNumber> Operand<U> sparseSoftmaxCrossEntropyWithLogits(
Operand<T> labels, Operand<U> logits) {
public <T extends TNumber, U extends TNumber> Operand<T> sparseSoftmaxCrossEntropyWithLogits(
Operand<U> labels, Operand<T> logits) {
return SparseSoftmaxCrossEntropyWithLogits.sparseSoftmaxCrossEntropyWithLogits(
scope, labels, logits);
}
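Note that these framework-level wrappers take (labels, logits), the reverse of the raw op's (features, labels), and return a plain Operand rather than the op class. Assuming the FrameworkOps entry point introduced earlier in this PR exposes them through an nn group via FrameworkOps.create(Ops) (the package, factory, and accessor names below are assumptions based on the commit titles, not shown in this diff), usage might look like this sketch:

import org.tensorflow.EagerSession;
import org.tensorflow.Operand;
import org.tensorflow.framework.op.FrameworkOps; // assumed package, per the FrameworkOps commits
import org.tensorflow.op.Ops;
import org.tensorflow.types.TFloat32;
import org.tensorflow.types.TInt32;

public class FrameworkNnExample {
  public static void main(String[] args) {
    try (EagerSession session = EagerSession.create()) {
      Ops tf = Ops.create(session);
      // Assumption: FrameworkOps.create(Ops) and a public `nn` group, mirroring core Ops.
      FrameworkOps fops = FrameworkOps.create(tf);

      Operand<TFloat32> logits = tf.constant(new float[][] {{2.0f, 0.5f, 0.3f},
                                                            {0.1f, 0.2f, 3.0f}});
      Operand<TInt32> labels = tf.constant(new int[] {0, 2});

      // (labels, logits) order, the reverse of the raw op's (features, labels).
      Operand<TFloat32> loss = fops.nn.sparseSoftmaxCrossEntropyWithLogits(labels, logits);
    }
  }
}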
@@ -26,8 +26,7 @@ public class SigmoidCrossEntropyWithLogits {
* independent and not mutually exclusive. For instance, one could perform multilabel
* classification where a picture can contain both an elephant and a dog at the same time.
*
* <p>For brevity, let {@code x = logits}, {@code z = labels}. The logistic loss in
* pseudo-code is
* <p>For brevity, let {@code x = logits}, {@code z = labels}. The logistic loss in pseudo-code is
*
* <pre>
* z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
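That pseudo-code is the naive form of the logistic loss and overflows for large |x| because sigmoid(x) saturates; the algebraically equivalent max(x, 0) - x * z + log(1 + exp(-abs(x))) stays finite. A small plain-Java check, independent of TensorFlow and purely illustrative, that the two forms agree where both are finite:

// Illustrative only: compares the naive logistic loss with the numerically
// stable rewrite, for scalar inputs where both are finite.
public final class SigmoidXentCheck {

  static double naive(double x, double z) {
    double sig = 1.0 / (1.0 + Math.exp(-x));
    return z * -Math.log(sig) + (1.0 - z) * -Math.log(1.0 - sig);
  }

  static double stable(double x, double z) {
    return Math.max(x, 0.0) - x * z + Math.log1p(Math.exp(-Math.abs(x)));
  }

  public static void main(String[] args) {
    double x = 1.5; // logit
    double z = 1.0; // label
    System.out.printf("naive=%.6f stable=%.6f%n", naive(x, z), stable(x, z));
    // Both print ~0.201413; for x = 1000 the naive form returns Infinity or NaN
    // (depending on the label) while the stable form stays finite.
  }
}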