tf.nn.softmax
Computes softmax activations.
Main aliases

tf.math.softmax
tf.nn.softmax(
    logits, axis=None, name=None
)
Used for multi-class predictions. The softmax outputs sum to 1 along the chosen axis.
This function performs the equivalent of

softmax = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis, keepdims=True)
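As a rough sanity check, the sketch below (using arbitrary, illustrative input values) compares tf.nn.softmax against that manual computation; the two should agree up to floating-point noise.

```python
import tensorflow as tf

# Arbitrary 2-D logits, purely for illustration.
logits = tf.constant([[2.0, 1.0, 0.1],
                      [1.0, 3.0, 0.2]])

# Built-in softmax over the last axis (the default).
builtin = tf.nn.softmax(logits, axis=-1)

# The same computation written out with the formula above.
manual = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis=-1, keepdims=True)

# The largest elementwise difference should be ~0.
print(tf.reduce_max(tf.abs(builtin - manual)).numpy())
```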
Example usage:

softmax = tf.nn.softmax([-1, 0., 1.])
softmax
<tf.Tensor: shape=(3,), dtype=float32,
numpy=array([0.09003057, 0.24472848, 0.66524094], dtype=float32)>
sum(softmax)
<tf.Tensor: shape=(), dtype=float32, numpy=1.0>
Args

logits: A non-empty Tensor. Must be one of the following types: half, float32, float64.
axis: The dimension softmax would be performed on. The default is -1, which indicates the last dimension (see the sketch after this list).
name: A name for the operation (optional).
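A minimal sketch of how the axis argument changes which dimension is normalized (the tensor values are arbitrary illustrative inputs):

```python
import tensorflow as tf

x = tf.constant([[1.0, 2.0, 3.0],
                 [1.0, 1.0, 1.0]])

# Default axis=-1: softmax is taken across each row, so each row sums to 1.
row_softmax = tf.nn.softmax(x, axis=-1)
print(tf.reduce_sum(row_softmax, axis=-1).numpy())  # [1. 1.]

# axis=0: softmax is taken down each column, so each column sums to 1.
col_softmax = tf.nn.softmax(x, axis=0)
print(tf.reduce_sum(col_softmax, axis=0).numpy())  # [1. 1. 1.]
```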
Returns

A Tensor. Has the same type and shape as logits.
Raises

InvalidArgumentError: if logits is empty or axis is beyond the last dimension of logits.
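As a sketch of the error case (the exact error message can vary by TensorFlow version), an axis beyond the last dimension can be caught as follows:

```python
import tensorflow as tf

logits = tf.constant([1.0, 2.0, 3.0])

try:
    # axis=5 is beyond the last dimension of this 1-D tensor.
    tf.nn.softmax(logits, axis=5)
except tf.errors.InvalidArgumentError as err:
    print("Rejected out-of-range axis:", type(err).__name__)
```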