softmax Questions
4
I tried to implement softmax with the following code (out_vec is a numpy vector of floats):
import numpy as np

numerator = np.exp(out_vec)            # element-wise exponentials; can overflow for large entries
denominator = np.sum(np.exp(out_vec))  # normalizing constant
out_vec = numerator / denominator
How...
2
Solved
The difference between these two functions has been described in this PyTorch post: What is the difference between log_softmax and softmax?
Softmax is: exp(x_i) / exp(x).sum(),
and log softmax is: log...
Electroencephalogram asked 12/3, 2018 at 13:34
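For reference, a minimal numpy sketch (my own, not from the linked post) of the relationship: log softmax is x_i - log(sum_j exp(x_j)), computed directly rather than as the log of an already-normalized softmax, which is more numerically stable.

import numpy as np

def softmax(x):
    # shift by the max for numerical stability; softmax is shift-invariant
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

def log_softmax(x):
    # log(softmax(x)) computed directly as x_i - log(sum_j exp(x_j))
    z = x - np.max(x)
    return z - np.log(np.exp(z).sum())

x = np.array([1.0, 2.0, 3.0])
print(np.allclose(np.log(softmax(x)), log_softmax(x)))  # True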
4
I trained a CNN model for just one epoch with very little data. I use Keras 2.0.5.
Here are the CNN model's (partial) last 2 layers; number_outputs = 201. The training data output is one-hot encoded, 201 o...
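For context, a hedged sketch of the usual Keras 2 pairing for this setup: one-hot targets with a 201-way softmax output and categorical_crossentropy. The hidden-layer size and input shape below are hypothetical.

from keras.models import Sequential
from keras.layers import Dense

number_outputs = 201

model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(1024,)))  # hypothetical hidden layer
model.add(Dense(number_outputs, activation='softmax'))          # one unit per class
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])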
1
Solved
I have a problem classifying the MNIST dataset with a fully connected deep neural net with 2 hidden layers in PyTorch.
I want to use tanh as the activation in both hidden layers, but in the end, I shoul...
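A minimal sketch of one common resolution, assuming standard MNIST shapes: keep tanh in the hidden layers, output raw logits, and let nn.CrossEntropyLoss apply log_softmax internally, so no explicit softmax at the end.

import torch
import torch.nn as nn

# hypothetical sizes: 784 inputs (28x28 MNIST), two hidden layers, 10 classes
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.Tanh(),
    nn.Linear(256, 128),
    nn.Tanh(),
    nn.Linear(128, 10),   # raw logits; no softmax here
)
criterion = nn.CrossEntropyLoss()  # applies log_softmax + NLL internally

x = torch.randn(32, 784)               # dummy batch
target = torch.randint(0, 10, (32,))   # integer class labels
loss = criterion(model(x), target)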
2
Solved
I'm trying to apply the concept of distillation, basically to train a new smaller network to do the same as the original one but with less computation.
I have the softmax outputs for every sample ...
Tresa asked 29/5, 2017 at 7:6
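A hedged sketch of the usual temperature-based distillation loss (following Hinton et al.), in PyTorch, assuming the stored teacher softmax outputs were softened at the same temperature T:

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_probs, T=2.0):
    # soften the student with the same temperature as the teacher targets,
    # then match the two distributions with KL divergence (scaled by T^2)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, teacher_probs, reduction='batchmean') * (T * T)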
3
Solved
I am trying to implement softmax at the end of a CNN. The output I get is NaN and zeros. I am giving high input values to softmax, around 10-20k: I'm giving an array X=[2345,3456,6543,-6789,-9234]...
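np.exp overflows to inf for inputs much above ~709 in float64, so inputs in the thousands yield inf/inf = NaN. The standard fix is to subtract the maximum first, which leaves the result unchanged because softmax is shift-invariant. A sketch with the array from the question:

import numpy as np

X = np.array([2345, 3456, 6543, -6789, -9234], dtype=np.float64)

# softmax(x) == softmax(x - c) for any constant c; use c = max(x)
z = X - np.max(X)
probs = np.exp(z) / np.exp(z).sum()
print(probs)  # finite and sums to 1; the largest logit takes essentially all the mass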
2
Solved
test_generator = test_datagen.flow_from_directory(
    test_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='categorical')
test_loss, test_acc = model.evaluate_generator(test_generator, step...
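A hedged sketch of the usual completion, assuming Keras 2's DirectoryIterator (which exposes samples and batch_size attributes): steps is the number of batches to draw, so one full pass over the test set is ceil(samples / batch_size).

import math

steps = math.ceil(test_generator.samples / test_generator.batch_size)
test_loss, test_acc = model.evaluate_generator(test_generator, steps=steps)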
1
Solved
I have a logistic regression model using PyTorch 0.4.0, where my input is high-dimensional and my output must be a scalar: 0, 1, or 2.
I'm using a linear layer combined with a softmax layer to ret...
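One common pattern, sketched under the assumption that the three outcomes are treated as classes: a single linear layer producing 3 logits with nn.CrossEntropyLoss (which applies log_softmax itself), then argmax at inference for the scalar class id. The input size below is hypothetical.

import torch
import torch.nn as nn

input_dim = 1000                     # hypothetical input dimensionality
model = nn.Linear(input_dim, 3)      # 3 logits, one per class in {0, 1, 2}
criterion = nn.CrossEntropyLoss()    # applies log_softmax internally

x = torch.randn(8, input_dim)
y = torch.randint(0, 3, (8,))
loss = criterion(model(x), y)

pred = model(x).argmax(dim=1)        # scalar prediction per sample: 0, 1 or 2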
1
I have noticed that tf.nn.softmax_cross_entropy_with_logits_v2(labels, logits) mainly performs 3 operations:
Apply softmax to the logits (y_hat) in order to normalize them: y_hat_softmax = softma...
Nurture asked 20/3, 2018 at 6:19
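Those three operations can be reproduced by hand; a small numpy sketch mirroring them (names follow the question):

import numpy as np

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])   # one-hot targets

# 1) softmax-normalize the logits
y_hat_softmax = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
# 2) take the log of the normalized probabilities
log_probs = np.log(y_hat_softmax)
# 3) cross-entropy per example: negative sum of label-weighted log-probabilities
loss = -np.sum(labels * log_probs, axis=1)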
5
I am trying to apply a softmax function to a numpy array. But I am not getting the desired results. This is the code I have tried:
import numpy as np
x = np.array([[1001,1002],[3,4]])
softmax =...
Brammer asked 8/4, 2017 at 4:20
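With entries like 1001 and 1002, np.exp overflows immediately. A row-wise version that subtracts each row's maximum (softmax is shift-invariant) avoids this:

import numpy as np

x = np.array([[1001, 1002], [3, 4]], dtype=np.float64)

# subtract each row's max, then normalize along the rows
z = x - x.max(axis=1, keepdims=True)
softmax = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
print(softmax)  # [[0.2689, 0.7311], [0.2689, 0.7311]]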
2
Solved
I currently have text inputs represented by vectors, and I want to classify their categories. Because they are multi-level categories, I intend to use hierarchical softmax.
Example:
- Computer...
Silkstocking asked 15/11, 2017 at 17:22
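A minimal two-level sketch of the idea (all sizes and weights below are hypothetical): one softmax picks the parent category, a second softmax picks a child within that parent, and the leaf probability is the product of the two.

import numpy as np

def softmax(v):
    z = v - v.max()
    e = np.exp(z)
    return e / e.sum()

# hypothetical two-level hierarchy: 3 parent categories, 4 children each
rng = np.random.default_rng(0)
h = rng.normal(size=16)                 # input representation (hypothetical size)
W_parent = rng.normal(size=(3, 16))     # parent-level weights
W_child = rng.normal(size=(3, 4, 16))   # per-parent child-level weights

p_parent = softmax(W_parent @ h)        # P(parent | h)
parent = 1                              # score the children of one parent
p_child = softmax(W_child[parent] @ h)  # P(child | parent, h)

# leaf probability factorizes: P(leaf) = P(parent) * P(child | parent)
p_leaf = p_parent[parent] * p_child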
2
Solved
I am trying to compute the derivative of the activation function for softmax. I found this: https://math.stackexchange.com/questions/945871/derivative-of-softmax-loss-function but nobody seems t...
Kirkkirkcaldy asked 13/6, 2016 at 13:24
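For a softmax output s, the derivative is ds_i/dx_j = s_i * (delta_ij - s_j), i.e. the Jacobian diag(s) - s s^T. A numpy sketch, checked against finite differences:

import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def softmax_jacobian(x):
    s = softmax(x)
    # d s_i / d x_j = s_i * (delta_ij - s_j)  ->  diag(s) - outer(s, s)
    return np.diag(s) - np.outer(s, s)

x = np.array([1.0, 2.0, 3.0])
J = softmax_jacobian(x)

# finite-difference check: column j holds d softmax / d x_j
eps = 1e-6
J_num = np.stack([(softmax(x + eps * np.eye(3)[j]) - softmax(x - eps * np.eye(3)[j])) / (2 * eps)
                  for j in range(3)], axis=1)
print(np.allclose(J, J_num, atol=1e-6))  # True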
1
Solved
I am training a binary classifier using a sigmoid activation function with binary cross-entropy, which gives good accuracy, around 98%.
When I train the same model using softmax with categorical_crossentropy ...
Spate asked 21/8, 2017 at 9:38
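Mathematically the two setups are equivalent for binary classification: a one-unit sigmoid equals the second output of a two-unit softmax with logits [0, z], so an accuracy gap usually points to a setup difference (e.g. label encoding) rather than the activations themselves. A quick numpy check:

import numpy as np

z = 1.3                                  # arbitrary logit
sigmoid = 1.0 / (1.0 + np.exp(-z))

# two-unit softmax with logits [0, z]: the second probability equals sigmoid(z)
e = np.exp(np.array([0.0, z]))
two_way = e / e.sum()

print(np.isclose(sigmoid, two_way[1]))   # True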
1
Solved
Suppose I have a tensor in TensorFlow whose values are like:
A = [[0.7, 0.2, 0.1],[0.1, 0.4, 0.5]]
How can I change this tensor into the following:
B = [[1, 0, 0],[0, 0, 1]]
In other wo...
Camilacamile asked 29/6, 2017 at 20:47
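One common approach (TF 2.x eager style shown; the same ops work inside a 1.x graph): take the argmax along the last axis, then re-expand with tf.one_hot.

import tensorflow as tf

A = tf.constant([[0.7, 0.2, 0.1], [0.1, 0.4, 0.5]])

# row-wise index of the maximum, re-expanded to a one-hot row
B = tf.one_hot(tf.argmax(A, axis=1), depth=A.shape[1])
# B == [[1, 0, 0], [0, 0, 1]]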
1
I'm interested in implementing a hierarchical softmax model that can handle large vocabularies, say on the order of 10M classes. What is the best way to do this to both be scalable to large class c...
Willin asked 23/5, 2017 at 16:36
1
Solved
I have read the answer given here. My exact question pertains to the accepted answer:
Variables independence: a lot of regularization and effort is put to keep your variables independent, ...
Underbred asked 28/5, 2017 at 4:48
1
Solved
TensorFlow calls each of the inputs to a softmax a logit. They go on to define the softmax's inputs/logits as: "Unscaled log probabilities."
Wikipedia and other sources say that a logit is the log...
Sherysherye asked 26/5, 2017 at 21:37
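One way to reconcile the two definitions: softmax inputs are log-probabilities only up to an additive constant, since shifting every logit by the same amount leaves the softmax unchanged; "unscaled" refers to that missing normalization. A numpy check:

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

p = np.array([0.2, 0.3, 0.5])
logits = np.log(p) + 7.0                 # log-probabilities shifted by an arbitrary constant

print(np.allclose(softmax(logits), p))   # True: softmax recovers p exactly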
2
Solved
This seems to be a fundamental question which some of you out there must have an opinion on. I have an image classifier implemented in CNTK with 48 classes. If the image does not match any of the 4...
Hypabyssal asked 24/4, 2017 at 2:7
3
Solved
I recently came across tf.nn.sparse_softmax_cross_entropy_with_logits and I cannot figure out what the difference is compared to tf.nn.softmax_cross_entropy_with_logits.
Is the only difference th...
Gadoid asked 19/5, 2016 at 1:15
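A hedged sketch (TF 2.x names assumed) showing that the two ops agree when the dense labels are the one-hot encoding of the sparse integer labels:

import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1], [0.5, 2.5, 0.3]])
class_ids = tf.constant([0, 1])               # integer labels for the sparse op
one_hot = tf.one_hot(class_ids, depth=3)      # the same labels, one-hot encoded

dense = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot, logits=logits)
sparse = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=class_ids, logits=logits)
# dense and sparse are element-wise equal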
1
Solved
As mentioned here, cross entropy is not a proper loss function for multi-label classification. My question is: "Is this fact true for cross entropy with softmax too?" If it is, how can it be matche...
Fixate asked 17/1, 2017 at 12:55
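The usual reasoning: softmax forces the class probabilities to sum to 1, so it cannot assign high probability to two labels at once; multi-label setups therefore typically use an independent sigmoid per class with binary cross-entropy. A numpy sketch of that pairing:

import numpy as np

logits = np.array([3.0, 2.5, -4.0])   # one logit per label
y = np.array([1.0, 1.0, 0.0])         # two labels active at once

# independent sigmoid per label; probabilities need not sum to 1
p = 1.0 / (1.0 + np.exp(-logits))

# binary cross-entropy summed over labels
bce = -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))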
1
Solved
For example, I have a CNN which tries to predict numbers from the MNIST dataset (code written using Keras). It has 10 outputs, which form a softmax layer. Only one of the outputs can be true (independently for...
Applejack asked 11/1, 2017 at 11:2
2
I'm trying to implement something like a fully convolutional network, where the last convolution layer uses filter size 1x1 and outputs a 'score' tensor. The score tensor has shape [Batch, height, ...
Keniakenilworth asked 25/4, 2016 at 20:33
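For a score tensor of shape [batch, height, width, num_classes], the softmax should normalize over the class axis only, producing one distribution per spatial position. A numpy sketch with hypothetical sizes:

import numpy as np

scores = np.random.randn(2, 4, 4, 5)   # hypothetical [batch, height, width, classes]

# normalize over the last (class) axis only: one distribution per pixel
z = scores - scores.max(axis=-1, keepdims=True)
probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)

assert np.allclose(probs.sum(axis=-1), 1.0)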
1
While defining a prototxt in Caffe, I found that sometimes we use Softmax as the last layer type and sometimes SoftmaxWithLoss. I know the Softmax layer will return the probability that the input data belo...
Equanimity asked 5/12, 2016 at 12:47
2
In Caffe, there is an option with its SoftmaxWithLoss function to ignore all negative labels (-1) in computing probabilities, so that only 0 or positive label probabilities add up to 1.
Is ...
Precambrian asked 23/8, 2016 at 2:32
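Outside Caffe the same effect is usually obtained by masking (PyTorch's CrossEntropyLoss exposes it as ignore_index). A numpy sketch of the mechanics, with -1 marking entries to skip:

import numpy as np

labels = np.array([2, -1, 0, 1])      # -1 marks "ignore"
logits = np.random.randn(4, 3)

valid = labels >= 0                   # mask out negative labels
z = logits - logits.max(axis=1, keepdims=True)
probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)

# per-example negative log-likelihood, averaged over valid examples only
nll = -np.log(probs[valid, labels[valid]])
loss = nll.mean()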
3
Solved
I am trying to understand backpropagation in a simple 3-layer neural network with MNIST.
There is the input layer with weights and a bias. The labels are MNIST, so it's a 10-class vector.
The se...
Bollworm asked 13/11, 2016 at 16:2
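The identity that makes the output layer of such a network tractable: with softmax probabilities p and a one-hot label y under cross-entropy, the gradient with respect to the pre-softmax logits is simply p - y. A numpy sketch:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

z = np.array([1.0, 2.0, 0.5])   # logits of the output layer
y = np.array([0.0, 1.0, 0.0])   # one-hot label (10-dimensional for MNIST in practice)

p = softmax(z)
grad_z = p - y                  # d(cross-entropy)/d(logits): softmax minus one-hot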