TensorFlow warning – Found untraced functions such as lstm_cell_6_layer_call_and_return_conditional_losses

I think this warning can be safely ignored, as the same warning appears even in TensorFlow’s own tutorials. I often see it when saving custom models such as graph NNs. You should be good to go as long as you don’t need to access those non-callable functions. However, if you’re … Read more
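As a minimal sketch (assuming TensorFlow 2.x; the layer sizes and the "my_model" path are made up for illustration), saving any Keras model that contains an LSTM layer in the SavedModel format typically triggers this warning, and the restored model still works:

    import tensorflow as tf

    # Arbitrary toy model containing an LSTM layer.
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(8, input_shape=(10, 4)),
        tf.keras.layers.Dense(1),
    ])

    # Saving in the SavedModel format is where the
    # "Found untraced functions ..." warning usually appears.
    model.save("my_model")

    # Loading and predicting work normally despite the warning.
    restored = tf.keras.models.load_model("my_model")
    print(restored.predict(tf.zeros((1, 10, 4))).shape)  # (1, 1)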

Understanding a simple LSTM in PyTorch

The output of the LSTM is the output of all the hidden nodes on the final layer. hidden_size is the number of LSTM blocks per layer, input_size is the number of input features per time-step, and num_layers is the number of hidden layers. In total there are hidden_size * num_layers LSTM blocks. The input dimensions are … Read more
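A small PyTorch sketch of those shapes (all sizes here are arbitrary) makes this concrete:

    import torch
    import torch.nn as nn

    input_size, hidden_size, num_layers = 4, 16, 2
    lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_size,
                   num_layers=num_layers, batch_first=True)

    x = torch.randn(32, 10, input_size)  # (batch, time-steps, input features)
    output, (h_n, c_n) = lstm(x)

    print(output.shape)  # torch.Size([32, 10, 16]): final layer's hidden state at every step
    print(h_n.shape)     # torch.Size([2, 32, 16]): last hidden state of each of the num_layers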

Error when checking model input: expected lstm_1_input to have 3 dimensions, but got array with shape (339732, 29)

Setting timesteps = 1 (since I want one time-step for each instance) and reshaping X_train and X_test as:

    import numpy as np
    X_train = np.reshape(X_train, (X_train.shape[0], 1, X_train.shape[1]))
    X_test = np.reshape(X_test, (X_test.shape[0], 1, X_test.shape[1]))

This worked!
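For context, a minimal sketch of the matching model side (the layer sizes are placeholders, not from the original question): after the reshape, the first LSTM layer should declare input_shape=(1, 29), i.e. one time-step of 29 features:

    import numpy as np
    import tensorflow as tf

    X_train = np.random.rand(339732, 29).astype("float32")  # 2-D data, as in the question
    X_train = np.reshape(X_train, (X_train.shape[0], 1, X_train.shape[1]))  # (339732, 1, 29)

    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(32, input_shape=(1, 29)),  # timesteps=1, features=29
        tf.keras.layers.Dense(1),
    ])
    model.compile(loss="mse", optimizer="adam")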

Why does Keras LSTM batch size used for prediction have to be the same as fitting batch size?

Unfortunately what you want to do is impossible with Keras … I’ve also struggled with this problem for a long time, and the only way is to dive into the rabbit hole and work with TensorFlow directly to do LSTM rolling prediction. First, to be clear on terminology, batch_size usually means the number of sequences that … Read more
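A common workaround (a hedged sketch, not from the excerpt above; all sizes are placeholders) is to train a stateful LSTM at one batch size, then rebuild the identical architecture with batch size 1 and copy the weights across, since the weights themselves do not depend on the batch size:

    import tensorflow as tf

    def build_model(batch_size):
        # Stateful LSTMs bake the batch size into the input spec,
        # which is why prediction normally has to match the fitting batch size.
        return tf.keras.Sequential([
            tf.keras.layers.LSTM(32, stateful=True,
                                 batch_input_shape=(batch_size, 10, 4)),
            tf.keras.layers.Dense(1),
        ])

    train_model = build_model(batch_size=64)
    # ... compile and fit train_model here (shuffle=False for stateful training) ...

    predict_model = build_model(batch_size=1)             # same weight shapes
    predict_model.set_weights(train_model.get_weights())  # copy the learned weights
    # predict_model can now do rolling one-step predictions.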

How do I mask a loss function in Keras with the TensorFlow backend?

If there’s a mask in your model, it will be propagated layer by layer and eventually applied to the loss. So if you’re padding and masking the sequences correctly, the loss on the padding placeholders will be ignored. Some details: it’s a bit involved to explain the whole process, so I’ll just break it down … Read more
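A minimal sketch of the padding-plus-masking setup described there (the values and sizes are illustrative): a Masking layer flags time-steps whose features all equal mask_value, and Keras propagates that mask through the LSTM down to the loss:

    import numpy as np
    import tensorflow as tf

    # Two sequences padded with 0.0 to length 3.
    X = np.array([[[1.], [2.], [0.]],    # last step is padding
                  [[3.], [0.], [0.]]])   # last two steps are padding
    y = X.copy()                         # toy targets, same shape

    model = tf.keras.Sequential([
        tf.keras.layers.Masking(mask_value=0.0, input_shape=(3, 1)),
        tf.keras.layers.LSTM(8, return_sequences=True),
        tf.keras.layers.Dense(1),
    ])
    model.compile(loss="mse", optimizer="adam")
    model.fit(X, y, epochs=1, verbose=0)  # padded steps contribute nothing to the loss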

ValueError: Input 0 is incompatible with layer lstm_13: expected ndim=3, found ndim=4

I solved the problem by making the input size (95000, 360, 1) and the output size (95000, 22), and changing the input shape to (360, 1) in the code where the model is defined:

    from keras.models import Sequential
    from keras.layers import LSTM, Dense

    model = Sequential()
    model.add(LSTM(22, input_shape=(360, 1)))
    model.add(Dense(22, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    print(model.summary())
    model.fit(ml2_train_input, ml2_train_output_enc, epochs=2, batch_size=500)