
Keras Timedistributed Layer With Multiple Inputs

I'm trying to make the following line of code work: low_encoder_out = TimeDistributed( AutoregressiveDecoder(...) )([X_tf, embeddings]) where AutoregressiveDecoder is a custom layer.

Solution 1:

As you mentioned, the TimeDistributed layer does not support multiple inputs. One (not very nice) workaround, given that the number of timesteps (i.e. the second axis) must be the same for all inputs, is to reshape each input to (None, n_timesteps, n_feats), concatenate them along the feature axis, and feed the result as the single input of the TimeDistributed layer:

from keras.layers import Reshape, TimeDistributed, concatenate

# Flatten everything after the timestep axis into one feature dimension
X_tf_r = Reshape((n_timesteps, -1))(X_tf)
embeddings_r = Reshape((n_timesteps, -1))(embeddings)

# Concatenate along the last (feature) axis and wrap as usual
concat = concatenate([X_tf_r, embeddings_r])
low_encoder_out = TimeDistributed(AutoregressiveDecoder(...))(concat)

Of course, you may need to modify your custom layer's definition so that it splits the concatenated input back into its parts if necessary.
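The shape bookkeeping behind this workaround can be checked with NumPy alone; the shapes below are hypothetical stand-ins for whatever X_tf and embeddings actually are in your model:

```python
import numpy as np

n_batch, n_timesteps = 4, 10
X_tf = np.zeros((n_batch, n_timesteps, 8, 3))      # e.g. an 8x3 feature grid per timestep
embeddings = np.zeros((n_batch, n_timesteps, 16))  # e.g. a 16-dim embedding per timestep

# Flatten everything after the timestep axis, mirroring Reshape((n_timesteps, -1))
X_tf_r = X_tf.reshape(n_batch, n_timesteps, -1)              # (4, 10, 24)
embeddings_r = embeddings.reshape(n_batch, n_timesteps, -1)  # (4, 10, 16)

# Concatenate along the last (feature) axis, as Keras' concatenate() does by default
concat = np.concatenate([X_tf_r, embeddings_r], axis=-1)     # (4, 10, 40)

# Inside the wrapped custom layer, the two inputs can be sliced back apart,
# provided the layer knows the flattened width of each original input
X_part = concat[..., :24]
emb_part = concat[..., 24:]
```

This also shows why the workaround only applies when both inputs share the same timestep axis: the concatenation is per timestep, so only the trailing feature dimensions may differ.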
