
Decoder's Weights Of Autoencoder With Tied Weights In Keras

I have implemented a tied-weights autoencoder in Keras and have successfully trained it. My goal is to use only the decoder part of the autoencoder as the last layer of another network.

Solution 1:

It's been more than 2 years since this question was asked, but this answer might still be relevant for some.

The function Layer.get_weights() collects its values from self.trainable_weights and self.non_trainable_weights (see keras.engine.base_layer.Layer.weights). In your custom layer, the weights self.W and self.b are never added to either of these collections, which is why the layer reports 0 parameters.
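To see the symptom in isolation, here is a minimal sketch (BrokenLayer is a hypothetical name, written against the Keras 2 API the question uses) of a layer that creates a variable without registering it:

import keras.backend as K
from keras.layers import Layer

class BrokenLayer(Layer):
    def build(self, input_shape):
        # The variable exists, but the layer never learns about it
        self.W = K.zeros((input_shape[-1], 3))
        self.built = True

layer = BrokenLayer()
layer.build((None, 5))
print(layer.get_weights())   # [] -- no registered weights
print(layer.count_params())  # 0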

You could tweak your implementation as follows:

from keras import backend as K
from keras.layers import Dense
import tensorflow as tf

class TiedtDense(Dense):
    def __init__(self, output_dim, master_layer, **kwargs):
        # master_layer is the encoder Dense layer whose kernel is reused
        self.master_layer = master_layer
        super(TiedtDense, self).__init__(output_dim, **kwargs)

    def build(self, input_shape):
        assert len(input_shape) >= 2
        self.input_dim = input_shape[-1]

        # Reuse the (transposed) encoder kernel instead of creating a new
        # one; only the bias is a fresh variable owned by this layer.
        self.kernel = tf.transpose(self.master_layer.kernel)
        self.bias = K.zeros((self.units,))
        # Registering the tensors is what makes get_weights(), count_params()
        # and training see them. This relies on standalone Keras 2, where
        # trainable_weights returns the layer's mutable list.
        self.trainable_weights.append(self.kernel)
        self.trainable_weights.append(self.bias)
        self.built = True
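
As a quick sanity check, here is a hypothetical one-hidden-layer autoencoder wiring the layer up. Note that the encoder must be called, and therefore built, before the tied decoder, so that master_layer.kernel exists:

from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape=(784,))
encoder = Dense(64, activation='relu')
encoded = encoder(inputs)          # builds the encoder, creating its kernel
decoded = TiedtDense(784, encoder, activation='sigmoid')(encoded)

autoencoder = Model(inputs, decoded)
autoencoder.summary()              # the tied decoder now reports its parameters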

NOTE: I am excluding the regularizers and constraints for simplicity. If you want those, please refer to keras.engine.base_layer.Layer.add_weight.
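
For reference, a sketch of the bias created through add_weight instead of K.zeros, reusing the bias_initializer, bias_regularizer and bias_constraint that Dense already stores from its constructor. Since add_weight also registers the variable, the explicit append is then no longer needed for the bias:

        # Sketch only: add_weight creates *and* registers the variable,
        # and wires in the regularizer/constraint from the constructor.
        self.bias = self.add_weight(name='bias',
                                    shape=(self.units,),
                                    initializer=self.bias_initializer,
                                    regularizer=self.bias_regularizer,
                                    constraint=self.bias_constraint)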

