Recurrent Neural Networks (RNN)

Contents:
- From simple RNNs to LSTMs
- Long Short-Term Memory (LSTM) RNNs
- Attention
- Beyond LSTMs: Transformers
- Transformer-XL
- Compressive Transformers: Introduction, Compression scheme, Compression training
- Summary

This is the first post of a series dedicated to the Compressive Memory of Recurrent Neural Networks.
Quick Thoughts are random thoughts looking for comments
Let’s imagine a universal translator able to translate between any pair of languages. Sourcing a corpus of paired translations is a major hurdle.
DRAFT 1

Background
- Singular value decomposition
- Where next?
- Back to SVD
- Regularisation
- Vector coordinates
- Eigenvalues
- Threshold
- 2-by-2 decision matrix [TODO]
- Other Principal Components methods

Limitations and further questions
- Limitations
- Further questions

Literature

We all have laptops.