QARAC / qarac/models/layers/GlobalAttentionPoolingHead.py

Commit History

Fixed import
4a7707c

PeteBleackley committed

Fixed import
e8324a1

PeteBleackley committed

Factorized the weight matrix in the GlobalAttentionPoolingHead, thus reducing the number of parameters in this layer by a factor of 48
a1e9f64

PeteBleackley committed
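
The factorisation itself isn't visible in this history view, but the arithmetic is easy to check: replacing a full hidden_size × hidden_size weight matrix with two low-rank factors cuts the parameter count by hidden_size / (2 × rank), so for example a hidden size of 768 with rank 8 would give the factor of 48 mentioned above. A minimal sketch, with all names and shapes assumed rather than taken from the repository:

    import torch

    hidden_size, rank = 768, 8   # assumed values; 768 / (2 * 8) = 48

    # Full projection: hidden_size * hidden_size = 589,824 parameters
    full_weight = torch.nn.Parameter(torch.randn(hidden_size, hidden_size))

    # Factorised projection: 2 * hidden_size * rank = 12,288 parameters
    left = torch.nn.Parameter(torch.randn(hidden_size, rank))
    right = torch.nn.Parameter(torch.randn(rank, hidden_size))

    x = torch.randn(2, 10, hidden_size)   # (batch, seq_len, hidden)
    projected = (x @ left) @ right        # same shape as x @ full_weight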

Using torch.nn.CosineSimilarity to simplify code
798488e

PeteBleackley committed
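
torch.nn.CosineSimilarity computes a normalised dot product along a chosen dimension, which removes the need to normalise vectors by hand. A minimal illustration (the shapes here are assumptions):

    import torch

    cosine = torch.nn.CosineSimilarity(dim=-1, eps=1e-8)
    a = torch.randn(4, 768)
    b = torch.randn(4, 768)
    similarity = cosine(a, b)   # shape (4,), values in [-1, 1]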

Fix Einstein summation notation
7fe1144

PeteBleackley committed
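
The corrected expression isn't shown here, but as an assumed example of the kind of torch.einsum notation an attention-pooling head uses, a weighted sum of token vectors over the sequence axis looks like this:

    import torch

    states = torch.randn(2, 10, 768)   # (batch, seq_len, hidden)
    weights = torch.randn(2, 10)       # per-token attention weights (batch, seq_len)

    # Sum over the shared 's' index: result has shape (batch, hidden)
    pooled = torch.einsum('bsh,bs->bh', states, weights)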

Use keepdim option when normalising vectors
738b546

PeteBleackley committed
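
With keepdim=True the reduced dimension is kept as size 1, so the norm broadcasts back against the original tensor when dividing. A small sketch (shapes assumed):

    import torch

    vectors = torch.randn(2, 10, 768)
    norms = vectors.norm(dim=-1, keepdim=True)   # (2, 10, 1) rather than (2, 10)
    unit_vectors = vectors / norms               # broadcasts cleanly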

Make EPSILON a tensor
cf5f935

PeteBleackley committed

torch.maximum, not torch.max
98ad67d

PeteBleackley committed
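
These two commits go together: torch.maximum takes the elementwise maximum of two tensors, so the epsilon used to keep norms away from zero has to be a tensor rather than a Python float. A sketch of that pattern (names assumed):

    import torch

    EPSILON = torch.tensor(1e-12)   # must be a tensor for torch.maximum

    vectors = torch.randn(2, 10, 768)
    norms = vectors.norm(dim=-1, keepdim=True)
    safe_norms = torch.maximum(norms, EPSILON)   # elementwise, avoids division by zero
    unit_vectors = vectors / safe_norms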

Unsqueeze attention mask
bc77ce5

PeteBleackley committed
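
An attention mask of shape (batch, seq_len) needs an extra trailing dimension before it can multiply hidden states of shape (batch, seq_len, hidden). A sketch under those assumed shapes:

    import torch

    states = torch.randn(2, 10, 768)                 # (batch, seq_len, hidden)
    attention_mask = torch.ones(2, 10)               # (batch, seq_len)

    masked = states * attention_mask.unsqueeze(-1)   # mask broadcasts over the hidden axis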

Converted GlobalAttentionPoolingHead to use PyTorch
32df2f1

PeteBleackley committed
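
The converted layer itself isn't reproduced in this history view. Purely as a sketch of how the ideas in the commits above (factorised weights, masking, epsilon-clamped normalisation, einsum pooling) might fit together in a torch.nn.Module, with every name and shape an assumption rather than the repository's actual code:

    import torch

    class GlobalAttentionPoolingHead(torch.nn.Module):
        """Sketch: pools a sequence of token vectors into one summary vector per sample."""

        def __init__(self, hidden_size=768, rank=8):
            super().__init__()
            # Factorised global-attention weights (assumed parameterisation)
            self.left = torch.nn.Parameter(torch.randn(hidden_size, rank))
            self.right = torch.nn.Parameter(torch.randn(rank, hidden_size))
            self.register_buffer('epsilon', torch.tensor(1e-12))

        def forward(self, hidden_states, attention_mask):
            # Zero out padding tokens
            masked = hidden_states * attention_mask.unsqueeze(-1)
            # Global summary vector per sample, projected through the factorised weights
            global_vector = (masked.sum(dim=1) @ self.left) @ self.right
            # Cosine-similarity attention between each token and the global vector
            token_norms = torch.maximum(masked.norm(dim=-1, keepdim=True), self.epsilon)
            global_norm = torch.maximum(global_vector.norm(dim=-1, keepdim=True), self.epsilon)
            scores = torch.einsum('bsh,bh->bs', masked / token_norms, global_vector / global_norm)
            # Attention-weighted pool over the sequence axis
            return torch.einsum('bsh,bs->bh', masked, scores)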

Trainable => trainable
a8c528d

PeteBleackley committed

Ensure weights are trainable
e556cb6

PeteBleackley committed
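
These earlier commits predate the PyTorch conversion and refer to the Keras version of the layer, where weights created in build() are only updated during training if trainable=True is passed (lowercase, hence the Trainable => trainable fix). A minimal, assumed example:

    import tensorflow as tf

    class GlobalAttentionPoolingHead(tf.keras.layers.Layer):
        def build(self, input_shape):
            # 'trainable' (lowercase) is the keyword Keras expects
            self.global_projection = self.add_weight(name='global_projection',
                                                     shape=(input_shape[-1], input_shape[-1]),
                                                     initializer='glorot_uniform',
                                                     trainable=True)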

Making sure RoBERTa layers have all required arguments
b2593fa

PeteBleackley committed

Final dot product
8ec9bd9

PeteBleackley committed

dot_prod needs to unpack arguments from tuple
8d80339

PeteBleackley committed

Removed unnecessary complication
c284c9a

PeteBleackley committed

Only inner function needs decorator
210f1cb

PeteBleackley committed

More vectorized_map weirdness
eecf608

PeteBleackley committed

tensorflow.vectorized_map might not like getting function arguments in a tuple
3f78694

PeteBleackley committed
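
tf.vectorized_map maps a function over the leading axis of its inputs and hands that function a single argument, so when several tensors are passed together as a tuple the inner function has to unpack the tuple itself, which is what the dot_prod commits above are dealing with. A small, assumed illustration:

    import tensorflow as tf

    def dot_prod(args):
        # vectorized_map passes one structure per element, so unpack it here
        vector, matrix = args
        return tf.tensordot(vector, matrix, axes=1)

    vectors = tf.random.normal((8, 768))       # the leading axis is mapped over
    matrices = tf.random.normal((8, 768, 768))
    results = tf.vectorized_map(dot_prod, (vectors, matrices))   # shape (8, 768)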

Broadcasting dot products
f2bd224

PeteBleackley committed

Broadcasting dot products
825e41b

PeteBleackley committed

Broadcasting dot products
948988c

PeteBleackley committed
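
A dot product can also be broadcast without mapping at all, by elementwise multiplication followed by a sum over the shared axis. A sketch of that pattern (shapes assumed):

    import tensorflow as tf

    tokens = tf.random.normal((8, 20, 768))    # (batch, seq_len, hidden)
    global_vec = tf.random.normal((8, 768))    # (batch, hidden)

    # Broadcast the global vector over the sequence axis, then sum out the hidden axis
    scores = tf.reduce_sum(tokens * tf.expand_dims(global_vec, 1), axis=-1)   # (batch, seq_len)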

Error in invocation of tensordot
1670b0e

PeteBleackley committed
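
tf.tensordot contracts whichever axes its axes argument names, so getting that argument wrong either changes the result shape or raises a shape mismatch. A brief, assumed reminder of the call:

    import tensorflow as tf

    a = tf.random.normal((20, 768))
    w = tf.random.normal((768, 768))

    # axes=1 contracts the last axis of a with the first axis of w -> shape (20, 768)
    projected = tf.tensordot(a, w, axes=1)

    # Equivalent explicit form
    projected_explicit = tf.tensordot(a, w, axes=[[1], [0]])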

More work on models
f16a715

PeteBleackley committed

GlobalAttentionPoolingHead layer
8f1745b

PeteBleackley committed