QARAC / qarac / models / QaracDecoderModel.py

Commit History

Removed unnecessary parameters
684c1d8

PeteBleackley committed on

Attention mask in decoder
69cf4c5

PeteBleackley committed on

Set use_cache argument
9052370

PeteBleackley committed on

input_embeddings not needed
a5b7b8e

PeteBleackley committed on

Removed unnecessary parameter
8172944

PeteBleackley committed on

get_input_embeddings() directly from base model
e095479

PeteBleackley committed on

There's a simpler way of doing this, I hope
858f75e

PeteBleackley committed on

Might be simpler to inherit from RobertaModel rather than PreTrainedModel
f0ad7f1

PeteBleackley committed on

Removed a base model that was causing a loop in model initialisation
87535ff

PeteBleackley committed on

Removed line that would have failed
dbfe7ff

PeteBleackley committed on

Fixed import
acda749

PeteBleackley committed on

Further changes for compatibility with HuggingFace PyTorch implementation
5b7a8ed

PeteBleackley committed on

PyTorch implementation of HuggingFace PreTrainedModel class does not allow direct setting of base_model. Rejig constructors accordingly
519dfd1

PeteBleackley committed on
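
A minimal sketch (not the repository's actual code) of the constructor shape that 519dfd1 describes, assuming a decoder wrapped around a RoBERTa base: because the PyTorch PreTrainedModel class manages its own submodules, the base model is built inside __init__ from a config rather than being assigned as base_model from outside. The class name DecoderSketch and its forward signature are illustrative assumptions.

from transformers import PreTrainedModel, RobertaConfig, RobertaModel

class DecoderSketch(PreTrainedModel):
    # Illustrative wrapper only; not QaracDecoderModel itself.
    config_class = RobertaConfig

    def __init__(self, config):
        super().__init__(config)
        # The RoBERTa base is constructed here from the config;
        # PreTrainedModel does not allow setting base_model directly.
        self.roberta = RobertaModel(config)

    def forward(self, input_ids, attention_mask=None):
        return self.roberta(input_ids=input_ids, attention_mask=attention_mask)

# Usage sketch: model = DecoderSketch(RobertaConfig())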

Corrected inheritance
8823ce8

PeteBleackley committed on

Converted QaracDecoderModel to use PyTorch
13f1508

PeteBleackley committed on

The other layer returned a tuple as well
095f432

PeteBleackley committed on

Low level RoBERTa layers don't necessarily return what I expect them to
0941a89

PeteBleackley committed on

Fixed typo
50de02e

PeteBleackley committed on

Needed more arguments
58d8758

PeteBleackley committed on

Arguments to Concatenate layer should be in a list
30efe84

PeteBleackley committed on
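
For context on the fix in 30efe84 (which predates the later PyTorch conversion, so presumably the Keras Concatenate layer is meant): Keras merge layers take their input tensors as a single list rather than as separate positional arguments. A generic illustration, not the project's actual layer wiring:

import tensorflow as tf

a = tf.keras.Input(shape=(16,))
b = tf.keras.Input(shape=(16,))

# Correct: the tensors to merge are passed together in one list.
merged = tf.keras.layers.Concatenate(axis=-1)([a, b])

# Incorrect: Concatenate(axis=-1)(a, b) treats b as an extra call argument
# and raises an error, which is the kind of mistake this commit fixes.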

Fixed arguments to decoder head
7b59e3d

PeteBleackley committed on

Attention masks, generation, and testing script
6ebe943

PeteBleackley committed on

Making sure RoBERTa layers have all required arguments
b2593fa

PeteBleackley committed on

Add an extra dimension to the input vector
7cc6121

PeteBleackley committed on

Head needed to know input embeddings
7285c8a

PeteBleackley committed on

Fix typo
c833bc7

PeteBleackley committed on

Go to correct submodule
e09ba59

PeteBleackley committed on

Fix typo
23b3790

PeteBleackley committed on

Fix typo
52c1fb4

PeteBleackley committed on

Find the layers I need
52231cc

PeteBleackley committed on

Find the layers I need
394ad0c

PeteBleackley committed on

Use base model config in superclass __init__
b762c1c

PeteBleackley committed on

Fix inheritance
6cc57a2

PeteBleackley committed on

Fix typos
28f2005

PeteBleackley committed on

Training script
63b2c6a

PeteBleackley committed on

More work on models
f16a715

PeteBleackley committed on

Encoder, Decoder and Trainer models (assuming RoBERTa base models)
f9c0522

PeteBleackley committed on
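
The initial commit assumes RoBERTa base models. A minimal, generic example of loading such a base model with the HuggingFace transformers library; the roberta-base checkpoint name is the standard public one and not necessarily the one QARAC builds on:

from transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
base_model = RobertaModel.from_pretrained("roberta-base")

inputs = tokenizer("A decoder built on a RoBERTa base model.", return_tensors="pt")
outputs = base_model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)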