PhilipMay committed on
Commit
4e9d942
1 Parent(s): b0e3fd7

Update README.md

Files changed (1): README.md +2 -2
README.md CHANGED
@@ -42,7 +42,7 @@ The special characters 'ö', 'ü', 'ä' are included through the `strip_accent=F
 
 ## Creators
 This model was trained and open sourced in conjunction with the [**German NLP Group**](https://github.com/German-NLP-Group) in equal parts by:
-- [**Philip May**](https://May.la) - [Deutsche Telekom](https://www.telekom.de/)
+- [**Philip May**](https://philipmay.org) - [Deutsche Telekom](https://www.telekom.de/)
 - [**Philipp Reißel**](https://www.linkedin.com/in/philipp-reissel/) - [ambeRoad](https://amberoad.de/)
 
 ## Evaluation of Version 2: GermEval18 Coarse
@@ -141,7 +141,7 @@ We tried the following approaches which we found had no positive influence:
 - **Decreased Batch-Size**: The original Electra was trained with a Batch Size per TPU Core of 16 whereas this Model was trained with 32 BS / TPU Core. We found out that 32 BS leads to better results when you compare metrics over computation time
 
 ## License - The MIT License
-Copyright 2020-2021 Philip May<br>
+Copyright 2020-2021 [Philip May](https://philipmay.org)<br>
 Copyright 2020-2021 Philipp Reissel
 
 Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: