fgrezes committed
Commit e0c9e48 · 1 Parent(s): c4956a2

astroBERT NER model, all_labeled_data_run01, checkpoint-169600

.ipynb_checkpoints/README-checkpoint.md ADDED
@@ -0,0 +1,3 @@
+ ---
+ license: mit
+ ---
Tutorials/.ipynb_checkpoints/0_Embeddings-checkpoint.html ADDED
The diff for this file is too large to render. See raw diff
 
Tutorials/.ipynb_checkpoints/0_Embeddings-checkpoint.ipynb ADDED
@@ -0,0 +1,295 @@
+ {
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "274e6135-2d97-4244-9183-65bcb1d24c80",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Use the trained astroBERT model to generate embeddings of text\n",
+ "# to be used for downstream tasks"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2cc88ed3-6f52-49a2-99c0-344387758ab5",
+ "metadata": {},
+ "source": [
+ "# Tutorial 0: Loading astroBERT to produce text embeddings\n",
+ "This tutorial will show you how to load astroBERT and produce text embeddings that can be used on downstream tasks."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "9e65c041-9d66-4fb1-96b9-4937000da02e",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# 1 - load models and tokenizer"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "67d99e96-c532-49ef-8542-a48eef818956",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2022-10-20 16:07:24.705905: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0\n"
+ ]
+ }
+ ],
+ "source": [
+ "from transformers import AutoTokenizer, AutoModel"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "00e1d48e-9898-44ef-b00e-43e3ab7fed7d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# the model path can either be the name of the Huggingface repository\n",
+ "remote_model_path = 'adsabs/astroBERT'\n",
+ "# or the local path to the directory containing model weights and tokenizer vocab\n",
+ "local_model_path = '../'"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "9bcc6009-6009-463f-a7da-f010c5fae27e",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# make sure you load the tokenizer with do_lower_case=False\n",
+ "astroBERT_tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path=remote_model_path,\n",
+ " use_auth_token=True,\n",
+ " add_special_tokens=True,\n",
+ " do_lower_case=False,\n",
+ " )"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "dbd144f0-6038-4917-94b0-aea9da72cac5",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "PreTrainedTokenizerFast(name_or_path='adsabs/astroBERT', vocab_size=30000, model_max_len=1000000000000000019884624838656, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'unk_token': '[UNK]', 'sep_token': '[SEP]', 'pad_token': '[PAD]', 'cls_token': '[CLS]', 'mask_token': '[MASK]'})"
+ ]
+ },
+ "execution_count": 6,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "astroBERT_tokenizer"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "id": "dd9a9257-cbe4-4908-a9f4-8e1431dc375a",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "Some weights of the model checkpoint at adsabs/astroBERT were not used when initializing BertModel: ['cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.seq_relationship.weight', 'cls.predictions.decoder.weight', 'cls.seq_relationship.bias', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.bias', 'cls.predictions.transform.LayerNorm.weight']\n",
+ "- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n",
+ "- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n"
+ ]
+ }
+ ],
+ "source": [
+ "# automodels: defaults to BertModel\n",
+ "# it's normal to get warnings as a BertModel will not load the weights used for PreTraining\n",
+ "astroBERT_automodel = AutoModel.from_pretrained(remote_model_path, \n",
+ " use_auth_token=True,\n",
+ " )"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "id": "572ddd38-a0dc-4583-a5a6-c4f3b2cb2553",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# 2 - run inference; the outputs are the embeddings"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "id": "32fc0b97-4a2d-42ab-aa83-f5d8b39672b1",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "torch.Size([3, 54])\n"
+ ]
+ }
+ ],
+ "source": [
+ "# list of strings for which we want embeddings\n",
+ "strings = ['The Chandra X-ray Observatory (CXO), previously known as the Advanced X-ray Astrophysics Facility (AXAF), is a Flagship-class space telescope launched aboard the Space Shuttle Columbia during STS-93 by NASA on July 23, 1999.',\n",
+ " 'Independent lines of evidence from Type Ia supernovae and the CMB imply that the universe today is dominated by a mysterious form of energy known as dark energy, which appears to homogeneously permeate all of space.',\n",
+ " 'This work has been developed in the framework of the ‘Darklight’ programme, supported by the European Research Council through an Advanced Research Grant to LG (Project # 291521).'\n",
+ " ]\n",
+ "\n",
+ "# tokenize the strings, with padding (needed to process multiple strings efficiently)\n",
+ "inputs = astroBERT_tokenizer(strings, \n",
+ " padding=True, \n",
+ " return_tensors='pt'\n",
+ " )\n",
+ "\n",
+ "# check the shape of the inputs\n",
+ "print(inputs['input_ids'].shape)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "id": "8b7c9456-573a-48e7-9bc2-839fcc25631d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# pass the inputs through astroBERT\n",
+ "import torch\n",
+ "# no need for gradients, since we are only doing inference\n",
+ "with torch.no_grad():\n",
+ " output = astroBERT_automodel(**inputs, \n",
+ " output_hidden_states=False\n",
+ " ) "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "id": "116de57a-bb31-48d7-9556-64e01a16d56f",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "torch.Size([3, 54, 768])\n"
+ ]
+ }
+ ],
+ "source": [
+ "# BertModel outputs two tensors: last_hidden_state (our embeddings) and pooler_output (to be discarded as it's not meaningful)\n",
+ "# see https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertModel.forward\n",
+ "# embeddings will have shape = (# of strings, size of tokenized strings(padded), 768 (BERT embedding size))\n",
+ "embeddings = output[0]\n",
+ "print(embeddings.shape)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "id": "38e45291-6fd7-48cf-83df-e1cc5c8a699f",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "tensor([[ 0.5546, 0.9121, 0.6550, ..., -0.1925, 0.7077, -0.2405],\n",
+ " [ 0.6252, 0.3175, 1.0899, ..., 0.0576, 0.0529, 0.0603],\n",
+ " [ 0.1803, -0.4567, 1.2688, ..., 0.6026, -0.5718, -0.2060],\n",
+ " ...,\n",
+ " [-0.4397, -0.5334, 1.1682, ..., 0.9541, 0.4046, -0.4756],\n",
+ " [-0.3911, 0.7793, 0.2432, ..., 0.2268, -1.0489, -1.4864],\n",
+ " [-0.4529, -0.7346, 0.0675, ..., -0.3246, -0.2333, -0.6154]])\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(embeddings[0])"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "id": "26acf89f-b7fc-4872-ac81-0ee65030b465",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# If you wish to use the hidden states as additional embeddings, you can use output_hidden_states=True\n",
+ "\n",
+ "# no need for gradients, since we are only doing inference\n",
+ "with torch.no_grad():\n",
+ " output = astroBERT_automodel(**inputs, \n",
+ " output_hidden_states=True\n",
+ " ) "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "id": "a54314e9-5dcb-4c10-b0d2-219a93c7d16e",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "13\n",
+ "torch.Size([3, 54, 768])\n"
+ ]
+ }
+ ],
+ "source": [
+ "# This will produce 13 embedding tensors: the initial embedding layer plus one per hidden layer\n",
+ "embeddings = output[2]\n",
+ "print(len(embeddings))\n",
+ "print(embeddings[0].shape)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "76765dcb-8035-44b2-a5a3-db181b561095",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.8.5"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+ }
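The embeddings tutorial above stops at per-token embeddings of shape (3, 54, 768). A common next step for downstream tasks, not shown in the notebook, is to collapse them into one vector per string by mean-pooling over non-padded tokens. A minimal plain-Python sketch with toy numbers (the tensors and mask here are illustrative, not real astroBERT output):

```python
# Toy mean-pooling sketch: collapse per-token embeddings to one vector
# per sentence, skipping padded positions (attention_mask == 0).
# Plain-Python stand-in for the equivalent torch masked mean.
def mean_pool(token_embeddings, attention_mask):
    pooled = []
    for tokens, mask in zip(token_embeddings, attention_mask):
        # keep only vectors whose mask bit is 1 (real tokens)
        kept = [vec for vec, m in zip(tokens, mask) if m == 1]
        dim = len(kept[0])
        # average each dimension over the kept tokens
        pooled.append([sum(v[d] for v in kept) / len(kept) for d in range(dim)])
    return pooled

emb = [[[1.0, 2.0], [3.0, 4.0], [0.0, 0.0]]]  # 1 sentence, 3 tokens, dim 2
mask = [[1, 1, 0]]                            # last token is padding
print(mean_pool(emb, mask))  # [[2.0, 3.0]]
```

With real astroBERT output the same idea applies per batch element, using `inputs['attention_mask']` from the tokenizer.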
Tutorials/.ipynb_checkpoints/1_Fill-Mask-checkpoint.ipynb ADDED
@@ -0,0 +1,425 @@
+ {
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "33df4373-a37b-4fd0-bc67-c297812871e4",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Use the trained astroBERT model with the fill-mask pipeline"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "164ee9bd-27f9-40a4-8461-3ce12fc928b0",
+ "metadata": {},
+ "source": [
+ "# Tutorial 1: using astroBERT with the fill-mask pipeline"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "59429414-f07e-45e5-8825-6fc6a8d26653",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# 1 - load models and tokenizer"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "db8ee724-6a2a-4ea5-820e-5e2aa0a0f622",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2022-10-17 21:17:27.369794: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0\n"
+ ]
+ }
+ ],
+ "source": [
+ "from transformers import AutoTokenizer, BertForMaskedLM"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "9a98fb63-0793-4684-a202-931cad17c7ca",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# the model path can either be the name of the Huggingface repository\n",
+ "remote_model_path = 'adsabs/astroBERT'\n",
+ "# or the local path to the directory containing model weights and tokenizer vocab\n",
+ "local_model_path = '../'"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "25fedd16-283b-4817-9b19-2a5ff1c5ba88",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# make sure you load the tokenizer with do_lower_case=False\n",
+ "astroBERT_tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path=remote_model_path,\n",
+ " use_auth_token=True,\n",
+ " add_special_tokens=False,\n",
+ " do_lower_case=False,\n",
+ " )"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "fb10db03-a5f0-44f7-8d41-0285f898a90d",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "b6e0bf5ee71b4986a682adb43e994ede",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Downloading: 0%| | 0.00/666 [00:00<?, ?B/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "Some weights of the model checkpoint at adsabs/astroBERT were not used when initializing BertForMaskedLM: ['cls.seq_relationship.bias', 'cls.seq_relationship.weight']\n",
+ "- This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n",
+ "- This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n"
+ ]
+ }
+ ],
+ "source": [
+ "astroBERT_automodel_for_mlm = BertForMaskedLM.from_pretrained(pretrained_model_name_or_path=remote_model_path, \n",
+ " use_auth_token=True,\n",
+ " )"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "id": "e8b9b073-3876-4d0b-b8b2-e46fa25c76f0",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# for the pipeline to work you have to ensure that the model returns a dict\n",
+ "astroBERT_automodel_for_mlm.config.return_dict=True"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "id": "94338f6f-3467-4696-bf7d-f41a12eb889d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from transformers import FillMaskPipeline"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "id": "7b980d9f-4d86-4b54-9324-d57dd9b4b64f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "astroBERT_pipeline = FillMaskPipeline(model=astroBERT_automodel_for_mlm,\n",
+ " tokenizer=astroBERT_tokenizer,\n",
+ " task='fill-mask',\n",
+ " )"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "id": "5cb4d27b-ee3c-4ac7-ace2-4cc57ea9ce7a",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "clean_sentences = ['M67 is one of the most studied open clusters.',\n",
+ "'A solar twin is a star with atmospheric parameters and chemical composition very similar to our Sun.',\n",
+ "'The dynamical evolution of planets close to their star is affected by tidal effects',\n",
+ "'The Kepler satellite collected high-precision long-term and continuous light curves for more than 100,000 solar-type stars',\n",
+ "'The Local Group is composed of the Milky Way, the Andromeda Galaxy, and numerous smaller satellite galaxies.',\n",
+ "'Cepheid variables are used to determine the distances to galaxies in the local universe.',\n",
+ "'Jets are created and sustained by accretion of matter onto a compact massive object.',\n",
+ "'A single star of one solar mass will evolve into a white dwarf.',\n",
+ "'The Very Large Array observes the sky at radio wavelengths.',\n",
+ "'Elements heavier than iron are generated in supernovae explosions.',\n",
+ "'Spitzer was the first spacecraft to fly in an Earth-trailing orbit.',\n",
+ "'Galaxy mergers can occur when two (or more) galaxies collide',\n",
+ "'Dark matter is a hypothetical form of matter thought to account for approximately 85% of the matter in the universe.',\n",
+ "'The Local Group of galaxies is pulled toward The Great Attractor.',\n",
+ "'The Moon is the only satellite of the Earth.',\n",
+ "'Galaxies are categorized according to their visual morphology as elliptical, spiral, or irregular.',\n",
+ "'Stars are made mostly of hydrogen.',\n",
+ "'Comet tails are created as comets approach the Sun.',\n",
+ "'Pluto is a dwarf planet in the Kuiper Belt.',\n",
+ "'The Milky Way has a supermassive black hole, Sagittarius A*, at its center.',\n",
+ "'Andromeda is the nearest large galaxy to the Milky Way and is roughly its equal in mass.',\n",
+ "'The interstellar medium is the gas and dust between stars.',\n",
+ "'The cosmic microwave background (CMB, CMBR), in Big Bang cosmology, is electromagnetic radiation which is a remnant from an early stage of the universe.',\n",
+ "'The Large and Small Magellanic Clouds are irregular dwarf galaxies and are two satellite galaxies of the Milky Way.']"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "id": "9f3a6fdc-182f-4edb-8ef4-7e4253c2d4db",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "masked_sentences = ['M67 is one of the most studied [MASK] clusters.',\n",
+ "'A solar twin is a star with [MASK] parameters and chemical composition very similar to our Sun.',\n",
+ "'The dynamical evolution of planets close to their star is affected by [MASK] effects',\n",
+ "'The Kepler satellite collected high-precision long-term and continuous light [MASK] for more than 100,000 solar-type stars',\n",
+ "'The Local Group is composed of the Milky Way, the [MASK] Galaxy, and numerous smaller satellite galaxies.',\n",
+ "'Cepheid variables are used to determine the [MASK] to galaxies in the local universe.',\n",
+ "'Jets are created and sustained by [MASK] of matter onto a compact massive object.',\n",
+ "'A single star of one solar mass will evolve into a [MASK] dwarf.',\n",
+ "'The Very Large Array observes the sky at [MASK] wavelengths.',\n",
+ "'Elements heavier than [MASK] are generated in supernovae explosions.',\n",
+ "'Spitzer was the first [MASK] to fly in an Earth-trailing orbit.',\n",
+ "'Galaxy [MASK] can occur when two (or more) galaxies collide',\n",
+ "'Dark [MASK] is a hypothetical form of matter thought to account for approximately 85% of the matter in the universe.',\n",
+ "'The Local Group of galaxies is pulled toward The Great [MASK] .',\n",
+ "'The Moon is the only [MASK] of the Earth.',\n",
+ "'Galaxies are categorized according to their visual morphology as [MASK] , spiral, or irregular.',\n",
+ "'Stars are made mostly of [MASK] .',\n",
+ "'Comet tails are created as comets approach the [MASK] .',\n",
+ "'Pluto is a dwarf [MASK] in the Kuiper Belt.',\n",
+ "'The Milky Way has a [MASK] black hole, Sagittarius A*, at its center.',\n",
+ "'Andromeda is the nearest large [MASK] to the Milky Way and is roughly its equal in mass.',\n",
+ "'The [MASK] medium is the gas and dust between stars.',\n",
+ "'The cosmic microwave background (CMB, CMBR), in Big Bang cosmology, is electromagnetic radiation which is a remnant from an early stage of the [MASK] .',\n",
+ "'The Large and Small Magellanic Clouds are irregular [MASK] galaxies and are two satellite galaxies of the Milky Way.',\n",
+ "]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "id": "d4c729ad-89f4-4e70-b433-a65b6035c10b",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "masked_words = [x for s1,s2 in zip(clean_sentences, masked_sentences) \n",
+ " for x,y in zip(s1.split(), s2.split()) if y=='[MASK]']"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "id": "2a07a641-61a7-42dd-b70e-62eb97ad4e4b",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "results = astroBERT_pipeline(inputs=masked_sentences, \n",
+ " top_k=3\n",
+ " )"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "id": "ec2880d9-a8ad-4919-ab5b-732f3bcc21ae",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "M67 is one of the most studied [MASK] clusters.\n",
+ "original: open\n",
+ "\t open 0.87\n",
+ "\t globular 0.07\n",
+ "\t star 0.03\n",
+ "\n",
+ "A solar twin is a star with [MASK] parameters and chemical composition very similar to our Sun.\n",
+ "original: atmospheric\n",
+ "\t fundamental 0.56\n",
+ "\t physical 0.25\n",
+ "\t stellar 0.05\n",
+ "\n",
+ "The dynamical evolution of planets close to their star is affected by [MASK] effects\n",
+ "original: tidal\n",
+ "\t tidal 0.07\n",
+ "\t electromagnetic 0.05\n",
+ "\t electrostatic 0.04\n",
+ "\n",
+ "The Kepler satellite collected high-precision long-term and continuous light [MASK] for more than 100,000 solar-type stars\n",
+ "original: curves\n",
+ "\t curves 0.43\n",
+ "\t ##s 0.04\n",
+ "\t conditions 0.04\n",
+ "\n",
+ "The Local Group is composed of the Milky Way, the [MASK] Galaxy, and numerous smaller satellite galaxies.\n",
+ "original: Andromeda\n",
+ "\t Andromeda 0.99\n",
+ "\t M31 0.00\n",
+ "\t Sagittarius 0.00\n",
+ "\n",
+ "Cepheid variables are used to determine the [MASK] to galaxies in the local universe.\n",
+ "original: distances\n",
+ "\t distances 0.79\n",
+ "\t distance 0.21\n",
+ "\t redshifts 0.00\n",
+ "\n",
+ "Jets are created and sustained by [MASK] of matter onto a compact massive object.\n",
+ "original: accretion\n",
+ "\t accretion 0.79\n",
+ "\t infall 0.13\n",
+ "\t fall 0.02\n",
+ "\n",
+ "A single star of one solar mass will evolve into a [MASK] dwarf.\n",
+ "original: white\n",
+ "\t white 0.77\n",
+ "\t brown 0.19\n",
+ "\t red 0.02\n",
+ "\n",
+ "The Very Large Array observes the sky at [MASK] wavelengths.\n",
+ "original: radio\n",
+ "\t radio 0.29\n",
+ "\t centimeter 0.10\n",
+ "\t all 0.09\n",
+ "\n",
+ "Elements heavier than [MASK] are generated in supernovae explosions.\n",
+ "original: iron\n",
+ "\t iron 0.34\n",
+ "\t helium 0.16\n",
+ "\t oxygen 0.07\n",
+ "\n",
+ "Spitzer was the first [MASK] to fly in an Earth-trailing orbit.\n",
+ "original: spacecraft\n",
+ "\t satellite 0.42\n",
+ "\t spacecraft 0.20\n",
+ "\t observatory 0.16\n",
+ "\n",
+ "Galaxy [MASK] can occur when two (or more) galaxies collide\n",
+ "original: mergers\n",
+ "\t . 0.26\n",
+ "\t A 0.05\n",
+ "\t 1 0.04\n",
+ "\n",
+ "Dark [MASK] is a hypothetical form of matter thought to account for approximately 85% of the matter in the universe.\n",
+ "original: matter\n",
+ "\t energy 0.64\n",
+ "\t Energy 0.24\n",
+ "\t matter 0.10\n",
+ "\n",
+ "The Local Group of galaxies is pulled toward The Great [MASK] .\n",
+ "original: Attractor.\n",
+ "\t Wall 0.96\n",
+ "\t East 0.01\n",
+ "\t Planet 0.00\n",
+ "\n",
+ "The Moon is the only [MASK] of the Earth.\n",
+ "original: satellite\n",
+ "\t satellite 0.38\n",
+ "\t moon 0.31\n",
+ "\t constituent 0.07\n",
+ "\n",
+ "Galaxies are categorized according to their visual morphology as [MASK] , spiral, or irregular.\n",
+ "original: elliptical,\n",
+ "\t elliptical 0.92\n",
+ "\t spheroidal 0.02\n",
+ "\t irregular 0.01\n",
+ "\n",
+ "Stars are made mostly of [MASK] .\n",
+ "original: hydrogen.\n",
+ "\t hydrogen 0.20\n",
+ "\t helium 0.14\n",
+ "\t carbon 0.12\n",
+ "\n",
+ "Comet tails are created as comets approach the [MASK] .\n",
+ "original: Sun.\n",
+ "\t Sun 0.45\n",
+ "\t sun 0.23\n",
+ "\t Earth 0.19\n",
+ "\n",
+ "Pluto is a dwarf [MASK] in the Kuiper Belt.\n",
+ "original: planet\n",
+ "\t planet 0.96\n",
+ "\t satellite 0.02\n",
+ "\t nova 0.00\n",
+ "\n",
+ "The Milky Way has a [MASK] black hole, Sagittarius A*, at its center.\n",
+ "original: supermassive\n",
+ "\t supermassive 0.80\n",
+ "\t massive 0.17\n",
+ "\t stellar 0.00\n",
+ "\n",
+ "Andromeda is the nearest large [MASK] to the Milky Way and is roughly its equal in mass.\n",
+ "original: galaxy\n",
+ "\t galaxy 0.68\n",
+ "\t spiral 0.12\n",
+ "\t satellite 0.09\n",
+ "\n",
+ "The [MASK] medium is the gas and dust between stars.\n",
+ "original: interstellar\n",
+ "\t interstellar 0.87\n",
+ "\t interplanetary 0.05\n",
+ "\t intracluster 0.03\n",
+ "\n",
+ "The cosmic microwave background (CMB, CMBR), in Big Bang cosmology, is electromagnetic radiation which is a remnant from an early stage of the [MASK] .\n",
+ "original: universe.\n",
+ "\t universe 0.45\n",
+ "\t Universe 0.26\n",
+ "\t expansion 0.09\n",
+ "\n",
+ "The Large and Small Magellanic Clouds are irregular [MASK] galaxies and are two satellite galaxies of the Milky Way.\n",
+ "original: dwarf\n",
+ "\t dwarf 0.68\n",
+ "\t satellite 0.13\n",
+ "\t Magellanic 0.08\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "for w, s, rs in zip(masked_words, masked_sentences, results):\n",
+ " print(s)\n",
+ " print('original: {}'.format(w))\n",
+ " for r in rs:\n",
+ " print('\\t {} {:0.2f}'.format(r['token_str'], r['score']))\n",
+ " print()"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.8.5"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+ }
Tutorials/0_Embeddings.ipynb CHANGED
@@ -40,7 +40,7 @@
  "name": "stderr",
  "output_type": "stream",
  "text": [
- "2022-10-19 10:05:02.842926: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0\n"
+ "2022-10-20 16:07:24.705905: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0\n"
  ]
  }
  ],
@@ -107,7 +107,7 @@
  "name": "stderr",
  "output_type": "stream",
  "text": [
- "Some weights of the model checkpoint at adsabs/astroBERT were not used when initializing BertModel: ['cls.predictions.transform.dense.bias', 'cls.predictions.decoder.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.seq_relationship.bias', 'cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.decoder.weight']\n",
+ "Some weights of the model checkpoint at adsabs/astroBERT were not used when initializing BertModel: ['cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.seq_relationship.weight', 'cls.predictions.decoder.weight', 'cls.seq_relationship.bias', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.bias', 'cls.predictions.transform.LayerNorm.weight']\n",
  "- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n",
  "- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n"
  ]
config.json CHANGED
@@ -1,7 +1,7 @@
  {
- "_name_or_path": "adsabs/astroBERT",
+ "_name_or_path": "../astroBERT-Tasks/Finetuning_1_NER/trained-models/NER_astroBERT_all_labeled_data_run01/checkpoint-169600/",
  "architectures": [
- "BertForPreTraining"
+ "BertForTokenClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
@@ -9,8 +9,138 @@
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
+ "id2label": {
+ "0": "LABEL_0",
+ "1": "LABEL_1",
+ "2": "LABEL_2",
+ "3": "LABEL_3",
+ "4": "LABEL_4",
+ "5": "LABEL_5",
+ "6": "LABEL_6",
+ "7": "LABEL_7",
+ "8": "LABEL_8",
+ "9": "LABEL_9",
+ "10": "LABEL_10",
+ "11": "LABEL_11",
+ "12": "LABEL_12",
+ "13": "LABEL_13",
+ "14": "LABEL_14",
+ "15": "LABEL_15",
+ "16": "LABEL_16",
+ "17": "LABEL_17",
+ "18": "LABEL_18",
+ "19": "LABEL_19",
+ "20": "LABEL_20",
+ "21": "LABEL_21",
+ "22": "LABEL_22",
+ "23": "LABEL_23",
+ "24": "LABEL_24",
+ "25": "LABEL_25",
+ "26": "LABEL_26",
+ "27": "LABEL_27",
+ "28": "LABEL_28",
+ "29": "LABEL_29",
+ "30": "LABEL_30",
+ "31": "LABEL_31",
+ "32": "LABEL_32",
+ "33": "LABEL_33",
+ "34": "LABEL_34",
+ "35": "LABEL_35",
+ "36": "LABEL_36",
+ "37": "LABEL_37",
+ "38": "LABEL_38",
+ "39": "LABEL_39",
+ "40": "LABEL_40",
+ "41": "LABEL_41",
+ "42": "LABEL_42",
+ "43": "LABEL_43",
+ "44": "LABEL_44",
+ "45": "LABEL_45",
+ "46": "LABEL_46",
+ "47": "LABEL_47",
+ "48": "LABEL_48",
+ "49": "LABEL_49",
+ "50": "LABEL_50",
+ "51": "LABEL_51",
+ "52": "LABEL_52",
+ "53": "LABEL_53",
+ "54": "LABEL_54",
+ "55": "LABEL_55",
+ "56": "LABEL_56",
+ "57": "LABEL_57",
+ "58": "LABEL_58",
+ "59": "LABEL_59",
+ "60": "LABEL_60",
+ "61": "LABEL_61",
+ "62": "LABEL_62"
+ },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
+ "label2id": {
+ "LABEL_0": 0,
+ "LABEL_1": 1,
+ "LABEL_10": 10,
+ "LABEL_11": 11,
+ "LABEL_12": 12,
+ "LABEL_13": 13,
+ "LABEL_14": 14,
+ "LABEL_15": 15,
+ "LABEL_16": 16,
+ "LABEL_17": 17,
+ "LABEL_18": 18,
+ "LABEL_19": 19,
+ "LABEL_2": 2,
+ "LABEL_20": 20,
+ "LABEL_21": 21,
+ "LABEL_22": 22,
+ "LABEL_23": 23,
+ "LABEL_24": 24,
+ "LABEL_25": 25,
+ "LABEL_26": 26,
+ "LABEL_27": 27,
+ "LABEL_28": 28,
+ "LABEL_29": 29,
+ "LABEL_3": 3,
+ "LABEL_30": 30,
+ "LABEL_31": 31,
+ "LABEL_32": 32,
+ "LABEL_33": 33,
+ "LABEL_34": 34,
+ "LABEL_35": 35,
+ "LABEL_36": 36,
+ "LABEL_37": 37,
+ "LABEL_38": 38,
+ "LABEL_39": 39,
+ "LABEL_4": 4,
+ "LABEL_40": 40,
+ "LABEL_41": 41,
+ "LABEL_42": 42,
+ "LABEL_43": 43,
+ "LABEL_44": 44,
+ "LABEL_45": 45,
+ "LABEL_46": 46,
+ "LABEL_47": 47,
+ "LABEL_48": 48,
+ "LABEL_49": 49,
+ "LABEL_5": 5,
+ "LABEL_50": 50,
+ "LABEL_51": 51,
+ "LABEL_52": 52,
+ "LABEL_53": 53,
+ "LABEL_54": 54,
+ "LABEL_55": 55,
+ "LABEL_56": 56,
+ "LABEL_57": 57,
+ "LABEL_58": 58,
+ "LABEL_59": 59,
+ "LABEL_6": 6,
+ "LABEL_60": 60,
+ "LABEL_61": 61,
+ "LABEL_62": 62,
+ "LABEL_7": 7,
+ "LABEL_8": 8,
+ "LABEL_9": 9
+ },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
@@ -18,6 +148,7 @@
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
+ "return_dict": false,
  "torch_dtype": "float32",
  "transformers_version": "4.17.0",
  "type_vocab_size": 2,
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3268b70d3529eb896bf43c0c8fa933f6282b82fa196e2659ab116d49fbbca6ac
- size 438904355
+ oid sha256:069bc02d7c193d06fc9d23ce25f59129ff2c6724d424d60d600343e52f9e12f7
+ size 434238321
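The updated config.json above registers this checkpoint as a BertForTokenClassification head with 63 generic labels (LABEL_0 through LABEL_62); the commit does not say what each label means. A minimal plain-Python sketch (no model download; the prediction ids are made up for illustration) of how the config's id2label mapping turns per-token class ids into label strings:

```python
# Rebuild the id2label mapping from this commit's config.json:
# 63 generic entries, "0" -> "LABEL_0" ... "62" -> "LABEL_62".
id2label = {i: f"LABEL_{i}" for i in range(63)}

def decode_predictions(pred_ids, mapping):
    """Map per-token class ids (e.g. argmax over the 63 logits) to labels."""
    return [mapping[i] for i in pred_ids]

# Hypothetical per-token predictions for a 3-token input:
print(decode_predictions([0, 7, 62], id2label))  # ['LABEL_0', 'LABEL_7', 'LABEL_62']
```

With transformers, the same mapping is what a token-classification pipeline consults internally when it reports entity names, which is why this checkpoint's output shows generic LABEL_N strings unless id2label is overridden with the real NER tag names.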