{
"cells": [
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"import os; os.chdir('..')"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"from keys import get_similarity_against"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"main_query= \"water intoxication\"\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"sentence1= '''Water intoxication, also known as water poisoning, hyperhydration, overhydration, or water toxemia, is a potentially fatal disturbance in brain functions that results when the normal balance of electrolytes in the body is pushed outside safe limits by excessive water intake.\n",
"\n",
"Under normal circumstances, accidentally consuming too much water is exceptionally rare. Nearly all deaths related to water intoxication in normal individuals have resulted either from water-drinking contests, in which individuals attempt to consume large amounts of water, or from long bouts of exercise during which excessive amounts of fluid were consumed.[1] In addition, water cure, a method of torture in which the victim is forced to consume excessive amounts of water, can cause water intoxication.[1]\n",
"\n",
"Water, like any other substance, can be considered a poison when over-consumed in a brief period of time. Water intoxication mostly occurs when water is being consumed in a high quantity without adequate electrolyte intake.[2]\n",
"\n",
"Excess of body water may also be a result of a medical condition or improper treatment; see \"hyponatremia\" for some examples. Water is considered one of the least toxic chemical compounds, with an LD50 exceeding 90 ml/kg in rats;[3] drinking six liters in three hours has caused the death of a human.[4]'''\n",
"\n",
"\n",
"sentence2= '''Hyponatremia, colloquially termed aqua inebriation, or aqueous toxemia, represents a perilous derangement of cerebral functions. It ensues when the delicate equilibrium of bodily electrolytes is jolted beyond secure thresholds by an extravagant indulgence in aqueous libations.\n",
"\n",
"In ordinary circumstances, inadvertent indulgence in excessive aqueous elixir is exceedingly exceptional. Virtually all instances of aqueous inebriation-related demises in average individuals have stemmed either from aquatic imbiber duels, wherein contenders vie to imbibe copious volumes of water, or from prolonged stints of exertion accompanied by the immoderate ingestion of fluids. Additionally, aqua torment, a tormenting methodology in which the sufferer is coerced into partaking of profuse quantities of water, can precipitate aqueous inebriation.\n",
"\n",
"H2O, akin to any other substance, may be deemed venomous when extravagantly consumed within a concise temporal interval. Aqua intoxication predominantly manifests itself when an exorbitant quantum of water is ingested without commensurate electrolytic supplementation.\n",
"\n",
"A superabundance of corporeal aqueous content might also emerge as an outcome of a medical ailment or inept therapy; for illustrative instances, refer to \"hyponatremia.\" Water is reckoned as one of the least virulent chemical compounds, boasting an LD50 that surpasses 90 ml/kg in rodents. Consuming six liters in a mere three hours has led to the demise of a human.'''"
]
},
{
"cell_type": "code",
"execution_count": 64,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Water intoxication, also known as water poisoning, hyperhydration, overhydration, or water toxemia, is a potentially fatal disturbance in brain functions that results when the normal balance of electrolytes in the body is pushed outside safe limits by excessive water intake.\\n\\nUnder normal circumstances, accidentally consuming too much water is exceptionally rare. Nearly all deaths related to water intoxication in normal individuals have resulted either from water-drinking contests, in which individuals attempt to consume large amounts of water, or from long bouts of exercise during which excessive amounts of fluid were consumed.[1] In addition, water cure, a method of torture in which the victim is forced to consume excessive amounts of water, can cause water intoxication.[1]\\n\\nWater, like any other substance, can be considered a poison when over-consumed in a brief period of time. Water intoxication mostly occurs when water is being consumed in a high quantity without adequate electrolyte intake.[2]\\n\\nExcess of body water may also be a result of a medical condition or improper treatment; see \"hyponatremia\" for some examples. Water is considered one of the least toxic chemical compounds, with an LD50 exceeding 90 ml/kg in rats;[3] drinking six liters in three hours has caused the death of a human.[4]'"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sentence1"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Hyponatremia, colloquially termed aqua inebriation, or aqueous toxemia, represents a perilous derangement of cerebral functions. It ensues when the delicate equilibrium of bodily electrolytes is jolted beyond secure thresholds by an extravagant indulgence in aqueous libations.\\n\\nIn ordinary circumstances, inadvertent indulgence in excessive aqueous elixir is exceedingly exceptional. Virtually all instances of aqueous inebriation-related demises in average individuals have stemmed either from aquatic imbiber duels, wherein contenders vie to imbibe copious volumes of water, or from prolonged stints of exertion accompanied by the immoderate ingestion of fluids. Additionally, aqua torment, a tormenting methodology in which the sufferer is coerced into partaking of profuse quantities of water, can precipitate aqueous inebriation.\\n\\nH2O, akin to any other substance, may be deemed venomous when extravagantly consumed within a concise temporal interval. Aqua intoxication predominantly manifests itself when an exorbitant quantum of water is ingested without commensurate electrolytic supplementation.\\n\\nA superabundance of corporeal aqueous content might also emerge as an outcome of a medical ailment or inept therapy; for illustrative instances, refer to \"hyponatremia.\" Water is reckoned as one of the least virulent chemical compounds, boasting an LD50 that surpasses 90 ml/kg in rodents. Consuming six liters in a mere three hours has led to the demise of a human.'"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sentence2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Method 1: `Direct Similarity of Embeddings`"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"import json"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"response= requests.post(url= \"https://embeddings.paperbot.ai/get-similarity-against\",\n",
" json={\n",
" \"main_entity\": main_query, \n",
" \"compare_with\": [sentence1, sentence2]\n",
" })"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[0.94, 0.9]"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"json.loads(response.content.decode(\"utf-8\"))['similarity']\n"
]
},
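{
"cell_type": "markdown",
"metadata": {},
"source": [
"The endpoint above is a private service, so here is a rough local sketch of the same idea (an assumption, not the endpoint's actual implementation): cosine similarity between sentence embeddings via the `sentence-transformers` package. The `all-MiniLM-L6-v2` model name is an illustrative choice."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hedged local sketch of Method 1: embed the query and both passages, then\n",
"# compare with cosine similarity. Assumes `pip install sentence-transformers`;\n",
"# the model name is an illustrative choice, not the endpoint's actual model.\n",
"from sentence_transformers import SentenceTransformer, util\n",
"\n",
"embedder = SentenceTransformer(\"all-MiniLM-L6-v2\")\n",
"query_emb = embedder.encode(main_query, convert_to_tensor=True)\n",
"passage_embs = embedder.encode([sentence1, sentence2], convert_to_tensor=True)\n",
"util.cos_sim(query_emb, passage_embs)  # 1x2 tensor of similarity scores"
]
},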
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Method 1: `BERT Question-Answering`\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline\n",
"\n",
"model_name = \"deepset/roberta-base-squad2\""
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/ubuntu/SentenceStructureComparision/venv/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
" from .autonotebook import tqdm as notebook_tqdm\n"
]
}
],
"source": [
"\n",
"\n",
"# a) Get predictions\n",
"nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'score': 0.001219886471517384,\n",
" 'start': 252,\n",
" 'end': 274,\n",
" 'answer': 'excessive water intake'}"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"QA_input = {\n",
" 'question': main_query,\n",
" 'context': sentence1\n",
"}\n",
"res = nlp(QA_input)\n",
"\n",
"res"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'score': 2.8929189284099266e-05,\n",
" 'start': 958,\n",
" 'end': 975,\n",
" 'answer': 'Aqua intoxication'}"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"QA_input = {\n",
" 'question': main_query,\n",
" 'context': sentence2\n",
"}\n",
"res = nlp(QA_input)\n",
"\n",
"res"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Perplexity"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Some weights of RobertaForMaskedLM were not initialized from the model checkpoint at deepset/roberta-base-squad2 and are newly initialized: ['lm_head.dense.weight', 'lm_head.layer_norm.bias', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.decoder.bias', 'lm_head.bias']\n",
"You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n"
]
}
],
"source": [
"import torch\n",
"from transformers import AutoTokenizer, AutoModelForMaskedLM\n",
"tokenizer = AutoTokenizer.from_pretrained(model_name)\n",
"model = AutoModelForMaskedLM.from_pretrained(model_name).to(\"cuda\")\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"RobertaConfig {\n",
" \"_name_or_path\": \"deepset/roberta-base-squad2\",\n",
" \"architectures\": [\n",
" \"RobertaForQuestionAnswering\"\n",
" ],\n",
" \"attention_probs_dropout_prob\": 0.1,\n",
" \"bos_token_id\": 0,\n",
" \"classifier_dropout\": null,\n",
" \"eos_token_id\": 2,\n",
" \"gradient_checkpointing\": false,\n",
" \"hidden_act\": \"gelu\",\n",
" \"hidden_dropout_prob\": 0.1,\n",
" \"hidden_size\": 768,\n",
" \"initializer_range\": 0.02,\n",
" \"intermediate_size\": 3072,\n",
" \"language\": \"english\",\n",
" \"layer_norm_eps\": 1e-05,\n",
" \"max_position_embeddings\": 514,\n",
" \"model_type\": \"roberta\",\n",
" \"name\": \"Roberta\",\n",
" \"num_attention_heads\": 12,\n",
" \"num_hidden_layers\": 12,\n",
" \"pad_token_id\": 1,\n",
" \"position_embedding_type\": \"absolute\",\n",
" \"transformers_version\": \"4.34.0\",\n",
" \"type_vocab_size\": 1,\n",
" \"use_cache\": true,\n",
" \"vocab_size\": 50265\n",
"}"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model.config"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"def calculate_perplexity(sentence):\n",
" inputs = tokenizer(sentence, return_tensors='pt').to(\"cuda\")\n",
" with torch.no_grad():\n",
" outputs = model(**inputs, labels=inputs['input_ids'])\n",
" print(outputs)\n",
" loss = outputs.loss # cross entropy loss ----> assuming our model gives us probabilities of different words\n",
" perplexity = torch.exp(loss)\n",
" return perplexity.item()"
]
},
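{
"cell_type": "markdown",
"metadata": {},
"source": [
"The warning above says this checkpoint's masked-LM head is newly initialized, so `outputs.loss` is not a calibrated language-model loss (hence the astronomically large values below). The standard alternative for masked LMs is *pseudo-perplexity*: mask one position at a time and score the true token there. A minimal sketch, assuming a properly pretrained MLM (e.g. `roberta-base`) is loaded into `model`/`tokenizer`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def pseudo_perplexity(sentence, max_tokens=128):\n",
"    # Mask each position in turn, score the true token there, and exponentiate\n",
"    # the mean negative log-likelihood (pseudo-perplexity). Sketch only: one\n",
"    # forward pass per token, truncated to `max_tokens` for speed.\n",
"    enc = tokenizer(sentence, return_tensors=\"pt\", truncation=True,\n",
"                    max_length=max_tokens).to(\"cuda\")\n",
"    input_ids = enc[\"input_ids\"]\n",
"    nlls = []\n",
"    for pos in range(1, input_ids.size(1) - 1):  # skip <s> and </s>\n",
"        masked = input_ids.clone()\n",
"        masked[0, pos] = tokenizer.mask_token_id\n",
"        with torch.no_grad():\n",
"            logits = model(masked).logits\n",
"        log_probs = torch.log_softmax(logits[0, pos], dim=-1)\n",
"        nlls.append(-log_probs[input_ids[0, pos]])\n",
"    return torch.exp(torch.stack(nlls).mean()).item()"
]
},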
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"MaskedLMOutput(loss=tensor(17.8039, device='cuda:0'), logits=tensor([[[ 0.3795, 0.3104, -0.5476, ..., 0.6206, 1.2048, 0.2867],\n",
" [ 6.1099, 0.2996, -3.4115, ..., 0.1186, -2.1274, -0.5033],\n",
" [ 3.1611, 0.4855, -1.8712, ..., 1.0915, -1.1092, 0.9038],\n",
" ...,\n",
" [ 4.7616, 0.5287, -4.6328, ..., 0.6480, -2.6257, 0.2898],\n",
" [-1.5800, 0.9459, -4.8465, ..., -1.5705, -3.4567, -1.7680],\n",
" [-2.2582, 0.7316, -5.1463, ..., -2.0331, -4.5188, -0.8688]]],\n",
" device='cuda:0'), hidden_states=None, attentions=None)\n",
"Perplexity of the sentence1: 53969640.0\n",
"MaskedLMOutput(loss=tensor(17.1493, device='cuda:0'), logits=tensor([[[ 0.2888, 0.1490, -0.1353, ..., 0.9108, 1.6271, 0.4736],\n",
" [ 7.6726, 0.6179, -2.9211, ..., 1.1359, -1.1669, 0.7527],\n",
" [ 3.2592, 0.6674, -4.1829, ..., -1.3289, -2.9414, 0.0205],\n",
" ...,\n",
" [ 1.4004, 0.5725, -3.3266, ..., -0.8717, -3.7206, -0.4705],\n",
" [-1.4701, 0.8628, -5.1591, ..., -1.6357, -3.4620, -1.6146],\n",
" [-2.7571, 0.6047, -5.1016, ..., -1.9740, -4.2890, -1.0212]]],\n",
" device='cuda:0'), hidden_states=None, attentions=None)\n",
"Perplexity of the sentence2: 28044616.0\n"
]
}
],
"source": [
"print(f'Perplexity of the sentence1: {calculate_perplexity(sentence1)}')\n",
"print(f'Perplexity of the sentence2: {calculate_perplexity(sentence2)}')"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'input_ids': tensor([[ 0, 25589, 34205, 6, 67, 684, 25, 514, 15000, 6,\n",
" 8944, 30420, 8475, 6, 81, 30420, 8475, 6, 50, 514,\n",
" 46334, 23249, 6, 16, 10, 2905, 6484, 20771, 11, 2900,\n",
" 8047, 14, 775, 77, 5, 2340, 2394, 9, 39875, 12782,\n",
" 11, 5, 809, 16, 3148, 751, 1522, 4971, 30, 10079,\n",
" 514, 14797, 4, 50118, 50118, 17245, 2340, 4215, 6, 13636,\n",
" 16997, 350, 203, 514, 16, 20135, 3159, 4, 9221, 70,\n",
" 3257, 1330, 7, 514, 34205, 11, 2340, 2172, 33, 4596,\n",
" 1169, 31, 514, 12, 10232, 18957, 11997, 6, 11, 61,\n",
" 2172, 2120, 7, 14623, 739, 5353, 9, 514, 6, 50,\n",
" 31, 251, 24750, 9, 3325, 148, 61, 10079, 5353, 9,\n",
" 12293, 58, 13056, 31274, 134, 742, 96, 1285, 6, 514,\n",
" 13306, 6, 10, 5448, 9, 11809, 11, 61, 5, 1802,\n",
" 16, 1654, 7, 14623, 10079, 5353, 9, 514, 6, 64,\n",
" 1303, 514, 34205, 31274, 134, 742, 50118, 50118, 25589, 6,\n",
" 101, 143, 97, 6572, 6, 64, 28, 1687, 10, 17712,\n",
" 77, 81, 12, 10998, 28817, 11, 10, 4315, 675, 9,\n",
" 86, 4, 3201, 34205, 2260, 11493, 77, 514, 16, 145,\n",
" 13056, 11, 10, 239, 16363, 396, 9077, 39875, 859, 14797,\n",
" 31274, 176, 742, 50118, 50118, 9089, 19348, 9, 809, 514,\n",
" 189, 67, 28, 10, 898, 9, 10, 1131, 1881, 50,\n",
" 18418, 1416, 131, 192, 22, 33027, 261, 415, 5593, 493,\n",
" 113, 13, 103, 7721, 4, 3201, 16, 1687, 65, 9,\n",
" 5, 513, 8422, 4747, 18291, 6, 19, 41, 34744, 1096,\n",
" 17976, 1814, 36769, 73, 9043, 11, 24162, 131, 10975, 246,\n",
" 742, 4835, 411, 6474, 268, 11, 130, 722, 34, 1726,\n",
" 5, 744, 9, 10, 1050, 31274, 306, 742, 2]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\n",
" 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\n",
" 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\n",
" 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\n",
" 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\n",
" 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\n",
" 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\n",
" 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\n",
" 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\n",
" 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\n",
" 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\n",
" 1, 1, 1, 1, 1]])}"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"encodings_sentence1= tokenizer(sentence1, return_tensors=\"pt\")\n",
"encodings_sentence1"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"seq_len = 269\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
" 0%| | 0/269 [00:00<?, ?it/s]\n"
]
}
],
"source": [
"import torch\n",
"from tqdm import tqdm\n",
"\n",
"# max_length = model.config.n_positions\n",
"max_length= model.config.max_position_embeddings\n",
"stride = 1\n",
"seq_len = encodings_sentence1.input_ids.size(1)\n",
"\n",
"print(f\"seq_len = {seq_len}\")\n",
"\n",
"nlls = []\n",
"prev_end_loc = 0\n",
"for begin_loc in tqdm(range(0, seq_len, stride)):\n",
" end_loc = min(begin_loc + max_length, seq_len)\n",
" trg_len = end_loc - prev_end_loc # may be different from stride on last loop\n",
" input_ids = encodings_sentence1.input_ids[:, begin_loc:end_loc].to(\"cuda\")\n",
" target_ids = input_ids.clone()\n",
" target_ids[:, :-trg_len] = -100\n",
"\n",
" with torch.no_grad():\n",
" outputs = model(input_ids, labels=target_ids)\n",
"\n",
" # loss is calculated using CrossEntropyLoss which averages over valid labels\n",
" # N.B. the model only calculates loss over trg_len - 1 labels, because it internally shifts the labels\n",
" # to the left by 1.\n",
" neg_log_likelihood = outputs.loss\n",
"\n",
" nlls.append(neg_log_likelihood)\n",
"\n",
" prev_end_loc = end_loc\n",
" if end_loc == seq_len:\n",
" break\n",
"\n",
"ppl = torch.exp(torch.stack(nlls).mean())"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"tensor(53969640., device='cuda:0')"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"ppl"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"seq_len = 324\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
" 0%| | 0/324 [00:00<?, ?it/s]\n"
]
},
{
"data": {
"text/plain": [
"tensor(28044616., device='cuda:0')"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"encodings_sentence2= tokenizer(sentence2, return_tensors=\"pt\")\n",
"encodings_sentence2\n",
"import torch\n",
"from tqdm import tqdm\n",
"\n",
"# max_length = model.config.n_positions\n",
"max_length= model.config.max_position_embeddings\n",
"stride = 1\n",
"seq_len = encodings_sentence2.input_ids.size(1)\n",
"\n",
"print(f\"seq_len = {seq_len}\")\n",
"\n",
"nlls = []\n",
"prev_end_loc = 0\n",
"for begin_loc in tqdm(range(0, seq_len, stride)):\n",
" end_loc = min(begin_loc + max_length, seq_len)\n",
" trg_len = end_loc - prev_end_loc # may be different from stride on last loop\n",
" input_ids = encodings_sentence2.input_ids[:, begin_loc:end_loc].to(\"cuda\")\n",
" target_ids = input_ids.clone()\n",
" target_ids[:, :-trg_len] = -100\n",
"\n",
" with torch.no_grad():\n",
" outputs = model(input_ids, labels=target_ids)\n",
"\n",
" # loss is calculated using CrossEntropyLoss which averages over valid labels\n",
" # N.B. the model only calculates loss over trg_len - 1 labels, because it internally shifts the labels\n",
" # to the left by 1.\n",
" neg_log_likelihood = outputs.loss\n",
"\n",
" nlls.append(neg_log_likelihood)\n",
"\n",
" prev_end_loc = end_loc\n",
" if end_loc == seq_len:\n",
" break\n",
"\n",
"ppl = torch.exp(torch.stack(nlls).mean())\n",
"ppl"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"seq_len = 4\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
" 0%| | 0/4 [00:00<?, ?it/s]\n"
]
},
{
"data": {
"text/plain": [
"tensor(67252448., device='cuda:0')"
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"encodings_sentence2= tokenizer(\"Good morning\", return_tensors=\"pt\")\n",
"encodings_sentence2\n",
"import torch\n",
"from tqdm import tqdm\n",
"\n",
"# max_length = model.config.n_positions\n",
"max_length= model.config.max_position_embeddings\n",
"stride = 1\n",
"seq_len = encodings_sentence2.input_ids.size(1)\n",
"\n",
"print(f\"seq_len = {seq_len}\")\n",
"\n",
"nlls = []\n",
"prev_end_loc = 0\n",
"for begin_loc in tqdm(range(0, seq_len, stride)):\n",
" end_loc = min(begin_loc + max_length, seq_len)\n",
" trg_len = end_loc - prev_end_loc # may be different from stride on last loop\n",
" input_ids = encodings_sentence2.input_ids[:, begin_loc:end_loc].to(\"cuda\")\n",
" target_ids = input_ids.clone()\n",
" target_ids[:, :-trg_len] = -100\n",
"\n",
" with torch.no_grad():\n",
" outputs = model(input_ids, labels=target_ids)\n",
"\n",
" # loss is calculated using CrossEntropyLoss which averages over valid labels\n",
" # N.B. the model only calculates loss over trg_len - 1 labels, because it internally shifts the labels\n",
" # to the left by 1.\n",
" neg_log_likelihood = outputs.loss\n",
"\n",
" nlls.append(neg_log_likelihood)\n",
"\n",
" prev_end_loc = end_loc\n",
" if end_loc == seq_len:\n",
" break\n",
"\n",
"ppl = torch.exp(torch.stack(nlls).mean())\n",
"ppl"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/ubuntu/SentenceStructureComparision/venv/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
" from .autonotebook import tqdm as notebook_tqdm\n",
"Downloading (…)lve/main/config.json: 100%|██████████| 666/666 [00:00<00:00, 4.11MB/s]\n",
"Downloading model.safetensors: 100%|██████████| 3.25G/3.25G [00:36<00:00, 89.3MB/s]\n",
"Downloading (…)neration_config.json: 100%|██████████| 124/124 [00:00<00:00, 75.5kB/s]\n",
"Downloading (…)olve/main/vocab.json: 100%|██████████| 1.04M/1.04M [00:00<00:00, 4.88MB/s]\n",
"Downloading (…)olve/main/merges.txt: 100%|██████████| 456k/456k [00:00<00:00, 51.0MB/s]\n",
"Downloading (…)/main/tokenizer.json: 100%|██████████| 1.36M/1.36M [00:00<00:00, 70.6MB/s]\n"
]
}
],
"source": [
"from transformers import GPT2LMHeadModel, GPT2TokenizerFast\n",
"\n",
"device = \"cuda\"\n",
"model_id = \"gpt2-large\"\n",
"model = GPT2LMHeadModel.from_pretrained(model_id).to(device)\n",
"tokenizer = GPT2TokenizerFast.from_pretrained(model_id)\n",
"encodings = tokenizer(sentence1, return_tensors=\"pt\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
" 0%| | 0/1 [00:00<?, ?it/s]\n"
]
}
],
"source": [
"import torch\n",
"from tqdm import tqdm\n",
"\n",
"max_length = model.config.n_positions\n",
"stride = 512\n",
"seq_len = encodings.input_ids.size(1)\n",
"\n",
"nlls = []\n",
"prev_end_loc = 0\n",
"for begin_loc in tqdm(range(0, seq_len, stride)):\n",
" end_loc = min(begin_loc + max_length, seq_len)\n",
" trg_len = end_loc - prev_end_loc # may be different from stride on last loop\n",
" input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device)\n",
" target_ids = input_ids.clone()\n",
" target_ids[:, :-trg_len] = -100\n",
"\n",
" with torch.no_grad():\n",
" outputs = model(input_ids, labels=target_ids)\n",
"\n",
" # loss is calculated using CrossEntropyLoss which averages over valid labels\n",
" # N.B. the model only calculates loss over trg_len - 1 labels, because it internally shifts the labels\n",
" # to the left by 1.\n",
" neg_log_likelihood = outputs.loss\n",
"\n",
" nlls.append(neg_log_likelihood)\n",
"\n",
" prev_end_loc = end_loc\n",
" if end_loc == seq_len:\n",
" break\n",
"\n",
"ppl = torch.exp(torch.stack(nlls).mean())"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"tensor(12.3761, device='cuda:0')"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"ppl"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"seq_len = 322\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
" 0%| | 0/1 [00:00<?, ?it/s]\n"
]
},
{
"data": {
"text/plain": [
"tensor(30.3624, device='cuda:0')"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"encodings_sentence2= tokenizer(sentence2, return_tensors=\"pt\")\n",
"encodings_sentence2\n",
"import torch\n",
"from tqdm import tqdm\n",
"\n",
"# max_length = model.config.n_positions\n",
"max_length= model.config.max_position_embeddings\n",
"stride = 512\n",
"seq_len = encodings_sentence2.input_ids.size(1)\n",
"\n",
"print(f\"seq_len = {seq_len}\")\n",
"\n",
"nlls = []\n",
"prev_end_loc = 0\n",
"for begin_loc in tqdm(range(0, seq_len, stride)):\n",
" end_loc = min(begin_loc + max_length, seq_len)\n",
" trg_len = end_loc - prev_end_loc # may be different from stride on last loop\n",
" input_ids = encodings_sentence2.input_ids[:, begin_loc:end_loc].to(\"cuda\")\n",
" target_ids = input_ids.clone()\n",
" target_ids[:, :-trg_len] = -100\n",
"\n",
" with torch.no_grad():\n",
" outputs = model(input_ids, labels=target_ids)\n",
"\n",
" # loss is calculated using CrossEntropyLoss which averages over valid labels\n",
" # N.B. the model only calculates loss over trg_len - 1 labels, because it internally shifts the labels\n",
" # to the left by 1.\n",
" neg_log_likelihood = outputs.loss\n",
"\n",
" nlls.append(neg_log_likelihood)\n",
"\n",
" prev_end_loc = end_loc\n",
" if end_loc == seq_len:\n",
" break\n",
"\n",
"ppl = torch.exp(torch.stack(nlls).mean())\n",
"ppl"
]
},
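{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same sliding-window loop is pasted several times above; the sketch below consolidates it into a single helper with the same logic (an illustrative refactor; `getattr` lets it handle both GPT-2's `n_positions` and RoBERTa's `max_position_embeddings`):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def sliding_window_perplexity(model, tokenizer, text, stride=512, device=\"cuda\"):\n",
"    # Same logic as the repeated cells above: score `text` window by window,\n",
"    # masking already-scored tokens with -100, then exponentiate the mean NLL.\n",
"    encodings = tokenizer(text, return_tensors=\"pt\")\n",
"    max_length = getattr(model.config, \"n_positions\",\n",
"                         model.config.max_position_embeddings)\n",
"    seq_len = encodings.input_ids.size(1)\n",
"    nlls = []\n",
"    prev_end_loc = 0\n",
"    for begin_loc in range(0, seq_len, stride):\n",
"        end_loc = min(begin_loc + max_length, seq_len)\n",
"        trg_len = end_loc - prev_end_loc  # may differ from stride on the last window\n",
"        input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device)\n",
"        target_ids = input_ids.clone()\n",
"        target_ids[:, :-trg_len] = -100  # only score tokens new to this window\n",
"        with torch.no_grad():\n",
"            nlls.append(model(input_ids, labels=target_ids).loss)\n",
"        prev_end_loc = end_loc\n",
"        if end_loc == seq_len:\n",
"            break\n",
"    return torch.exp(torch.stack(nlls).mean()).item()\n",
"\n",
"sliding_window_perplexity(model, tokenizer, sentence1)"
]
},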
{
"cell_type": "code",
"execution_count": 55,
"metadata": {},
"outputs": [],
"source": [
"s1= '''Imagine you’re trying to build a chatbot that helps home cooks autocomplete their grocery shopping lists based on popular flavor combinations from social media. Your goal is to let users type in what they have in their fridge, like “chicken, carrots,” then list the five or six ingredients that go best with those flavors. You’ve already scraped thousands of recipe sites for ingredient lists, and now you just need to choose the best NLP model to predict which words appear together most often. Easy, right?\n",
"\n",
"Well, not exactly. The gold standard for checking the performance of a model is extrinsic evaluation: measuring its final performance on a real-world task. In this case, that might mean letting your model generate a dataset of a thousand new recipes, then asking a few hundred data labelers to rate how tasty they sound.\n",
"\n",
"Unfortunately, you don’t have one dataset, you have one dataset for every variation of every parameter of every model you want to test. Even simple comparisons of the same basic model can lead to a combinatorial explosion: 3 different optimization functions with 5 different learning rates and 4 different batch sizes equals 120 different datasets, all with hundreds of thousands of individual data points. How can you quickly narrow down which models are the most promising to fully evaluate?\n",
"\n",
"Enter intrinsic evaluation: finding some property of a model that estimates the model’s quality independent of the specific tasks its used to perform. Specifically, enter perplexity, a metric that quantifies how uncertain a model is about the predictions it makes. Low perplexity only guarantees a model is confident, not accurate, but it often correlates well with the model’s final real-world performance, and it can be quickly calculated using just the probability distribution the model learns from the training dataset.\n",
"\n",
"In this week’s post, we’ll look at how perplexity is calculated, what it means intuitively for a model’s performance, and the pitfalls of using perplexity for comparisons across different datasets and models.\n",
"\n",
"Calculating perplexity\n",
"To understand how perplexity is calculated, let’s start with a very simple version of the recipe training dataset that only has four short ingredient lists:\n",
"\n",
"chicken, butter, pears\n",
"chicken, butter, chili\n",
"lemon, pears, shrimp\n",
"chili, shrimp, lemon\n",
"In machine learning terms, these sentences are a language with a vocabulary size of 6 (because there are a total of 6 unique words). A language model is just a function trained on a specific language that predicts the probability of a certain word appearing given the words that appeared around it.\n",
"\n",
"One of the simplest language models is a unigram model, which looks at words one at a time assuming they’re statistically independent. In other words, it returns the relative frequency that each word appears in the training data. Here’s a unigram model for the dataset above, which is especially simple because every word appears the same number of times:\n",
"\n",
"\n",
"It’s pretty obvious this isn’t a very good model. No matter which ingredients you say you have, it will just pick any new ingredient at random with equal probability, so you might as well be rolling a fair die to choose. Let’s quantify exactly how bad this is.\n",
"\n",
"We’re going to start by calculating how surprised our model is when it sees a single specific word like “chicken.” Intuitively, the more probable an event is, the less surprising it is. If you’re certain something is impossible — if its probability is 0 — then you would be infinitely surprised if it happened. Similarly, if something was guaranteed to happen with probability 1, your surprise when it happened would be 0.'''\n",
"\n",
"\n",
"\n",
"\n",
"# generated by gpt\n",
"\n",
"s2= '''Imagine you want to create a smart assistant for people who cook at home. This assistant should help them make shopping lists based on popular food combinations they find on social media. Your aim is to allow users to type in what ingredients they already have, like \"chicken, carrots,\" and then get suggestions for the five or six best ingredients that go well with those. To do this, you've collected lots of lists of ingredients from recipe websites. Now, your main task is to pick the best computer program that can predict which ingredients often appear together. Sounds easy, right?\n",
"\n",
"But it's not that simple. The usual way to check how well a computer program works is to see how well it does on a real task. In this case, that means having your program create a thousand new recipes and then asking a few hundred people to rate how good those recipes sound.\n",
"\n",
"Here's the problem: You don't have just one set of recipes to test your program on; you have many sets, each with different settings. Even when comparing different versions of the same program, there can be a huge number of combinations to test. For example, if you have three different ways to make the program work, five different speeds for the program to learn, and four different sizes of groups of data, you end up with 120 different sets of recipes to evaluate. Each set contains hundreds of thousands of individual data points. So, how do you quickly figure out which versions of the program are the most promising to test further?\n",
"\n",
"This is where \"intrinsic evaluation\" comes in. It means finding some quality of the program that tells you how good it is without having to do the real cooking task. In this case, it's about \"perplexity,\" which is a way to measure how uncertain the program is when it makes predictions. A low perplexity score means the program is pretty sure about its predictions, though it doesn't guarantee those predictions are correct. But it often matches up with how well the program does in real cooking tasks, and it's easy to calculate using the information the program learns from the training data.\n",
"\n",
"In this post, we'll explore how to calculate perplexity, what it indicates about the program's performance, and the problems you might face when using perplexity to compare different sets of data and programs.\n",
"\n",
"Calculating Perplexity:\n",
"To understand how perplexity works, let's start with a simple example. Imagine you have a small dataset of recipes with just four short lists of ingredients:\n",
"\n",
"1. Chicken, butter, pears\n",
"2. Chicken, butter, chili\n",
"3. Lemon, pears, shrimp\n",
"4. Chili, shrimp, lemon\n",
"\n",
"In machine learning terms, these sentences form a language with only six different words. A language model is like a computer program that has been trained on this language. It predicts the likelihood of a word appearing based on the words that came before it.\n",
"\n",
"One of the simplest language models is called a \"unigram model.\" It assumes that words are independent of each other and predicts each word's frequency based on how often it appears in the training data. Here's what a unigram model for this dataset would look like, and it's pretty basic because it assigns the same probability to every word:\n",
"\n",
"This model isn't very good. No matter what ingredients you mention, it will randomly pick a new ingredient with equal likelihood. It's like rolling a fair die to choose an ingredient. Let's measure exactly how bad it is.\n",
"\n",
"First, we'll calculate how surprised the model is when it sees a specific word like \"chicken.\" The more probable an event is, the less surprising it is. If something has zero chance of happening (probability of 0), you'd be incredibly surprised if it did. On the other hand, if something is guaranteed to happen (probability of 1), you wouldn't be surprised at all when it occurs.'''"
]
},
{
"cell_type": "code",
"execution_count": 48,
"metadata": {},
"outputs": [],
"source": [
"s1= '''Basketball is a team sport played by two teams of five players each. The primary objective is to score points by shooting the basketball through the opponent's hoop, which is mounted on a backboard 10 feet (3.048 meters) above the ground. The team with the most points at the end of the game wins. Basketball is played on a rectangular court, typically indoors, with a surface made of wood or synthetic materials. The rules and regulations are governed by various organizations, such as FIBA (International Basketball Federation) and the NBA (National Basketball Association). The following is a general outline of the basic rules of basketball:\n",
"\n",
"1. Game duration: A regulation basketball game is divided into four quarters, each lasting 12 minutes in the NBA and 10 minutes in FIBA play. College basketball in the US has two 20-minute halves. If the game is tied at the end of regulation, overtime periods are played until a winner is determined.\n",
"\n",
"2. Starting play: The game begins with a jump ball at the center of the court, where the referee throws the ball into the air, and one player from each team tries to gain possession by tapping it to a teammate.\n",
"\n",
"3. Scoring: Points are scored by shooting the ball through the hoop. A field goal made from inside the three-point arc is worth two points, while a field goal made from outside the arc is worth three points. Free throws, awarded after a foul, are worth one point each.\n",
"\n",
"4. Possession and dribbling: A player in possession of the ball must either pass it to a teammate or dribble (bounce) the ball while moving.'''"
]
},
{
"cell_type": "code",
"execution_count": 58,
"metadata": {},
"outputs": [],
"source": [
"s2= '''Political stability is the ability of a government to maintain order and authority within its borders. It is essential for economic growth, as it provides a foundation for investment and trade.\n",
"There are many factors that contribute to political stability, including:\n",
"A strong rule of law: The rule of law is the principle that everyone is subject to the same laws, regardless of their social status or political affiliation. A strong rule of law helps to prevent corruption and ensures that everyone has equal opportunity to succeed.\n",
"A well-functioning government: A well-functioning government is one that is able to provide essential services, such as security, education, and healthcare. It is also able to manage the economy effectively and to respond to crises.\n",
"A vibrant civil society: A vibrant civil society is one that is made up of active and engaged citizens. Civil society organizations can help to hold the government accountable and to promote democracy and good governance.\n",
"Political stability is not always easy to achieve, but it is essential for economic growth. By investing in political stability, we can create a foundation for long-term prosperity.\n",
"Here are some of the benefits of political stability:\n",
"Increased investment: Investors are more likely to invest in countries that are politically stable. This can lead to increased economic growth and job creation.\n",
"Improved trade: Trade between countries is easier and more efficient when there is political stability. This can lead to lower prices for consumers and increased profits for businesses.\n",
"Reduced poverty: Political stability can help to reduce poverty by creating a more conducive environment for economic growth.\n",
"Improved quality of life: Political stability can lead to improved quality of life by providing a safer and more secure environment.\n",
"Political stability is a key ingredient for a prosperous and successful society. By investing in political stability, we can create a better future for ourselves and our children.\n",
"Here are some of the challenges to political stability:\n",
"Economic inequality: Economic inequality can lead to social unrest and instability. This is because it can create a sense of injustice and resentment among those who are not benefiting from economic growth.\n",
"Corruption: Corruption can undermine the rule of law and erode public trust in government. This can lead to instability and violence.\n",
"Ethnic and religious conflict: Ethnic and religious conflict can be a major source of instability. This is because it can lead to violence, displacement, and economic disruption.\n",
"Natural disasters: Natural disasters can also be a source of instability. This is because they can displace people, damage infrastructure, and disrupt economic activity.\n",
"Despite the challenges, there are many things that can be done to promote political stability. These include:\n",
"Investing in education and healthcare: Education and healthcare can help to reduce poverty and inequality, which are two of the main causes of instability.\n",
"Promoting good governance: Good governance is essential for building trust between the government and the people. It can be promoted by strengthening the rule of law, fighting corruption, and ensuring transparency and accountability.\n",
"Resolving conflict peacefully: Conflict can be resolved peacefully through negotiation, mediation, and other means. This can help to prevent violence and instability.\n",
"Building resilience: Building resilience is essential for coping with shocks and stresses, such as economic downturns and natural disasters. It can be done by investing in infrastructure, social safety nets, and disaster preparedness.\n",
"Political stability is a complex issue, but it is essential for economic growth and prosperity. By investing in political stability, we can create a better future for ourselves and our children.'''"
]
},
{
"cell_type": "code",
"execution_count": 65,
"metadata": {},
"outputs": [],
"source": [
"s1= '''In a quiet, picturesque village nestled deep within the lush, rolling hills of the countryside, there stood a charming, centuries-old cottage, its timeworn facade adorned with colorful flowers that cascaded down from window boxes. The cottage, with its rustic charm, had seen generations come and go, witnessed countless stories unfold within its sturdy walls. Each morning, as the sun cast its golden rays upon the sleepy hamlet, the villagers would wake to the melodious chirping of birds, their cheerful songs serving as a gentle alarm clock. Life in the village was slow-paced, a stark contrast to the bustling cities with their constant noise and ceaseless activity. Time seemed to move differently here, as if the world beyond the village's borders existed in a parallel universe, always in a hurry, while the village embraced a rhythm that ebbed and flowed with the changing seasons. The villagers, bound by a strong sense of community, gathered for festivals, sharing laughter and stories around bonfires that crackled in the cool night air. Generations of families had lived in this idyllic haven, passing down stories, traditions, and the enduring spirit of the village from one age to the next, ensuring that the passage of time only deepened their connection to this place they called home.'''\n"
]
},
{
"cell_type": "code",
"execution_count": 86,
"metadata": {},
"outputs": [],
"source": [
"s1= \"\"\"The Mission Impossible franchise, a timeless icon in the world of espionage thrillers, has held audiences captive for decades with its electrifying fusion of high-stakes action, intricate espionage plots, and mind-boggling twists. Tom Cruise, a true embodiment of Ethan Hunt, the daring and ingenious secret agent, has forever etched his name alongside the series, leading a team of accomplished operatives on missions that, at first glance, appear insurmountably challenging, destined to thwart global threats. With every new installment, viewers are treated to a whirlwind of meticulously staged action sequences, Tom Cruise's jaw-dropping stunts – performed by the man himself, and a labyrinth of betrayals and double-crosses that keeps everyone on the edge of their seats, leaving them guessing until the very last, suspenseful moments. The franchise's enduring charm stems from its unyielding commitment to pushing cinematic action's boundaries, making sure that each mission remains an impossibly thrilling spectacle that unfolds relentlessly, offering a rollercoaster of excitement for fans of all ages.\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 87,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['The Mission Impossible franchise, a timeless icon in the world of espionage thrillers, has held audiences captive for decades with its electrifying fusion of high-stakes action, intricate espionage plots, and mind-boggling twists',\n",
" ' Tom Cruise, a true embodiment of Ethan Hunt, the daring and ingenious secret agent, has forever etched his name alongside the series, leading a team of accomplished operatives on missions that, at first glance, appear insurmountably challenging, destined to thwart global threats',\n",
" \" With every new installment, viewers are treated to a whirlwind of meticulously staged action sequences, Tom Cruise's jaw-dropping stunts – performed by the man himself, and a labyrinth of betrayals and double-crosses that keeps everyone on the edge of their seats, leaving them guessing until the very last, suspenseful moments\",\n",
" \" The franchise's enduring charm stems from its unyielding commitment to pushing cinematic action's boundaries, making sure that each mission remains an impossibly thrilling spectacle that unfolds relentlessly, offering a rollercoaster of excitement for fans of all ages\",\n",
" '']"
]
},
"execution_count": 87,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# number of sentences\n",
"# list_of_sentences1= sentence1.split('.')\n",
"s1= s1.replace('\\n', ' ')\n",
"list_of_sentences1= s1.split('.')\n",
"list_of_sentences1\n",
"\n",
"\n",
"# x= []\n",
"# for i in list_of_sentences1:\n",
"# if len(i)>10:\n",
"# x.append(i)\n",
" \n",
"list_of_sentences1"
]
},
{
"cell_type": "code",
"execution_count": 61,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['Political stability is the ability of a government to maintain order and authority within its borders',\n",
" ' It is essential for economic growth, as it provides a foundation for investment and trade',\n",
" '\\nThere are many factors that contribute to political stability, including:\\nA strong rule of law: The rule of law is the principle that everyone is subject to the same laws, regardless of their social status or political affiliation',\n",
" ' A strong rule of law helps to prevent corruption and ensures that everyone has equal opportunity to succeed',\n",
" '\\nA well-functioning government: A well-functioning government is one that is able to provide essential services, such as security, education, and healthcare',\n",
" ' It is also able to manage the economy effectively and to respond to crises',\n",
" '\\nA vibrant civil society: A vibrant civil society is one that is made up of active and engaged citizens',\n",
" ' Civil society organizations can help to hold the government accountable and to promote democracy and good governance',\n",
" '\\nPolitical stability is not always easy to achieve, but it is essential for economic growth',\n",
" ' By investing in political stability, we can create a foundation for long-term prosperity',\n",
" '\\nHere are some of the benefits of political stability:\\nIncreased investment: Investors are more likely to invest in countries that are politically stable',\n",
" ' This can lead to increased economic growth and job creation',\n",
" '\\nImproved trade: Trade between countries is easier and more efficient when there is political stability',\n",
" ' This can lead to lower prices for consumers and increased profits for businesses',\n",
" '\\nReduced poverty: Political stability can help to reduce poverty by creating a more conducive environment for economic growth',\n",
" '\\nImproved quality of life: Political stability can lead to improved quality of life by providing a safer and more secure environment',\n",
" '\\nPolitical stability is a key ingredient for a prosperous and successful society',\n",
" ' By investing in political stability, we can create a better future for ourselves and our children',\n",
" '\\nHere are some of the challenges to political stability:\\nEconomic inequality: Economic inequality can lead to social unrest and instability',\n",
" ' This is because it can create a sense of injustice and resentment among those who are not benefiting from economic growth',\n",
" '\\nCorruption: Corruption can undermine the rule of law and erode public trust in government',\n",
" ' This can lead to instability and violence',\n",
" '\\nEthnic and religious conflict: Ethnic and religious conflict can be a major source of instability',\n",
" ' This is because it can lead to violence, displacement, and economic disruption',\n",
" '\\nNatural disasters: Natural disasters can also be a source of instability',\n",
" ' This is because they can displace people, damage infrastructure, and disrupt economic activity',\n",
" '\\nDespite the challenges, there are many things that can be done to promote political stability',\n",
" ' These include:\\nInvesting in education and healthcare: Education and healthcare can help to reduce poverty and inequality, which are two of the main causes of instability',\n",
" '\\nPromoting good governance: Good governance is essential for building trust between the government and the people',\n",
" ' It can be promoted by strengthening the rule of law, fighting corruption, and ensuring transparency and accountability',\n",
" '\\nResolving conflict peacefully: Conflict can be resolved peacefully through negotiation, mediation, and other means',\n",
" ' This can help to prevent violence and instability',\n",
" '\\nBuilding resilience: Building resilience is essential for coping with shocks and stresses, such as economic downturns and natural disasters',\n",
" ' It can be done by investing in infrastructure, social safety nets, and disaster preparedness',\n",
" '\\nPolitical stability is a complex issue, but it is essential for economic growth and prosperity',\n",
" ' By investing in political stability, we can create a better future for ourselves and our children',\n",
" '']"
]
},
"execution_count": 61,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# list_of_sentences2= sentence2.split('.')\n",
"# s2= s2.replace('\\n', ' ')\n",
"\n",
"list_of_sentences2= s2.split('.')\n",
"list_of_sentences2"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"def calculate_burst(list_of_sentences):\n",
" arr= []\n",
" for i in list_of_sentences:\n",
" ei= tokenizer(i, return_tensors=\"pt\")\n",
" arr.append(ei.input_ids.size(1))\n",
" print(f\"arr= {(arr)}\")\n",
" return np.var(np.array(arr))\n",
" "
]
},
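{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check of `calculate_burst` (a toy example, not part of the original experiments), sentences of roughly equal token length should give a variance near zero, while mixed lengths should not:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Toy sanity check: two sentences that tokenize to (near-)equal lengths\n",
"# should give a variance close to 0, while mixed lengths give a larger value.\n",
"print(calculate_burst([\"one two three\", \"four five six\"]))\n",
"print(calculate_burst([\"short\", \"a much longer sentence with many more words in it\"]))"
]
},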
{
"cell_type": "code",
"execution_count": 78,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"arr= [41, 27, 24, 37, 24, 28, 42, 35, 24, 11, 23, 21, 29, 40]\n"
]
},
{
"data": {
"text/plain": [
"74.14285714285714"
]
},
"execution_count": 78,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"calculate_burst(list_of_sentences1[:-1])"
]
},
{
"cell_type": "code",
"execution_count": 63,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"arr= [16, 16, 44, 18, 32, 14, 21, 17, 17, 16, 27, 10, 17, 13, 21, 23, 13, 17, 24, 21, 18, 7, 18, 14, 13, 16, 17, 31, 19, 19, 19, 8, 23, 17, 17, 17]\n"
]
},
{
"data": {
"text/plain": [
"45.57098765432099"
]
},
"execution_count": 63,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"calculate_burst(list_of_sentences2[:-1])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Method 3: `Summarization via Language Model`\n"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"# Use a pipeline as a high-level helper\n",
"from transformers import pipeline\n",
"\n",
"pipe = pipeline(\"summarization\", model=\"DunnBC22/flan-t5-base-text_summarization_data\", device=\"cuda\")"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'summary_text': 'Understand water intoxication. Recognize water poisoning. Understand the dangers of excessive water intake. Understand how water can be considered a poison. Understand that water is considered one of the least toxic chemical compounds, with an LD50 exceeding 90 ml/kg in rats.'},\n",
" {'summary_text': 'Understand hyponatremia. Recognize the dangers of aqueous intoxication. Understand the causes of aqua inebriation. Identify the cause of venomous aquious content. Describe the effects of water on human health.'}]"
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pipe([sentence1, sentence2], min_length=50, max_length=200)"
]
},
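{
"cell_type": "markdown",
"metadata": {},
"source": [
"One way to turn the summaries into a score (a sketch reusing the Method 1 endpoint, assuming the earlier cells have been run) is to compare each passage's summary against the main query:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hedged sketch: summarize both passages, then score the summaries against\n",
"# the main query via the same similarity endpoint used in Method 1.\n",
"summaries = [d[\"summary_text\"]\n",
"             for d in pipe([sentence1, sentence2], min_length=50, max_length=200)]\n",
"resp = requests.post(\n",
"    url=\"https://embeddings.paperbot.ai/get-similarity-against\",\n",
"    json={\"main_entity\": main_query, \"compare_with\": summaries},\n",
")\n",
"resp.json()[\"similarity\"]"
]
},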
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"# # Use a pipeline as a high-level helper\n",
"# from transformers import pipeline\n",
"\n",
"# pipe = pipeline(\"summarization\", model=\"google/pegasus-cnn_dailymail\", device=\"cuda\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'summary_text': 'Cat ear headphones are popular among otakus, streamers, gamers, and anyone who wants a uniquely cute look .<n>Design is the priority, but sound quality should also be considered .'},\n",
" {'summary_text': 'Feline-themed audio headgear enjoys favor among aficionados of anime and gaming, as well as content creators .<n>Seek out headphones delivering crystal-clear and precise audio, and assess their suitability for both mature users and youngsters'}]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# pipe([sentence1, sentence2], min_length=50, max_length=200)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}