UNIST-Eunchan committed
Commit f6a0d49
1 Parent(s): 52a2637

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -14,8 +14,8 @@ widget:
  example_title: "PEFT (2104.08691)"
  - text: "Generate Question, Answer pair correspond to the following research paper. [Abstract] For the first time in the world, we succeeded in synthesizing the room-temperature superconductor (Tc≥400 K, 127∘C) working at ambient pressure with a modified lead-apatite (LK-99) structure. The superconductivity of LK-99 is proved with the Critical temperature (Tc), Zero-resistivity, Critical current (Ic), Critical magnetic field (Hc), and the Meissner effect. The superconductivity of LK-99 originates from minute structural distortion by a slight volume shrinkage (0.48 %), not by external factors such as temperature and pressure. The shrinkage is caused by Cu2+ substitution of Pb2+(2) ions in the insulating network of Pb(2)-phosphate and it generates the stress. It concurrently transfers to Pb(1) of the cylindrical column resulting in distortion of the cylindrical column interface, which creates superconducting quantum wells (SQWs) in the interface. The heat capacity results indicated that the new model is suitable for explaining the superconductivity of LK-99. The unique structure of LK-99 that allows the minute distorted structure to be maintained in the interfaces is the most important factor that LK-99 maintains and exhibits superconductivity at room temperatures and ambient pressure. [Introduction] Since the discovery of the first superconductor(1), many efforts to search for new roomtemperature superconductors have been carried out worldwide(2, 3) through their experimental clarity or/and theoretical perspectives(4-8). The recent success of developing room-temperature superconductors with hydrogen sulfide(9) and yttrium super-hydride(10) has great attention worldwide, which is expected by strong electron-phonon coupling theory with high-frequency hydrogen phonon modes(11, 12). However, it is difficult to apply them to actual application devices in daily life because of the tremendously high pressure, and more efforts are being made to overcome the high-pressure problem(13). For the first time in the world, we report the success in synthesizing a room-temperature and ambient-pressure superconductor with a chemical approach to solve the temperature and pressure problem. We named the first room temperature and ambient pressure superconductor LK-99. The superconductivity of LK-99 proved with the Critical temperature (Tc), Zero-resistivity, Critical current (Ic), Critical magnetic field (Hc), and Meissner effect(14, 15). Several data were collected and analyzed in detail to figure out the puzzle of superconductivity of LK-99: X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), Electron Paramagnetic Resonance Spectroscopy (EPR), Heat Capacity, and Superconducting quantum interference device (SQUID) data. Henceforth in this paper, we will report and discuss our new findings including superconducting quantum wells associated with the superconductivity of LK-99.\n Question, Answer:"
  example_title: "LK-99 (Not NLP)"
- - text: "[Abstract] Abstract Evaluation practices in natural language generation (NLG) have many known flaws, but improved evaluation approaches are rarely widely adopted. This issue has become more urgent, since neural NLG models have improved to the point where they can often no longer be distinguished based on the surfacelevel features that older metrics rely on. This paper surveys the issues with human and automatic model evaluations and with commonly used datasets in NLG that have been pointed out over the past 20 years. We summarize, categorize, and discuss how researchers have been addressing these issues and what their findings mean for the current state of model evaluations. Building on those insights, we lay out a long-term vision for NLG evaluation and propose concrete steps for researchers to improve their evaluation processes. Finally, we analyze 66 NLG papers from recent NLP conferences in how well they already follow these suggestions and identify which areas require more drastic changes to the status quo. [Introduction] There are many issues with the evaluation of models that generate natural language. For example, datasets are often constructed in a way that prevents measuring tail effects of robustness, and they almost exclusively cover English. Most automated metrics measure only similarity between model output and references instead of fine-grained quality aspects (and even that poorly). Human evaluations have a high variance and, due to insufficient documentation, rarely produce replicable results. These issues have become more urgent as the nature of models that generate language has changed without significant changes to how they are being evaluated. While evaluation methods can capture surface-level improvements in text generated by state-of-the-art models (such as increased fluency) to some extent, they are ill-suited to detect issues with the content of model outputs, for example if they are not attributable to input information. These ineffective evaluations lead to overestimates of model capabilities. Deeper analyses uncover that popular models fail even at simple tasks by taking shortcuts, overfitting, hallucinating, and not being in accordance with their communicative goals. Identifying these shortcomings, many recent papers critique evaluation techniques or propose new ones. But almost none of the suggestions are followed or new techniques used. There is an incentive mismatch between conducting high-quality evaluations and publishing new models or modeling techniques. While general-purpose evaluation techniques could lower the barrier of entry for incorporating evaluation advances into model development, their development requires resources that are hard to come by, including model outputs on validation and test sets or large quantities of human assessments of such outputs. Moreover, some issues, like the refinement of datasets, require iterative processes where many researchers collaborate. All this leads to a circular dependency where evaluations of generation models can be improved only if generation models use better evaluations. We find that there is a systemic difference between selecting the best model and characterizing how good this model really is. Current evaluation techniques focus on the first, while the second is required to detect crucial issues. More emphasis needs to be put on measuring and reporting model limitations, rather than focusing on producing the highest performance numbers. 
To that end, this paper surveys analyses and critiques of evaluation approaches (sections 3 and 4) and of commonly used NLG datasets (section 5). Drawing on their insights, we describe how researchers developing modeling techniques can help to improve and subsequently benefit from better evaluations with methods available today (section 6). Expanding on existing work on model documentation and formal evaluation processes (Mitchell et al., 2019; Ribeiro et al., 2020), we propose releasing evaluation reports which focus on demonstrating NLG model shortcomings using evaluation suites. These reports should apply a complementary set of automatic metrics, include rigorous human evaluations, and be accompanied by data releases that allow for re-analysis with improved metrics. In an analysis of 66 recent EMNLP, INLG, and ACL papers along 29 dimensions related to our suggestions (section 7), we find that the first steps toward an improved evaluation are already frequently taken at an average rate of 27%. The analysis uncovers the dimensions that require more drastic changes in the NLG community. For example, 84% of papers already report results on multiple datasets and more than 28% point out issues in them, but we found only a single paper that contributed to the dataset documentation, leaving future researchers to re-identify those issues. We further highlight typical unsupported claims and a need for more consistent data release practices. Following the suggestions and results, we discuss how incorporating the suggestions can improve evaluation research, how the suggestions differ from similar ones made for NLU, and how better metrics can benefit model development itself (section 8). \n Question, Answer:"
- example_title: "NLG Evaluation (2202.06935)"
+ - text: "Generate Question, Answer pair correspond to the following research paper. [Abstract] Abstract Evaluation practices in natural language generation (NLG) have many known flaws, but improved evaluation approaches are rarely widely adopted. This issue has become more urgent, since neural NLG models have improved to the point where they can often no longer be distinguished based on the surfacelevel features that older metrics rely on. This paper surveys the issues with human and automatic model evaluations and with commonly used datasets in NLG that have been pointed out over the past 20 years. We summarize, categorize, and discuss how researchers have been addressing these issues and what their findings mean for the current state of model evaluations. Building on those insights, we lay out a long-term vision for NLG evaluation and propose concrete steps for researchers to improve their evaluation processes. Finally, we analyze 66 NLG papers from recent NLP conferences in how well they already follow these suggestions and identify which areas require more drastic changes to the status quo. [Introduction] There are many issues with the evaluation of models that generate natural language. For example, datasets are often constructed in a way that prevents measuring tail effects of robustness, and they almost exclusively cover English. Most automated metrics measure only similarity between model output and references instead of fine-grained quality aspects (and even that poorly). Human evaluations have a high variance and, due to insufficient documentation, rarely produce replicable results. These issues have become more urgent as the nature of models that generate language has changed without significant changes to how they are being evaluated. While evaluation methods can capture surface-level improvements in text generated by state-of-the-art models (such as increased fluency) to some extent, they are ill-suited to detect issues with the content of model outputs, for example if they are not attributable to input information. These ineffective evaluations lead to overestimates of model capabilities. Deeper analyses uncover that popular models fail even at simple tasks by taking shortcuts, overfitting, hallucinating, and not being in accordance with their communicative goals. Identifying these shortcomings, many recent papers critique evaluation techniques or propose new ones. But almost none of the suggestions are followed or new techniques used. There is an incentive mismatch between conducting high-quality evaluations and publishing new models or modeling techniques. While general-purpose evaluation techniques could lower the barrier of entry for incorporating evaluation advances into model development, their development requires resources that are hard to come by, including model outputs on validation and test sets or large quantities of human assessments of such outputs. Moreover, some issues, like the refinement of datasets, require iterative processes where many researchers collaborate. All this leads to a circular dependency where evaluations of generation models can be improved only if generation models use better evaluations. We find that there is a systemic difference between selecting the best model and characterizing how good this model really is. Current evaluation techniques focus on the first, while the second is required to detect crucial issues. 
More emphasis needs to be put on measuring and reporting model limitations, rather than focusing on producing the highest performance numbers. To that end, this paper surveys analyses and critiques of evaluation approaches (sections 3 and 4) and of commonly used NLG datasets (section 5). Drawing on their insights, we describe how researchers developing modeling techniques can help to improve and subsequently benefit from better evaluations with methods available today (section 6). Expanding on existing work on model documentation and formal evaluation processes (Mitchell et al., 2019; Ribeiro et al., 2020), we propose releasing evaluation reports which focus on demonstrating NLG model shortcomings using evaluation suites. These reports should apply a complementary set of automatic metrics, include rigorous human evaluations, and be accompanied by data releases that allow for re-analysis with improved metrics. In an analysis of 66 recent EMNLP, INLG, and ACL papers along 29 dimensions related to our suggestions (section 7), we find that the first steps toward an improved evaluation are already frequently taken at an average rate of 27%. The analysis uncovers the dimensions that require more drastic changes in the NLG community. For example, 84% of papers already report results on multiple datasets and more than 28% point out issues in them, but we found only a single paper that contributed to the dataset documentation, leaving future researchers to re-identify those issues. We further highlight typical unsupported claims and a need for more consistent data release practices. Following the suggestions and results, we discuss how incorporating the suggestions can improve evaluation research, how the suggestions differ from similar ones made for NLU, and how better metrics can benefit model development itself (section 8). \n Question, Answer:"
+ example_title: "NLG-Eval (2202.06935)"
  ---
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
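All three widget examples now share the same prompt template: an instruction sentence, the paper's [Abstract] (and [Introduction]) text, and a trailing "\n Question, Answer:" cue, which is what this change brings the NLG-Eval entry in line with. The sketch below shows how such a prompt could be sent to the model from Python with the transformers pipeline. It is only an illustration: the repository id is a placeholder and the "text2text-generation" task assumes a seq2seq-style checkpoint; neither is stated in this commit.

```python
# Illustrative only: querying the model with the widget prompt template.
# Assumptions not stated in this commit: the checkpoint is seq2seq-style, and
# "UNIST-Eunchan/<model-id>" is a placeholder for the actual repository name.
from transformers import pipeline

qa_generator = pipeline("text2text-generation", model="UNIST-Eunchan/<model-id>")

abstract = "Evaluation practices in natural language generation (NLG) have many known flaws, ..."

# Same template as the widget examples: instruction + [Abstract] text + "\n Question, Answer:" cue.
prompt = (
    "Generate Question, Answer pair correspond to the following research paper. "
    f"[Abstract] {abstract}\n Question, Answer:"
)

outputs = qa_generator(prompt, max_new_tokens=128)
print(outputs[0]["generated_text"])
```

The hosted inference widget sends each `text` string as-is, so keeping every example in one format means the widget exercises the same template as a local call like the one above.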