327,15,1377593288539127808,1152338625654226944,Megan Mansfield,"Happy April 1st everyone! I'm pleased to announce that myself and Darryl Seligman have a new paper up on the arXiv... ""I Knew You Were Trouble: Emotional Trends in the Repertoire of Taylor Swift"" <LINK> <LINK> In this work, we analyze trends between happiness and strength of commitment to a relationship across 149 of Taylor's songs (about 10 hrs of music). We create metrics to quantify the amount of happiness and strength of relationship in each song. <LINK> For example, we look at the lyrics to determine whether the object of her affection returns her feelings. (See the paper for the full grading system, including example lyrics!) <LINK> We find a significant trend indicating higher happiness in a more committed relationship. <LINK> We also examine subsets of the data and conclude that boys with blue eyes and/or bad reputations may be the worst choices for long-term happiness, while boys with indigo or green eyes may provide more stability. <LINK> Finally, and perhaps most importantly, we present the taylorswift python code, which allows users to input information about their current feelings and relationship status and receive a list of five suitable Taylor Swift songs to match their mood. <LINK> This code is publicly available at <LINK> and we hope you all find it helpful in making the perfect playlist! <LINK> @huei_sears Great questions! First the songs: ""Babe"" was not included because it was written by ""Sugarland ft. Taylor Swift"" so we assumed it primarily portrayed the thoughts of Sugarland. Same thing for ""This is What You Came For"" - assumed to be primarily Calvin Harris's thoughts @huei_sears Second, I agree that evermore and folklore are much less autobiographical. One interesting further interpretation of the trends in these data, then, might be to look at the changes between the more autobiographical eras and her more recent work. Future research! 😛 @tpanurach Aw glad to hear it! @noraguidegalaxy Lol I took from the Grammy website! Their profile of her is out of date. 😝 @Wikisteff Haha after this I'll go back to my usual exoplanet research, but if someone wants to train such a model they should go for it!",https://arxiv.org/abs/2103.16737,"As a modern musician and cultural icon, Taylor Swift has earned worldwide acclaim via pieces which predominantly draw upon the complex dynamics of personal and interpersonal experiences. Here we show, for the first time, how Swift's lyrical and melodic structure have evolved in their representation of emotions over a timescale of $\tau\sim14$ yr. Previous progress on this topic has been challenging based on the sheer volume of the relevant discography, and that uniquely identifying a song that optimally describes a hypothetical emotional state represents a multi-dimensional and complex task. To quantify the emotional state of a song, we separate the criteria into the level of optimism ($H$) and the strength of commitment to a relationship ($R$), based on lyrics and chordal tones. We apply these criteria to a set of 149 pieces spanning almost the entire repertoire. We find an overall trend toward positive emotions in stronger relationships, with a best-fit linear relationship of $R=0.642^{+0.086}_{-0.053}H-1.74^{+0.39}_{-0.29}$. We find no significant trends in mean happiness ($H$) within individual albums over time. The mean relationship score ($R$) shows trends which we speculate may be due to age and the global pandemic. 
We provide tentative indications that partners with blue eyes and/or bad reputations may lead to overall less positive emotions, while those with green or indigo-colored eyes may produce more positive emotions and stronger relationships. However, we stress that these trends are based on small sample sizes, and more data are necessary to validate them. Finally, we present the taylorswift python package which can be used to optimize song selection according to a specific mood. ","I Knew You Were Trouble: Emotional Trends in the Repertoire of Taylor
Swift",12,"['Happy April 1st everyone! I\'m pleased to announce that myself and Darryl Seligman have a new paper up on the arXiv...\n""I Knew You Were Trouble: Emotional Trends in the Repertoire of Taylor Swift"" <LINK> <LINK>', ""In this work, we analyze trends between happiness and strength of commitment to a relationship across 149 of Taylor's songs (about 10 hrs of music). We create metrics to quantify the amount of happiness and strength of relationship in each song. https://t.co/k8AKsjLt1J"", 'For example, we look at the lyrics to determine whether the object of her affection returns her feelings. (See the paper for the full grading system, including example lyrics!) https://t.co/oCg1CGBnPd', 'We find a significant trend indicating higher happiness in a more committed relationship. https://t.co/CNR5MOdejp', 'We also examine subsets of the data and conclude that boys with blue eyes and/or bad reputations may be the worst choices for long-term happiness, while boys with indigo or green eyes may provide more stability. https://t.co/In7U8rxDsZ', 'Finally, and perhaps most importantly, we present the taylorswift python code, which allows users to input information about their current feelings and relationship status and receive a list of five suitable Taylor Swift songs to match their mood. https://t.co/47Ls7cO3af', 'This code is publicly available at https://t.co/4bij2ZWAgQ and we hope you all find it helpful in making the perfect playlist! https://t.co/vjX0xnRBLP', '@huei_sears Great questions! First the songs: ""Babe"" was not included because it was written by ""Sugarland ft. Taylor Swift"" so we assumed it primarily portrayed the thoughts of Sugarland. Same thing for ""This is What You Came For"" - assumed to be primarily Calvin Harris\'s thoughts', '@huei_sears Second, I agree that evermore and folklore are much less autobiographical. One interesting further interpretation of the trends in these data, then, might be to look at the changes between the more autobiographical eras and her more recent work. Future research! 😛', '@tpanurach Aw glad to hear it!', '@noraguidegalaxy Lol I took from the Grammy website! Their profile of her is out of date. 😝', ""@Wikisteff Haha after this I'll go back to my usual exoplanet research, but if someone wants to train such a model they should go for it!""]",21,03,2155
328,182,1337424479228669953,1055358835,M.Bülent Sarıyıldız,"Generalization is at the heart of representation learning; yet the impact of the *semantic relationship* between concepts seen during training and downstream datasets is unclear. In our recent work (<LINK>), we propose a principled way of measuring exactly that. <LINK> To measure ""concept generalization"" in a controlled manner, we use the ImageNet-21K dataset and its ontology. Defining the ImageNet-1K concepts as the set of ""seen"" concepts, we rank all other (""unseen"") ImageNet concepts in increasing semantic similarity to the set of seen ones. Based on this ordering and after discarding unsafe concepts or ones with few images, we define 5 disjoint concept generalization ***levels***, i.e. 5 imageNet-sized (1000-class, approx 1.1M images) datasets whose concepts are semantically less and less similar to ImageNet-1K. <LINK> We call the above the ***Concept Generalization (CoG) benchmark***. The basic protocol is simple: We first extract features for all 5 levels, and then train logistic regression classifiers on top for each level. Since ImageNet-1K is the set of seen concepts for CoG, we can evaluate out-of-the-box any publicly available ImageNet-1K-pretrained model! Below are some highlights after analysing the performance of 3 (semi-)supervised and 4 self-supervised (MoCo, SimCLR, SWAV, BYOL) methods. <LINK> We can see that the performance of all the models monotonically decreases as we move to levels semantically further from the seen ones. Also, despite the superiority of Sup over the self-supervised models when evaluated on ImageNet-1K, it is outperformed by them on most levels. <LINK> In the paper, we further measure generalization from a few samples per concept, and further study the topology of the feature space across levels via clustering and measuring alignment and uniformity. This is a joint work with my supervisors @dlarlus @inthebrownbag and @skamalas (THANK YOU SO MUCH!) @naverlabseurope",https://arxiv.org/abs/2012.05649,"Measuring concept generalization, i.e., the extent to which models trained on a set of (seen) visual concepts can be leveraged to recognize a new set of (unseen) concepts, is a popular way of evaluating visual representations, especially in a self-supervised learning framework. Nonetheless, the choice of unseen concepts for such an evaluation is usually made arbitrarily, and independently from the seen concepts used to train representations, thus ignoring any semantic relationships between the two. In this paper, we argue that the semantic relationships between seen and unseen concepts affect generalization performance and propose ImageNet-CoG, a novel benchmark on the ImageNet-21K (IN-21K) dataset that enables measuring concept generalization in a principled way. Our benchmark leverages expert knowledge that comes from WordNet in order to define a sequence of unseen IN-21K concept sets that are semantically more and more distant from the ImageNet-1K (IN-1K) subset, a ubiquitous training set. This allows us to benchmark visual representations learned on IN-1K out-of-the box. We conduct a large-scale study encompassing 31 convolution and transformer-based models and show how different architectures, levels of supervision, regularization techniques and use of web data impact the concept generalization performance. 
",Concept Generalization in Visual Representation Learning,8,"['Generalization is at the heart of representation learning; yet the impact of the *semantic relationship* between concepts seen during training and downstream datasets is unclear. In our recent work (<LINK>), we propose a principled way of measuring exactly that. <LINK>', 'To measure ""concept generalization"" in a controlled manner, we use the ImageNet-21K dataset and its ontology. Defining the ImageNet-1K concepts as the set of ""seen"" concepts, we rank all other (""unseen"") ImageNet concepts in increasing semantic similarity to the set of seen ones.', 'Based on this ordering and after discarding unsafe concepts or ones with few images, we define 5 disjoint concept generalization ***levels***, i.e. 5 imageNet-sized (1000-class, approx 1.1M images) datasets whose concepts are semantically less and less similar to ImageNet-1K. https://t.co/C4wvy0v4E3', 'We call the above the ***Concept Generalization (CoG) benchmark***. The basic protocol is simple: We first extract features for all 5 levels, and then train logistic regression classifiers on top for each level.', 'Since ImageNet-1K is the set of seen concepts for CoG, we can evaluate out-of-the-box any publicly available ImageNet-1K-pretrained model! Below are some highlights after analysing the performance of 3 (semi-)supervised and 4 self-supervised (MoCo, SimCLR, SWAV, BYOL) methods. https://t.co/wtAhGNowyt', 'We can see that the performance of all the models monotonically decreases as we move to levels semantically further from the seen ones. Also, despite the superiority of Sup over the self-supervised models when evaluated on ImageNet-1K, it is outperformed by them on most levels. https://t.co/fLiei3NfBa', 'In the paper, we further measure generalization from a few samples per concept, and further study the topology of the feature space across levels via clustering and measuring alignment and uniformity.', 'This is a joint work with my supervisors @dlarlus @inthebrownbag and @skamalas (THANK YOU SO MUCH!) @naverlabseurope']",20,12,1935
329,37,1430961973718667272,1373472847750893569,Stanley H. Chan,"New paper! Detecting and Segmenting Adversarial Graphics Patterns from Images <LINK> @ICCV_2021 workshop @PurdueEngineers @PurdueECE #ComputerForensics #ArtificialIntelligence <LINK> In one of the visits at #Facebook, I was told that the majority of the attacks were not based on adversarial attacks published in @NeurIPSConf and @icmlconf. A layperson just uses photoshop to add simple patterns to alter the image. It turns out that defending these attacks is nontrivial because people are just very creative. Adversarial training fails miserably against the huge variety of patterns. So we came up with this simple solution to identifying the altered parts. And it becomes another (unfunded) side project that we had a lot of fun with!",https://arxiv.org/abs/2108.09383,"Adversarial attacks pose a substantial threat to computer vision system security, but the social media industry constantly faces another form of ""adversarial attack"" in which the hackers attempt to upload inappropriate images and fool the automated screening systems by adding artificial graphics patterns. In this paper, we formulate the defense against such attacks as an artificial graphics pattern segmentation problem. We evaluate the efficacy of several segmentation algorithms and, based on observation of their performance, propose a new method tailored to this specific problem. Extensive experiments show that the proposed method outperforms the baselines and has a promising generalization capability, which is the most crucial aspect in segmenting artificial graphics patterns. ",Detecting and Segmenting Adversarial Graphics Patterns from Images,4,"['New paper!\n\nDetecting and Segmenting Adversarial Graphics Patterns from Images\n\n<LINK>\n\n@ICCV_2021 workshop\n\n@PurdueEngineers @PurdueECE \n#ComputerForensics \n#ArtificialIntelligence <LINK>', 'In one of the visits at #Facebook, I was told that the majority of the attacks were not based on adversarial attacks published in @NeurIPSConf and @icmlconf.\n\nA layperson just uses photoshop to add simple patterns to alter the image.', 'It turns out that defending these attacks is nontrivial because people are just very creative. Adversarial training fails miserably against the huge variety of patterns.\n\nSo we came up with this simple solution to identifying the altered parts.', 'And it becomes another (unfunded) side project that we had a lot of fun with!']",21,08,737
330,21,1498298166894022657,356223194,Pieter Claeys,"New paper on arXiv: we further explore the connection between dual-unitarity and random matrix theory, this time combining unitary circuit dynamics with projective measurements. Joint work with @AustenLamacraft (as is tradition). A short thread. <LINK> Recent works introduced the notion of a projected ensemble: given a quantum state defined on a subsystem+bath we can perform projective measurements on the bath only, returning subsystem states with some fixed probability. Ho and Choi recently showed that after quench dynamics in the dual-unitary kicked Ising model, the resulting projected ensemble quickly becomes indistinguishable from the uniform Haar-random distribution. <LINK> Averaging over measurement outcomes is here equivalent to averaging over Haar-random states, leading to an exact quantum state design (for an infinite bath, after a surprisingly short time). We wanted to check the role of dual-unitarity, which was not immediately clear to us. <LINK> Interestingly, dual-unitarity alone does not guarantee such an emergent quantum state design! It needs to be supplemented by a 'solvable measurement scheme', a notion which we here introduce and motivate. For the kicked Ising model, it turns out that measurements in the computational basis form a solvable measurement scheme, but more involved measurement schemes can be found for all dual-unitary models (e.g. measuring Bell pair states). Using MPS calculations we verified that after a time equal to the subsystem size all k-moments of the resulting ensembles collapse to the moments of the Haar-random uniform distribution, and contrast this with generic gates/measurements. <LINK> (As an aside, @Arrr______ recently published a preprint on the emergence of approximate k-designs following thermalization to high temperature. Check it out!) The main message is that dual-unitary circuits, while exactly solvable, can be analytically shown to exhibit the random matrix behavior we expect in chaotic quantum models — and which is typically absent in solvable models. As a neat corollary, we also found some new classes of dual-unitary gates that behave in the same way as the kicked Ising model. Comments and feedback welcome! <LINK> @Arrr______ It depends! For KIM-like gates the solvable basis is product states in the computational basis, but this hinges on the special properties of these gates. General dual-unitary gates need more involved measurements, but these can be two-site product states (e.g. Bell states).",https://arxiv.org/abs/2202.12306,"Recent works have investigated the emergence of a new kind of random matrix behaviour in unitary dynamics following a quantum quench. Starting from a time-evolved state, an ensemble of pure states supported on a small subsystem can be generated by performing projective measurements on the remainder of the system, leading to a projected ensemble. In chaotic quantum systems it was conjectured that such projected ensembles become indistinguishable from the uniform Haar-random ensemble and lead to a quantum state design. Exact results were recently presented by Ho and Choi [Phys. Rev. Lett. 128, 060601 (2022)] for the kicked Ising model at the self-dual point. 
We provide an alternative construction that can be extended to general chaotic dual-unitary circuits with solvable initial states and measurements, highlighting the role of the underlying dual-unitarity and further showing how dual-unitary circuit models exhibit both exact solvability and random matrix behaviour. Building on results from biunitary connections, we show how complex Hadamard matrices and unitary error bases both lead to solvable measurement schemes. ","Emergent quantum state designs and biunitarity in dual-unitary circuit
dynamics",11,"['New paper on arXiv: we further explore the connection between dual-unitarity and random matrix theory, this time combining unitary circuit dynamics with projective measurements. Joint work with @AustenLamacraft (as is tradition). A short thread.\n\n<LINK>', 'Recent works introduced the notion of a projected ensemble: given a quantum state defined on a subsystem+bath we can perform projective measurements on the bath only, returning subsystem states with some fixed probability.', 'Ho and Choi recently showed that after quench dynamics in the dual-unitary kicked Ising model, the resulting projected ensemble quickly becomes indistinguishable from the uniform Haar-random distribution. https://t.co/n2tgf29mkR', 'Averaging over measurement outcomes is here equivalent to averaging over Haar-random states, leading to an exact quantum state design (for an infinite bath, after a surprisingly short time). We wanted to check the role of dual-unitarity, which was not immediately clear to us. https://t.co/waVeNoqVYH', ""Interestingly, dual-unitarity alone does not guarantee such an emergent quantum state design! It needs to be supplemented by a 'solvable measurement scheme', a notion which we here introduce and motivate."", 'For the kicked Ising model, it turns out that measurements in the computational basis form a solvable measurement scheme, but more involved measurement schemes can be found for all dual-unitary models (e.g. measuring Bell pair states).', 'Using MPS calculations we verified that after a time equal to the subsystem size all k-moments of the resulting ensembles collapse to the moments of the Haar-random uniform distribution, and contrast this with generic gates/measurements. https://t.co/KIdX55LAfK', '(As an aside, @Arrr______ recently published a preprint on the emergence of approximate k-designs following thermalization to high temperature. Check it out!)', 'The main message is that dual-unitary circuits, while exactly solvable, can be analytically shown to exhibit the random matrix behavior we expect in chaotic quantum models — and which is typically absent in solvable models.', 'As a neat corollary, we also found some new classes of dual-unitary gates that behave in the same way as the kicked Ising model. Comments and feedback welcome! https://t.co/axKnjHH6VL', '@Arrr______ It depends! For KIM-like gates the solvable basis is product states in the computational basis, but this hinges on the special properties of these gates. General dual-unitary gates need more involved measurements, but these can be two-site product states (e.g. Bell states).']",22,02,2494
331,26,1056849403074568194,2932678322,Keaton Bell,"The preprint of our new paper is out today titled ""Transition from spot to faculae domination---An alternate explanation for the dearth of intermediate Kepler rotation periods."" <LINK> 1/9 We use decades of measurements of photometric brightness (top) and a spectroscopic indicator of magnetic activity (bottom) for 30 stars to understand what features dominate the activity cycles: dark starspots or bright faculae. 2/9 <LINK> If stars are overall brighter when they are at the most active part of their cycle, they are facula dominated; if they are darker at peak activity, they are spot dominated. We fit simultaneous sinusoids to both data sets and compare phases for each star. 3/9 <LINK> We find that more active stars have spot-dominated activity cycles (top right) and less active stars have facula-dominated cycles (bottom left). The transition coincides with the Vaughan-Preston gap, where there is a noted absence of stars with intermediate activity levels. 4/9 <LINK> This transition also coincides with a Rossby number of around 1, where the rotation period approximately equals the convective turnover time. 5/9 <LINK> We also see that younger, more active stars with spot-dominated activity cycles exhibit larger amplitudes of photometric variability. 6/9 <LINK> This transition happens at a stellar age of around 2550 Myr---much older than the location of a noted dearth of stars with intermediate rotation periods at an age of around 800 Myr from gyrochronology relations. 7/9 <LINK> Many explanations have been advanced to explain this rotation period gap, but none have been fully satisfying. Our results provide useful observational context for considering the problem. The overtake of faculae as the dominant feature does not happen in this gap. 8/9 We speculate that the spot-to-faculae transition may begin ~800 Myr, with the earliest faculae being localized near spots, causing photometric cancellation and undetectable rotation periods. Later, faculae networks go global and rotation from spots can again be measured. 9/9",https://arxiv.org/abs/1810.11250,"The study of stellar activity cycles is crucial to understand the underlying dynamo and how it causes activity signatures such as dark spots and bright faculae. We study the appearance of activity signatures in contemporaneous photometric and chromospheric time series. Lomb-Scargle periodograms are used to search for cycle periods present in both time series. To emphasize the signature of the activity cycle we account for rotation-induced scatter in both data sets by fitting a quasi-periodic Gaussian process model to each observing season. After subtracting the rotational variability, cycle amplitudes and the phase difference between the two time series are obtained by fitting both time series simultaneously using the same cycle period. We find cycle periods in 27 of the 30 stars in our sample. The phase difference between the two time series reveals that the variability in fast rotating active stars is usually in anti-phase, while the variability of slowly rotating inactive stars is in phase. The photometric cycle amplitudes are on average six times larger for the active stars. The phase and amplitude information demonstrates that active stars are dominated by dark spots, whereas less active stars are dominated by bright faculae. We find the transition from spot to faculae domination at the Vaughan-Preston gap, and around a Rossby number equal to one. 
We conclude that faculae are the dominant ingredient of stellar activity cycles at ages >2.55 Gyr. The data further suggest that the Vaughan-Preston gap can not explain the previously detected dearth of Kepler rotation periods between 15-25 days. Nevertheless, our results led us to propose an explanation for the rotation period dearth to be due to the non-detection of periodicity caused by the cancellation of dark spots and bright faculae at 800 Myr. ","Transition from spot to faculae domination -- An alternate explanation
for the dearth of intermediate \textit{Kepler} rotation periods",9,"['The preprint of our new paper is out today titled ""Transition from spot to faculae domination---An alternate explanation for the dearth of intermediate Kepler rotation periods."" <LINK> 1/9', 'We use decades of measurements of photometric brightness (top) and a spectroscopic indicator of magnetic activity (bottom) for 30 stars to understand what features dominate the activity cycles: dark starspots or bright faculae. 2/9 https://t.co/Rnd17dpCrU', 'If stars are overall brighter when they are at the most active part of their cycle, they are facula dominated; if they are darker at peak activity, they are spot dominated. We fit simultaneous sinusoids to both data sets and compare phases for each star. 3/9 https://t.co/cWyyYXpG5K', 'We find that more active stars have spot-dominated activity cycles (top right) and less active stars have facula-dominated cycles (bottom left). The transition coincides with the Vaughan-Preston gap, where there is a noted absence of stars with intermediate activity levels. 4/9 https://t.co/Za86v6XLT8', 'This transition also coincides with a Rossby number of around 1, where the rotation period approximately equals the convective turnover time. 5/9 https://t.co/BxCpixWcko', 'We also see that younger, more active stars with spot-dominated activity cycles exhibit larger amplitudes of photometric variability. 6/9 https://t.co/5AMQjogEKJ', 'This transition happens at a stellar age of around 2550 Myr---much older than the location of a noted dearth of stars with intermediate rotation periods at an age of around 800 Myr from gyrochronology relations. 7/9 https://t.co/DUrRkvtkyN', 'Many explanations have been advanced to explain this rotation period gap, but none have been fully satisfying. Our results provide useful observational context for considering the problem. The overtake of faculae as the dominant feature does not happen in this gap. 8/9', 'We speculate that the spot-to-faculae transition may begin ~800 Myr, with the earliest faculae being localized near spots, causing photometric cancellation and undetectable rotation periods. Later, faculae networks go global and rotation from spots can again be measured. 9/9']",18,10,2046
332,29,1519951458908749824,175028318,Andrea Oddo,"#TeamBispectrum is back! Congrats to all members of my former group for this brand new paper, but double congratulations to Federico and Chiara, who recently both won the @EC_Euclid STAR Prize! Having co-authored a paper with both of them is a real honor! <LINK>",https://arxiv.org/abs/2204.13628,"We present the analysis of the halo bispectrum in redshift-space in terms of its multipoles, monopole, quadrupole and hexadecapole, measured from a large set of simulations. We fit such measurements with a tree-level model in perturbation theory that depends on linear and nonlinear bias parameters as well as on the growth rate $f$ of density fluctuations. The likelihood analysis takes advantage of a very large set of mock catalogs, enabling a robust estimation of the covariance properties for all multipoles. We compare the numerical estimate of the covariance matrix to its Gaussian prediction finding discrepancies of 10% or less for all configurations with the sole exception of the squeezed triangles in the monopole case. We find the range of validity of the tree-level model, for the total simulation volume of about 1000 $h^{-3}\, {\rm Gpc}^3$, reaches a maximum wavenumber of $0.08 \, h \, {\rm Mpc}^{-1}$ for the monopole, while it is limited to $0.06$ and $0.045\, h \, \rm{Mpc}^{-1}$ respectively for quadrupole and hexadecapole. Despite this, the addition of the quadrupole to the analysis allows for significant improvements on the determination of the model parameters and specifically on $f$, similarly to the power spectrum case. Finally, we compare our numerical estimate for the full covariance with its theoretical prediction in the Gaussian approximation and find the latter to work remarkably well in the context of simulation boxes with periodic boundary condition. ",The Halo Bispectrum Multipoles in Redshift Space,1,"['#TeamBispectrum is back! Congrats to all members of my former group for this brand new paper, but double congratulations to Federico and Chiara, who recently both won the @EC_Euclid STAR Prize! Having co-authored a paper with both of them is a real honor!\n\n<LINK>']",22,04,262
333,102,1224298279418187776,1173845537612881920,Francesc Lluis,"We propose a deep-learning method for sound field reconstruction that offers advantages in three directions: works with low number of mics, accommodates irregular mics distributions, and has efficient inference Paper: <LINK> Data & Code: <LINK> <LINK> @neokaplanis In the end, yes!:) 🔊🎙️",http://arxiv.org/abs/2001.11263,"In this paper, a deep-learning-based method for sound field reconstruction is proposed. It is shown the possibility to reconstruct the magnitude of the sound pressure in the frequency band 30-300 Hz for an entire room by using a very low number of irregularly distributed microphones arbitrarily arranged. Moreover, the approach is agnostic to the location of the measurements in the Euclidean space. In particular, the presented approach uses a limited number of arbitrary discrete measurements of the magnitude of the sound field pressure in order to extrapolate this field to a higher-resolution grid of discrete points in space with a low computational complexity. The method is based on a U-net-like neural network with partial convolutions trained solely on simulated data, which itself is constructed from numerical simulations of Green's function across thousands of common rectangular rooms. Although extensible to three dimensions and different room shapes, the method focuses on reconstructing a two-dimensional plane of a rectangular room from measurements of the three-dimensional sound field. Experiments using simulated data together with an experimental validation in a real listening room are shown. The results suggest a performance which may exceed conventional reconstruction techniques for a low number of microphones and computational requirements. ",Sound field reconstruction in rooms: inpainting meets super-resolution,2,"['We propose a deep-learning method for sound field reconstruction that offers advantages in three directions: works with low number of mics, accommodates irregular mics distributions, and has efficient inference\n\nPaper: <LINK>\nData &amp; Code: <LINK> <LINK>', '@neokaplanis In the end, yes!:) 🔊🎙️']",20,01,287
334,87,1183695437586452480,1107358308,Jack Turner,"Excited to release new paper w. @mpatacch on Gaussian Processes (GPs) for few-shot learning (with deep kernel transfer). 📝paper: <LINK> 💾 code: <LINK> (in @PyTorch and GPyTorch) (1/3) GPs are a natural fit for few-shot because they work well in low data regime and have built-in uncertainty estimation. We apply this on standard few-shot benchmarks via deep kernel learning (@andrewgwils), using output feature maps of NN as input to GP (2/3) E.g. my favourite plot. Trained on random periodic head pose trajectories,Feature Transfer (FT) and GP are tested on flat rotation with Cutout noise. FT overfits, GP predicts correctly & acknowledges uncertainty on the noisy point. Results on std few-shot benchmarks in paper. <LINK>",https://arxiv.org/abs/1910.05199,"Recently, different machine learning methods have been introduced to tackle the challenging few-shot learning scenario that is, learning from a small labeled dataset related to a specific task. Common approaches have taken the form of meta-learning: learning to learn on the new problem given the old. Following the recognition that meta-learning is implementing learning in a multi-level model, we present a Bayesian treatment for the meta-learning inner loop through the use of deep kernels. As a result we can learn a kernel that transfers to new tasks; we call this Deep Kernel Transfer (DKT). This approach has many advantages: is straightforward to implement as a single optimizer, provides uncertainty quantification, and does not require estimation of task-specific parameters. We empirically demonstrate that DKT outperforms several state-of-the-art algorithms in few-shot classification, and is the state of the art for cross-domain adaptation and regression. We conclude that complex meta-learning routines can be replaced by a simpler Bayesian model without loss of accuracy. ",Bayesian Meta-Learning for the Few-Shot Setting via Deep Kernels,3,"['Excited to release new paper w. @mpatacch on Gaussian Processes (GPs) for few-shot learning (with deep kernel transfer). \n\n 📝paper: <LINK>\n 💾 code: <LINK> (in @PyTorch and GPyTorch) \n\n(1/3)', 'GPs are a natural fit for few-shot because they work well in low data regime and have built-in uncertainty estimation. We apply this on standard few-shot benchmarks via deep kernel learning (@andrewgwils), using output feature maps of NN as input to GP\n\n(2/3)', 'E.g. my favourite plot. Trained on random periodic head pose trajectories,Feature Transfer (FT) and GP are tested on flat rotation with Cutout noise. FT overfits, GP predicts correctly &amp; acknowledges uncertainty on the noisy point. Results on std few-shot benchmarks in paper. https://t.co/UtPoX1k4yB']",19,10,730
335,150,1499114065892823040,610427323,Desika Narayanan,"hey galaxy astronomers - have you ever wondered how well you're deriving the physical properties of your galaxies from SED fitting? in a new paper led by super awesome grad student sidney lower, we dive into this! <LINK> [1/ now we've talked about this before (as I'm sure you remember) <LINK> where the tl;dr is that yes of course the model you implement for star formation histories matters (a lot), and non parametric SFHs get you much better derived physical properties [2/ Today we're asking the question 'how well do you derive the dust attenuation curve from SED fitting'. to do this, we're generating mock SEDs from cosmological simulations, and then pretending like we're observers and fitting those SEDs. [3/ (but, since we're not observers, we know the actual like real life true physical properties of our not actual not real life fake galaxies) [4/ now we've talked about dust attenuation before. but in case you don't remember, lemme remind you. dust attenuation is that real difficult property to characterize that folds in both dust extinction, and star-dust geometry issues in galaxies (see my keynote art below) [5/: <LINK> (in case you want to do like a real real deep dive into dust attenuation, within the page and reference limits of an ARA&A anyways, here's some light reading) <LINK> [6/ anyways, so yeah. dust attenuation folds in the complexities of stars over here, and dust over there and all mixed up in galaxies. so maybe unsurprisingly, the best thing to do in SED fitting is to try to encapsulate this effect. this is exactly what we (sidney) did [7/ we introduced a new model that includes a fraction of unobscured stellar light, which gives us a super flexible attenuation curve that we, like with flexible star formation histories, vary in the fitting process [8/ this does super well! here, we look at how well the recovered attenuation curves from SED fitting compare to the true attenuation curves from the galaxy models. L--&gt;R increases the flexibility of the attenuation curve we include in the SED fitting process [9/ <LINK> quite naturally, this also results in improved derived physical properties like SFRs compared with more traditional models like uniform screens! the cool part is - all of this is already implemented in the awesome SED fitting code prospector. [10/ so anyways, now that you've made it this far, be flexible in your SFHs...be flexible in your dust attenuation modeling, and go out and observe awesome things. [11/11]",https://arxiv.org/abs/2203.00074,"One of the most common methods for inferring galaxy attenuation curves is via spectral energy distribution (SED) modeling, where the dust attenuation properties are modeled simultaneously with other galaxy physical properties. In this paper, we assess the ability of SED modeling to infer these dust attenuation curves from broadband photometry, and suggest a new flexible model that greatly improves the accuracy of attenuation curve derivations. To do this, we fit mock SEDs generated from the Simba cosmological simulation with the Prospector SED fitting code. We consider the impact of the commonly-assumed uniform screen model and introduce a new non-uniform screen model parameterized by the fraction of unobscured stellar light. This non-uniform screen model allows for a non-zero fraction of stellar light to remain unattenuated, resulting in a more flexible attenuation curve shape by decoupling the shape of the UV attenuation curve from the optical attenuation curve. 
The ability to constrain the dust attenuation curve is significantly improved with the use of a non-uniform screen model, with the median offset in UV attenuation decreasing from $-0.30$ dex with a uniform screen model to $-0.17$ dex with the non-uniform screen model. With this increase in dust attenuation modeling accuracy, we also improve the star formation rates (SFRs) inferred with the non-uniform screen model, decreasing the SFR offset on average by $0.12$ dex. We discuss the efficacy of this new model, focusing on caveats with modeling star-dust geometries and the constraining power of available SED observations. ","How Well Can We Measure Galaxy Dust Attenuation Curves? The Impact of
the Assumed Star-Dust Geometry Model in SED Fitting",11,"[""hey galaxy astronomers - have you ever wondered how well you're deriving the physical properties of your galaxies from SED fitting? in a new paper led by super awesome grad student sidney lower, we dive into this! \n\n<LINK>\n\n[1/"", ""now we've talked about this before (as I'm sure you remember)\n\nhttps://t.co/Ufd6WvJmVL\n\nwhere the tl;dr is that yes of course the model you implement for star formation histories matters (a lot), and non parametric SFHs get you much better derived physical properties [2/"", ""Today we're asking the question 'how well do you derive the dust attenuation curve from SED fitting'. to do this, we're generating mock SEDs from cosmological simulations, and then pretending like we're observers and fitting those SEDs. [3/"", ""(but, since we're not observers, we know the actual like real life true physical properties of our not actual not real life fake galaxies) [4/"", ""now we've talked about dust attenuation before. but in case you don't remember, lemme remind you. dust attenuation is that real difficult property to characterize that folds in both dust extinction, and star-dust geometry issues in galaxies (see my keynote art below) [5/: https://t.co/jKjM7qHJ3r"", ""(in case you want to do like a real real deep dive into dust attenuation, within the page and reference limits of an ARA&amp;A anyways, here's some light reading)\n\nhttps://t.co/lbU8kYK3SE\n\n[6/"", 'anyways, so yeah. dust attenuation folds in the complexities of stars over here, and dust over there and all mixed up in galaxies. so maybe unsurprisingly, the best thing to do in SED fitting is to try to encapsulate this effect. this is exactly what we (sidney) did [7/', 'we introduced a new model that includes a fraction of unobscured stellar light, which gives us a super flexible attenuation curve that we, like with flexible star formation histories, vary in the fitting process [8/', 'this does super well! here, we look at how well the recovered attenuation curves from SED fitting compare to the true attenuation curves from the galaxy models. L--&gt;R increases the flexibility of the attenuation curve we include in the SED fitting process [9/ https://t.co/bja9MdQnEW', 'quite naturally, this also results in improved derived physical properties like SFRs compared with more traditional models like uniform screens! the cool part is - all of this is already implemented in the awesome SED fitting code prospector. [10/', ""so anyways, now that you've made it this far, be flexible in your SFHs...be flexible in your dust attenuation modeling, and go out and observe awesome things. [11/11]""]",22,03,2489
336,100,1182214682356174849,2352149191,Mimmo Nardiello,"Project PATHOS (""A Psf-based Approach to @NASA_TESS High quality data Of Stellar clusters"") has officially started! Check out my paper <LINK> and give a look to the first light curves of 47Tuc: <LINK> .We also report a new candidate exoplanet!🥳 <LINK> Many thanks to the co-authors @valentingranata @borsatoluca83 @Malavolta_Exo @ValerioNascim <LINK> ... and also many thanks to @ScottWFleming for the amazing work done to upload the light curves on the MAST archive :)",https://arxiv.org/abs/1910.03592,"The TESS mission will survey ~85 % of the sky, giving us the opportunity of extracting high-precision light curves of millions of stars, including stellar cluster members. In this work, we present our project ""A PSF-based Approach to TESS High quality data Of Stellar clusters"" (PATHOS), aimed at searching and characterise candidate exoplanets and variable stars in stellar clusters using our innovative method for the extraction of high-precision light curves of stars located in crowded environments. Our technique of light-curve extraction involves the use of empirical Point Spread Functions (PSFs), an input catalogue and neighbour-subtraction. The PSF-based approach allows us to minimise the dilution effects in crowded environments and to extract high-precision photometry for stars in the faint regime (G>13). For this pilot project, we extracted, corrected, and analysed the light curves of 16641 stars located in a dense region centred on the globular cluster 47 Tuc. We were able to reach the TESS magnitude T~16.5 with a photometric precision of ~1 % on the 6.5-hour timescale; in the bright regime we were able to detect transits with depth of ~34 parts per million. We searched for variables and candidate transiting exoplanets. Our pipeline detected one planetary candidate orbiting a main sequence star in the Galactic field. We analysed the period-luminosity distribution for red-giant stars of 47 Tuc and the eclipsing binaries in the field. Light curves are uploaded on the Mikulski Archive for Space Telescopes under the project PATHOS. ","A PSF-based Approach to TESS High quality data Of Stellar clusters
(PATHOS) -- I. Search for exoplanets and variable stars in the field of 47
Tuc",3,"['Project PATHOS (""A Psf-based Approach to @NASA_TESS High quality data Of Stellar clusters"") has officially started! Check out my paper <LINK> and give a look to the first light curves of 47Tuc: <LINK> .We also report a new candidate exoplanet!🥳 <LINK>', 'Many thanks to the co-authors @valentingranata @borsatoluca83 @Malavolta_Exo @ValerioNascim https://t.co/HbJtWGHR91', '... and also many thanks to @ScottWFleming for the amazing work done to upload the light curves on the MAST archive :)']",19,10,469
337,43,1419645116638183424,1035496901800587264,Edoardo Ponti,"In our new paper, @KreutzerJulia @licwu @sivareddyg and I present a method to enhance translation-based cross-lingual transfer (gains up to 2.7 per task and 5.6 per language). Pdf: <LINK>. Code: <LINK> @Mila_Quebec @CambridgeLTL @GoogleAI While often achieving SOTA results, translation-based transfer suffers from some limitations: 1) errors accumulate along the pipeline and cannot be corrected; 2) only the maximum-likelihood translation is generated, which may not suffice for the downstream task. Instead, we integrate both translator and classifier into a single model, by treating the intermediate translations as a latent random variable. By initialising both components with pre-trained models, our method is suitable for few-shot learning. <LINK> By performing inference under this model, 1) we fine-tune the translator end-to-end or via Minimum Risk Training according to the downstream task loss; 2) we draw multiple translation samples to perform ensemble prediction in the downstream task. <LINK> We evaluate our model on several classification tasks (XCOPA, XNLI, PAWS-X) and report gains especially for resource-poor languages like Haitian Creole. In fact, our model yields larger gains for languages whose BLEU scores are lower. <LINK> We also systematically compare a large set of NMT models wrt their effect on cross-lingual transfer. We find that 1) their BLUE scores vary to a large extent across models and languages; 2) lower-ranked translations do not degrade the downstream performance. <LINK>",http://arxiv.org/abs/2107.11353,"While achieving state-of-the-art results in multiple tasks and languages, translation-based cross-lingual transfer is often overlooked in favour of massively multilingual pre-trained encoders. Arguably, this is due to its main limitations: 1) translation errors percolating to the classification phase and 2) the insufficient expressiveness of the maximum-likelihood translation. To remedy this, we propose a new technique that integrates both steps of the traditional pipeline (translation and classification) into a single model, by treating the intermediate translations as a latent random variable. As a result, 1) the neural machine translation system can be fine-tuned with a variant of Minimum Risk Training where the reward is the accuracy of the downstream task classifier. Moreover, 2) multiple samples can be drawn to approximate the expected loss across all possible translations during inference. We evaluate our novel latent translation-based model on a series of multilingual NLU tasks, including commonsense reasoning, paraphrase identification, and natural language inference. We report gains for both zero-shot and few-shot learning setups, up to 2.7 accuracy points on average, which are even more prominent for low-resource languages (e.g., Haitian Creole). Finally, we carry out in-depth analyses comparing different underlying NMT models and assessing the impact of alternative translations on the downstream performance. ",Modelling Latent Translations for Cross-Lingual Transfer,6,"['In our new paper, @KreutzerJulia @licwu @sivareddyg and I present a method to enhance translation-based cross-lingual transfer (gains up to 2.7 per task and 5.6 per language). Pdf: <LINK>. 
Code: <LINK> @Mila_Quebec @CambridgeLTL @GoogleAI', 'While often achieving SOTA results, translation-based transfer suffers from some limitations: 1) errors accumulate along the pipeline and cannot be corrected; 2) only the maximum-likelihood translation is generated, which may not suffice for the downstream task.', 'Instead, we integrate both translator and classifier into a single model, by treating the intermediate translations as a latent random variable. By initialising both components with pre-trained models, our method is suitable for few-shot learning. https://t.co/4dVuYAYkV5', 'By performing inference under this model, 1) we fine-tune the translator end-to-end or via Minimum Risk Training according to the downstream task loss; 2) we draw multiple translation samples to perform ensemble prediction in the downstream task. https://t.co/qACQsAByBx', 'We evaluate our model on several classification tasks (XCOPA, XNLI, PAWS-X) and report gains especially for resource-poor languages like Haitian Creole. In fact, our model yields larger gains for languages whose BLEU scores are lower. https://t.co/Rab4uIevBJ', 'We also systematically compare a large set of NMT models wrt their effect on cross-lingual transfer. We find that 1) their BLUE scores vary to a large extent across models and languages; 2) lower-ranked translations do not degrade the downstream performance. https://t.co/bf7Nx1T7uD']",21,07,1519
338,107,1438050997730004992,776765039726460929,Carlo Felice Manara,Do not just look at #MAPS papers.... Today there is also #PENELLOPELP paper II on @arxiv <LINK> PENELLOPE II. CVSO 104: a pre-main sequence close binary with an optical companion in Ori OB1 by Frasca et al. A new spectroscopic binary was discovered in our data! <LINK>,https://arxiv.org/abs/2109.06305,"We present results of our study of the close pre-main sequence spectroscopic binary CVSO 104 in Ori OB1, based on data obtained within the PENELLOPE legacy program. We derive, for the first time, the orbital elements of the system and the stellar parameters of the two components. The system is composed of two early M-type stars and has an orbital period of about 5 days and a mass ratio of 0.92, but contrarily to expectations does not appear to have a tertiary companion. Both components have been (quasi-)synchronized, but the orbit is still very eccentric. The spectral energy distribution clearly displays a significant infrared excess compatible with a circumbinary disk. The analysis of HeI and Balmer line profiles, after the removal of the composite photospheric spectrum, reveals that both components are accreting at a similar level. We also observe excess emission in H$\alpha$ and H$\beta$, which appears redshifted or blueshifted by more than 100 km/s with respect to the mass center of the system depending on the orbital phase. This additional emission could be connected with accretion structures, such as funnels of matter from the circumbinary disk. We also analyze the optical companion located at about 2"".4 from the spectroscopic binary. This companion, that we named CVSO 104B, turns out to be a background Sun-like star not physically associated with the PMS system and not belonging to Ori OB1. ","PENELLOPE II. CVSO 104: a pre-main sequence close binary with an optical
companion in Ori OB1",1,['Do not just look at #MAPS papers.... Today there is also #PENELLOPELP paper II on @arxiv \n<LINK>\nPENELLOPE II. CVSO 104: a pre-main sequence close binary with an optical companion in Ori OB1 by Frasca et al.\nA new spectroscopic binary was discovered in our data! <LINK>'],21,09,268
339,5,1433825799929540619,2698179823,Yaron Lipman,"New paper: Introducing Moser Flows (MFs), a new class of continuous normalizing flows (CNFs) on manifolds based on divergences of neural nets. First generative modeling results on general curved surfaces! with Noam Rozen @adityagrover_ @mnick <LINK> 1/7 <LINK> @adityagrover_ @mnick Given two probability densities on a manifold, J. Moser (1965) constructed a flow pushing the first to the second. The flow is defined by a vector field, the divergence of which equals the difference between densities. Here is a 1D example: 2/7 <LINK> This motivates MF, a universal approximator, where the difference in the model and prior densities is expressed using the (local, easy to approximate) divergence operator applied directly to a NN. Unlike prior CNF methods, it doesn’t require ODE solvers during training! 3/7 MFs are significantly faster and more accurate at density estimation compared to FFJORD: 4/7 <LINK> MFs achieve SOTA likelihoods by a large margin on earth science benchmarks with an underlying spherical geometry. 5/7 <LINK> MF density estimation and generation over freeform 2D surface. 6/7 <LINK> We are excited with the application of MFs to new scientific domains with geometric data. Scaling Moser Flow to high dimensional manifolds is an open and interesting future work challenge! 7/7",https://arxiv.org/abs/2108.08052,"We are interested in learning generative models for complex geometries described via manifolds, such as spheres, tori, and other implicit surfaces. Current extensions of existing (Euclidean) generative models are restricted to specific geometries and typically suffer from high computational costs. We introduce Moser Flow (MF), a new class of generative models within the family of continuous normalizing flows (CNF). MF also produces a CNF via a solution to the change-of-variable formula, however differently from other CNF methods, its model (learned) density is parameterized as the source (prior) density minus the divergence of a neural network (NN). The divergence is a local, linear differential operator, easy to approximate and calculate on manifolds. Therefore, unlike other CNFs, MF does not require invoking or backpropagating through an ODE solver during training. Furthermore, representing the model density explicitly as the divergence of a NN rather than as a solution of an ODE facilitates learning high fidelity densities. Theoretically, we prove that MF constitutes a universal density approximator under suitable assumptions. Empirically, we demonstrate for the first time the use of flow models for sampling from general curved surfaces and achieve significant improvements in density estimation, sample quality, and training complexity over existing CNFs on challenging synthetic geometries and real-world benchmarks from the earth and climate sciences. ",Moser Flow: Divergence-based Generative Modeling on Manifolds,7,"['New paper: Introducing Moser Flows (MFs), a new class of continuous normalizing flows (CNFs) on manifolds based on divergences of neural nets. First generative modeling results on general curved surfaces! \n\nwith Noam Rozen @adityagrover_ @mnick \n\n<LINK>\n\n1/7 <LINK>', '@adityagrover_ @mnick Given two probability densities on a manifold, J. Moser (1965) constructed a flow pushing the first to the second. The flow is defined by a vector field, the divergence of which equals the difference between densities. 
Here is a 1D example:\n\n2/7 https://t.co/7Rnxmvsuel', 'This motivates MF, a universal approximator, where the difference in the model and prior densities is expressed using the (local, easy to approximate) divergence operator applied directly to a NN. Unlike prior CNF methods, it doesn’t require ODE solvers during training! \n\n3/7', 'MFs are significantly faster and more accurate at density estimation compared to FFJORD:\n\n4/7 https://t.co/BuXp0czbOP', 'MFs achieve SOTA likelihoods by a large margin on earth science benchmarks with an underlying spherical geometry.\n\n5/7 https://t.co/tq1MLN8sN9', 'MF density estimation and generation over freeform 2D surface.\n\n6/7 https://t.co/RAeKxNWjel', 'We are excited with the application of MFs to new scientific domains with geometric data. Scaling Moser Flow to high dimensional manifolds is an open and interesting future work challenge!\n\n7/7']",21,08,1304
340,233,1273151578040684546,1050529973751205888,jörn jacobsen,"Invertible Neural Nets (INNs) / Normalizing Flows are amazing! But are INNs always invertible? Surprisingly we find that they often violate this constraint! Below a Glow reconstruction on CelebA with @JensBehrmann @PaulVicol @kcjacksonwang @RogerGrosse <LINK> <LINK> Derivatives of inverses can become arbitrarily large =&gt; ""exploding inverse"" This can lead to analytical invertibility not carrying through to the numerics, INNs become non-invertible! We explain this effect by analysing bi-Lipschitz properties of common invertible networks <LINK> We also find striking differences between INNs. Additive coupling blocks stably train with memory-saving gradients, while affine couplings lead to incorrect gradient computation, highlighting the importance to understand influence of architectural choices on exploding inverses <LINK> Because memory-saving backprop only requires accurate invertibility on training data, we propose an architecture-agnostic solution ensuring local invertibility: bi-directional finite differences penalties But this is not enough for Normalizing Flows (NFs)! For NFs we often want density estimates on samples not from the training data =&gt; We need global invertibility! Indeed NFs can suffer from exploding inverses on OOD inputs implying meaningless density estimates. Solving this requires stable architectures like Residual Flows! <LINK> Take home messages: 1) Analytical invertibility does not necessarily imply numerical invertibility 2) Different tasks have different requirements on invertibility (e.g. local vs. global) 3) Controlling stability is crucial for principled and successful application of INNs All this and much more in our new work: ""Understanding and Mitigating Exploding Inverses in Invertible Neural Networks"" Link: <LINK> 👩‍🔬 We hope our work encourages researchers to consider stability as an important ingredient of INN design 👨‍🔬 Code here: <LINK> @emiel_hoogeboom That's very interesting! For the sake of brevity we left autoregressive models out, but it should be possible to extend our observations / bounds to this case as well. Note that affine coupling blocks are not globally Lipschitz (as we show)!",https://arxiv.org/abs/2006.09347,"Invertible neural networks (INNs) have been used to design generative models, implement memory-saving gradient computation, and solve inverse problems. In this work, we show that commonly-used INN architectures suffer from exploding inverses and are thus prone to becoming numerically non-invertible. Across a wide range of INN use-cases, we reveal failures including the non-applicability of the change-of-variables formula on in- and out-of-distribution (OOD) data, incorrect gradients for memory-saving backprop, and the inability to sample from normalizing flow models. We further derive bi-Lipschitz properties of atomic building blocks of common architectures. These insights into the stability of INNs then provide ways forward to remedy these failures. For tasks where local invertibility is sufficient, like memory-saving backprop, we propose a flexible and efficient regularizer. For problems where global invertibility is necessary, such as applying normalizing flows on OOD data, we show the importance of designing stable INN building blocks. ","Understanding and Mitigating Exploding Inverses in Invertible Neural
Networks",9,"['Invertible Neural Nets (INNs) / Normalizing Flows are amazing! But are INNs always invertible? \n\nSurprisingly we find that they often violate this constraint!\nBelow a Glow reconstruction on CelebA\n\nwith @JensBehrmann @PaulVicol @kcjacksonwang @RogerGrosse\n\n<LINK> <LINK>', 'Derivatives of inverses can become arbitrarily large =&gt; ""exploding inverse""\n\nThis can lead to analytical invertibility not carrying through to the numerics, INNs become non-invertible!\n\nWe explain this effect by analysing bi-Lipschitz properties of common invertible networks https://t.co/LbIj5woZcG', 'We also find striking differences between INNs. Additive coupling blocks stably train with memory-saving gradients, while affine couplings lead to incorrect gradient computation, highlighting the importance to understand influence of architectural choices on exploding inverses https://t.co/rcMyPymdaf', 'Because memory-saving backprop only requires accurate invertibility on training data, we propose an architecture-agnostic solution ensuring local invertibility: bi-directional finite differences penalties\n\nBut this is not enough for Normalizing Flows (NFs)!', 'For NFs we often want density estimates on samples not from the training data =&gt; We need global invertibility! \n\nIndeed NFs can suffer from exploding inverses on OOD inputs implying meaningless density estimates. Solving this requires stable architectures like Residual Flows! https://t.co/Iqa9OJtczg', 'Take home messages: \n\n1) Analytical invertibility does not necessarily imply numerical invertibility \n\n2) Different tasks have different requirements on invertibility (e.g. local vs. global)\n\n3) Controlling stability is crucial for principled and successful application of INNs', 'All this and much more in our new work: \n\n""Understanding and Mitigating Exploding Inverses in Invertible Neural Networks"" \n\nLink: https://t.co/I4OAOdqM1Z\n\n👩\u200d🔬 We hope our work encourages researchers to consider stability as an important ingredient of INN design 👨\u200d🔬', 'Code here: https://t.co/WjqsNwQ2K2', ""@emiel_hoogeboom That's very interesting! For the sake of brevity we left autoregressive models out, but it should be possible to extend our observations / bounds to this case as well. Note that affine coupling blocks are not globally Lipschitz (as we show)!""]",20,06,2177
341,91,1037251115832815616,4242812189,Adam Noel,A common design choice for a diffusive mol comm system is for receiver to be passive or reactive. A big advantage for passive is that it's much faster to accurately simulate. We propose an algorithm that can speed up absorption by orders of magnitude <LINK>,https://arxiv.org/abs/1809.00808,"A novel a priori Monte Carlo (APMC) algorithm is proposed to accurately simulate the molecules absorbed at spherical receiver(s) with low computational complexity in diffusion-based molecular communication (MC) systems. It is demonstrated that the APMC algorithm achieves high simulation efficiency since by using this algorithm, the fraction of molecules absorbed for a relatively large time step length precisely matches the analytical result. Therefore, the APMC algorithm overcomes the shortcoming of the existing refined Monte Carlo (RMC) algorithm which enables accurate simulation for a relatively small time step length only. Moreover, for the RMC algorithm, an expression is proposed to quickly predict the simulation accuracy as a function of the time step length and system parameters, which facilitates the choice of simulation time step for a given system. Furthermore, a rejection threshold is proposed for both the RMC and APMC algorithms to significantly save computational complexity while causing an extremely small loss in accuracy. ","A Novel A Priori Simulation Algorithm for Absorbing Receivers in
Diffusion-Based Molecular Communication Systems",1,"[""A common design choice for a diffusive mol comm system is for receiver to be passive or reactive. A big advantage for passive is that it's much faster to accurately simulate. We propose an algorithm that can speed up absorption by orders of magnitude <LINK>""]",18,09,257
342,167,1433429327002046465,10580512,Stanislas Polu,"📔 New MiniF2F paper! <LINK> Introduces MiniF2F a benchmark of Olympiad-level problem statements formalized in Lean/Metamath/Isabelle. GPT-f applied to MiniF2F/Metamath ~ 2% 🥶 GPT-f applied to MiniF2F/Lean ~ 29% 🔥 Code: <LINK> 👇 <LINK> This work and paper was led by @KunhaoZ during his 5 months scientific internship at @Polytechnique in collaboration with @OpenAI helped with @jessemhan's Lean superpowers. Way to go Kunhao! What an achievement 🙌 Why a benchmark of maths exercises? Because it's very hard to compare neural theorem provers since they are generally tied to specific formal systems and hence specific math libraries and their splits. Also this benchmark is definitely out of distribution compared to typical maths libraries, so hopefully it will serve as a useful measure of mathematical reasoning and generalization in the context of formal maths. This benchmark has been super useful to us @OpenAI and hopefully it will be equally useful to other teams interested in neural theorem proving. We see it as a stepping stone towards the IMO Grand Challenge (<LINK>) 🦾 Looking forward to adding HOL Light, Coq and other systems coverage. Contributions are welcome! (big thanks to @WendaLi8 for his contribution to Isabelle coverage!)",https://arxiv.org/abs/2109.00110,"We present miniF2F, a dataset of formal Olympiad-level mathematics problems statements intended to provide a unified cross-system benchmark for neural theorem proving. The miniF2F benchmark currently targets Metamath, Lean, Isabelle (partially) and HOL Light (partially) and consists of 488 problem statements drawn from the AIME, AMC, and the International Mathematical Olympiad (IMO), as well as material from high-school and undergraduate mathematics courses. We report baseline results using GPT-f, a neural theorem prover based on GPT-3 and provide an analysis of its performance. We intend for miniF2F to be a community-driven effort and hope that our benchmark will help spur advances in neural theorem proving. ",MiniF2F: a cross-system benchmark for formal Olympiad-level mathematics,6,"['📔 New MiniF2F paper! <LINK>\n\nIntroduces MiniF2F a benchmark of Olympiad-level problem statements formalized in Lean/Metamath/Isabelle.\n\nGPT-f applied to MiniF2F/Metamath ~ 2% 🥶\nGPT-f applied to MiniF2F/Lean ~ 29% 🔥\n\nCode: <LINK>\n\n👇 <LINK>', ""This work and paper was led by @KunhaoZ during his 5 months scientific internship at @Polytechnique in collaboration with @OpenAI helped with @jessemhan's Lean superpowers.\n\nWay to go Kunhao! What an achievement 🙌"", ""Why a benchmark of maths exercises? Because it's very hard to compare neural theorem provers since they are generally tied to specific formal systems and hence specific math libraries and their splits."", 'Also this benchmark is definitely out of distribution compared to typical maths libraries, so hopefully it will serve as a useful measure of mathematical reasoning and generalization in the context of formal maths.', 'This benchmark has been super useful to us @OpenAI and hopefully it will be equally useful to other teams interested in neural theorem proving. We see it as a stepping stone towards the IMO Grand Challenge (https://t.co/BXAU3le1YR) 🦾', 'Looking forward to adding HOL Light, Coq and other systems coverage. Contributions are welcome! (big thanks to @WendaLi8 for his contribution to Isabelle coverage!)']",21,09,1246
343,54,1205157218611535882,59595964,Nic Ross,"I know y'all are having fun at the #UKElection #GeneralElection2019 #GE2019, but there's this very nice new paper on the arXiv today:: ""The first high-redshift changing-look quasars"" <LINK> Bottom line(s): (i) CIV exhibits CLQ behaviour and (ii) even in their 'low-state', 10^9 supermassive black holes seem to accrete at a decent fraction of Eddington. 👍🔭🌌🪐",https://arxiv.org/abs/1912.05310v1,"We report on three redshift $z>2$ quasars with dramatic changes in their C IV emission lines, the first sample of changing-look quasars (CLQs) at high redshift. This is also the first time the changing-look behaviour has been seen in a high-ionisation emission line. SDSS J1205+3422, J1638+2827, and J2228+2201 show interesting behaviour in their observed optical light curves, and subsequent spectroscopy shows significant changes in the C IV broad emission line, with both line collapse and emergence being displayed on rest-frame timescales of $\sim$240-1640 days. These are rapid changes, especially when considering virial black hole mass estimates of $M_{\rm BH} > 10^{9} M_{\odot}$ for all three quasars. Continuum and emission line measurements from the three quasars show changes in the continuum-equivalent width plane with the CLQs seen to be on the edge of the full population distribution, and showing indications of an intrinsic Baldwin effect. We put these observations in context with recent state-change models, and note that even in their observed low-state, the C IV CLQs are generally above $\sim$5\% in Eddington luminosity. ",The first high-redshift changing-look quasars,2,"['I know y\'all are having fun at the #UKElection #GeneralElection2019 #GE2019, but there\'s this very nice new paper on the arXiv today:: \n\n""The first high-redshift changing-look quasars""\n<LINK>', ""Bottom line(s): (i) CIV exhibits CLQ behaviour and (ii) even in their 'low-state', 10^9 supermassive black holes seem to accrete at a decent fraction of Eddington. \n\n👍🔭🌌🪐""]",19,12,360
344,122,1283047741019561985,1059813876454354955,Olaf Ronneberger,"(1/2) Our new paper ""Contrastive Training for Improved Out-of-Distribution Detection"" <LINK> with @jimwinkens, @BunelR, @abzz4ssj, Robert Stanforth, @vivnat, @joe_ledsam, @patmacwilliams, @pushmeet, @alan_karthi, @saakohl, @TaylanCemgilML, @arkitus <LINK> (2/2) We set a new state-of-the-art for the challenging near OOD setting without any outlier data during training. Contrastive training on in-distribution data only is sufficient to boost the performance! @AVMiceliBarone @jimwinkens @BunelR @abzz4ssj @vivnat @joe_ledsam @patmacwilliams @pushmeet @alan_karthi @saakohl @TaylanCemgilML @arkitus Yes, we use the standard Mahalanobis approach.",https://arxiv.org/abs/2007.05566,"Reliable detection of out-of-distribution (OOD) inputs is increasingly understood to be a precondition for deployment of machine learning systems. This paper proposes and investigates the use of contrastive training to boost OOD detection performance. Unlike leading methods for OOD detection, our approach does not require access to examples labeled explicitly as OOD, which can be difficult to collect in practice. We show in extensive experiments that contrastive training significantly helps OOD detection performance on a number of common benchmarks. By introducing and employing the Confusion Log Probability (CLP) score, which quantifies the difficulty of the OOD detection task by capturing the similarity of inlier and outlier datasets, we show that our method especially improves performance in the `near OOD' classes -- a particularly challenging setting for previous methods. ",Contrastive Training for Improved Out-of-Distribution Detection,3,"['(1/2) Our new paper ""Contrastive Training for Improved\nOut-of-Distribution Detection"" <LINK> with @jimwinkens, @BunelR, @abzz4ssj, Robert Stanforth, @vivnat, @joe_ledsam, @patmacwilliams, @pushmeet, @alan_karthi, @saakohl, @TaylanCemgilML, @arkitus <LINK>', '(2/2) We set a new state-of-the-art for the challenging near OOD setting without any outlier data during training. Contrastive training on in-distribution data only is sufficient to boost the performance!', '@AVMiceliBarone @jimwinkens @BunelR @abzz4ssj @vivnat @joe_ledsam @patmacwilliams @pushmeet @alan_karthi @saakohl @TaylanCemgilML @arkitus Yes, we use the standard Mahalanobis approach.']",20,07,646
345,163,1400864196305297408,2485053080,Swarnadeep Saha,"New #NAACL2021 paper (next Tues) on explaining compositional reasoning w/ multiple proof graphs ""multiPRover: Generating Multiple Proofs for Improved Interpretability in Rule Reasoning"" Paper: <LINK> Code: <LINK> @prateeky2806 @mohitban47 1/n <LINK> Compositional reasoning is not always unique -- there can be multiple ways of reaching the correct answer. With multiPRover, we extend our #EMNLP2020 work on PRover (which generates a single proof) by now tackling a more challenging task of generating a *set* of proof graphs. 2/n We find that PRover, when asked to generate top-k proofs, does not perform well & that multiple proofs for a ques. often have common subgraphs between them. So, to jointly learn from all proofs & exploit their correlations better, we pose it as a (graph) set-generation task. 3/n <LINK> We propose 2 multiPRover models: (1) Multilabel-multiPRover generating a set of proofs via multi-label classification & implicit conditioning between proofs, (2) Iterative-multiPRover generating proofs iteratively by explicitly conditioning on the previously generated proofs 4/n Both multiPRover models show signif. improvements on all synthetic, zero-shot, & human-paraphrased datasets (from RuleTakers). Iterative-multiPRover also outperforms PRover on a zero-shot dataset with all single-proof examples & has better generalization to higher depth ques. 5/n <LINK>",https://arxiv.org/abs/2106.01354,"We focus on a type of linguistic formal reasoning where the goal is to reason over explicit knowledge in the form of natural language facts and rules (Clark et al., 2020). A recent work, named PRover (Saha et al., 2020), performs such reasoning by answering a question and also generating a proof graph that explains the answer. However, compositional reasoning is not always unique and there may be multiple ways of reaching the correct answer. Thus, in our work, we address a new and challenging problem of generating multiple proof graphs for reasoning over natural language rule-bases. Each proof provides a different rationale for the answer, thereby improving the interpretability of such reasoning systems. In order to jointly learn from all proof graphs and exploit the correlations between multiple proofs for a question, we pose this task as a set generation problem over structured output spaces where each proof is represented as a directed graph. We propose two variants of a proof-set generation model, multiPRover. Our first model, Multilabel-multiPRover, generates a set of proofs via multi-label classification and implicit conditioning between the proofs; while the second model, Iterative-multiPRover, generates proofs iteratively by explicitly conditioning on the previously generated proofs. Experiments on multiple synthetic, zero-shot, and human-paraphrased datasets reveal that both multiPRover models significantly outperform PRover on datasets containing multiple gold proofs. Iterative-multiPRover obtains state-of-the-art proof F1 in zero-shot scenarios where all examples have single correct proofs. It also generalizes better to questions requiring higher depths of reasoning where multiple proofs are more frequent. Our code and models are publicly available at this https URL ","multiPRover: Generating Multiple Proofs for Improved Interpretability in
Rule Reasoning",5,"['New #NAACL2021 paper (next Tues) on explaining compositional reasoning w/ multiple proof graphs ""multiPRover: Generating Multiple Proofs for Improved Interpretability in Rule Reasoning""\n \nPaper: <LINK>\nCode: <LINK>\n \n@prateeky2806 @mohitban47\n1/n <LINK>', 'Compositional reasoning is not always unique -- there can be multiple ways of reaching the correct answer. With multiPRover, we extend our #EMNLP2020 work on PRover (which generates a single proof) by now tackling a more challenging task of generating a *set* of proof graphs.\n2/n', 'We find that PRover, when asked to generate top-k proofs, does not perform well &amp; that multiple proofs for a ques. often have common subgraphs between them. So, to jointly learn from all proofs &amp; exploit their correlations better, we pose it as a (graph) set-generation task.\n3/n https://t.co/odAjr2k0B5', 'We propose 2 multiPRover models: (1) Multilabel-multiPRover generating a set of proofs via multi-label classification &amp; implicit conditioning between proofs, (2) Iterative-multiPRover generating proofs iteratively by explicitly conditioning on the previously generated proofs\n4/n', 'Both multiPRover models show signif. improvements on all synthetic, zero-shot, &amp; human-paraphrased datasets (from RuleTakers). Iterative-multiPRover also outperforms PRover on a zero-shot dataset with all single-proof examples &amp; has better generalization to higher depth ques.\n5/n https://t.co/fcRZ9SPpq8']",21,06,1387
346,135,1228180085657591808,4666231375,Konstantin Batygin,"Our new paper on the solar system's infancy is out: <LINK> Based on the dynamical structure of the cold classical Kuiper belt, we conclude that the closest approach of a passing star within the solar system’s birth cluster must have been greater than ~240 AU! <LINK> @planefag I'm partial to disk-torquing by a primordial binary stellar companion as a plausible explanation for the solar obliquity. @ChristianPeel I think in-situ formation of P9 is unlikely even in absence of this argument. But one plausible story for P9 formation is scattering off of Jupiter, followed by modification of the orbit by the cluster gravity. For this narrative, the derived constraints are very relevant. @astrokiwi no -- saw the DDA talk a while back, but was under the impression that the concentration was on the ~80AU warp previously reported by Volk & Malhotra. Just found it on ads, and looks like Christa+ get ~1.7 deg for the cold belt as do Brown & Pan. Thanks for pointing it out. @eringreeson Wonderful! Welcome back to 'dena. Hope to see you again soon!",https://arxiv.org/abs/2002.05656,"Most planetary systems -- including our own -- are born within stellar clusters, where interactions with neighboring stars can help shape the system architecture. This paper develops an orbit-averaged formalism to characterize the cluster's mean-field effects as well as the physics of long-period stellar encounters. Our secular approach allows for an analytic description of the dynamical consequences of the cluster environment on its constituent planetary systems. We analyze special cases of the resulting Hamiltonian, corresponding to eccentricity evolution driven by planar encounters, as well as hyperbolic perturbations upon dissipative disks. We subsequently apply our results to the early evolution of our solar system, where the cluster's collective potential perturbs the solar system's plane, and stellar encounters act to increase the velocity dispersion of the Kuiper belt. Our results are two-fold: first, we find that cluster effects can alter the mean plane of the solar system by $\lesssim1\deg$, and are thus insufficient to explain the $\psi\approx6\deg$ obliquity of the sun. Second, we delineate the extent to which stellar flybys excite the orbital dispersion of the cold classical Kuiper belt, and show that while stellar flybys may grow the cold belt's inclination by the observed amount, the resulting distribution is incompatible with the data. Correspondingly, our calculations place an upper limit on the product of the stellar number density and residence time of the sun in its birth cluster, $\eta\,\tau\lesssim2\times10^4\,$Myr/pc$^3$. ","Dynamics of Planetary Systems Within Star Clusters: Aspects of the Solar
System's Early Evolution",5,"[""Our new paper on the solar system's infancy is out: <LINK> Based on the dynamical structure of the cold classical Kuiper belt, we conclude that the closest approach of a passing star within the solar system’s birth cluster must have been greater than ~240 AU! <LINK>"", ""@planefag I'm partial to disk-torquing by a primordial binary stellar companion as a plausible explanation for the solar obliquity."", '@ChristianPeel I think in-situ formation of P9 is unlikely even in absence of this argument. But one plausible story for P9 formation is scattering off of Jupiter, followed by modification of the orbit by the cluster gravity. For this narrative, the derived constraints are very relevant.', '@astrokiwi no -- saw the DDA talk a while back, but was under the impression that the concentration was on the ~80AU warp previously reported by Volk &amp; Malhotra. Just found it on ads, and looks like Christa+ get ~1.7 deg for the cold belt as do Brown &amp; Pan. Thanks for pointing it out.', ""@eringreeson Wonderful! Welcome back to 'dena. Hope to see you again soon!""]",20,02,1048
347,117,1423096848730914821,203254308,Feng Li,"New paper with Li Li & @YanfeiKang. We estimate weights in the forecast combination via Bayesian log predictive scores, in which the optimal forecasting combination is determined by time series features from historical information. Feedback is welcome! <LINK>",https://arxiv.org/abs/2108.02082,"In this work, we propose a novel framework for density forecast combination by constructing time-varying weights based on time series features, which is called Feature-based Bayesian Forecasting Model Averaging (FEBAMA). Our framework estimates weights in the forecast combination via Bayesian log predictive scores, in which the optimal forecasting combination is determined by time series features from historical information. In particular, we use an automatic Bayesian variable selection method to add weight to the importance of different features. To this end, our approach has better interpretability compared to other black-box forecasting combination schemes. We apply our framework to stock market data and M3 competition data. Based on our structure, a simple maximum-a-posteriori scheme outperforms benchmark methods, and Bayesian variable selection can further enhance the accuracy for both point and density forecasts. ",Bayesian forecast combination using time-varying features,1,"['New paper with Li Li &amp; @YanfeiKang.\nWe estimate weights in the forecast combination via Bayesian log predictive scores, in which the optimal forecasting combination is determined by time series features from historical information. Feedback is welcome! <LINK>']",21,08,261
348,143,1427436677698670592,1219708000790876161,Datta Lab,"Interested in self-propelled living & active systems? In <LINK>, we describe recent progress in the study of active transport in complex environments, focusing on two key biological systems—bacteria & eukaryotic cells—as archetypes of active matter. (1/6) Active transport is fundamentally interesting in biology, physics, & engineering, and is important to biomedical, environmental, & industrial processes. How do complexities such as geometric constraints, mechanical cues, and external stimuli influence transport? (2/6) In this chapter to be published in a book by @RoySocChem press, we review research highlighting how such environmental factors can fundamentally alter cellular motility, hindering or promoting active transport in unexpected ways, & giving rise to fascinating new behaviors. (3/6) In parallel, we describe open questions and promising avenues for future research, and describe connections to other active systems & more general theoretical/computational models of transport processes in complex environments. (4/6) Our goal in writing this chapter was not to present a comprehensive overview of all the literature in the field, but rather, to highlight some active (pun intended) areas of research whose growth has been particularly rapid recently. (5/6) It was a lot of fun to work on this with postdocs Alejandro Martínez-Calvo and Carolina Trenado-Yuste. Please RT/share with anyone who might be interested interested. As always, any and all feedback is welcome! (6/6)",http://arxiv.org/abs/2108.07011,"The ability of many living systems to actively self-propel underlies critical biomedical, environmental, and industrial processes. While such active transport is well-studied in uniform settings, environmental complexities such as geometric constraints, mechanical cues, and external stimuli such as chemical gradients and fluid flow can strongly influence transport. In this chapter, we describe recent progress in the study of active transport in such complex environments, focusing on two prominent biological systems -- bacteria and eukaryotic cells -- as archetypes of active matter. We review research findings highlighting how environmental factors can fundamentally alter cellular motility, hindering or promoting active transport in unexpected ways, and giving rise to fascinating behaviors such as directed migration and large-scale clustering. In parallel, we describe specific open questions and promising avenues for future research. Furthermore, given the diverse forms of active matter -- ranging from enzymes and driven biopolymer assemblies, to microorganisms and synthetic microswimmers, to larger animals and even robots -- we also describe connections to other active systems as well as more general theoretical/computational models of transport processes in complex environments. ",Active transport in complex environments,6,"['Interested in self-propelled living &amp; active systems? In <LINK>, we describe recent progress in the study of active transport in complex environments, focusing on two key biological systems—bacteria &amp; eukaryotic cells—as archetypes of active matter. (1/6)', 'Active transport is fundamentally interesting in biology, physics, &amp; engineering, and is important to biomedical, environmental, &amp; industrial processes. How do complexities such as geometric constraints, mechanical cues, and external stimuli influence transport? 
(2/6)', 'In this chapter to be published in a book by @RoySocChem press, we review research highlighting how such environmental factors can fundamentally alter cellular motility, hindering or promoting active transport in unexpected ways, &amp; giving rise to fascinating new behaviors. (3/6)', 'In parallel, we describe open questions and promising avenues for future research, and describe connections to other active systems &amp; more general theoretical/computational models of transport processes in complex environments. (4/6)', 'Our goal in writing this chapter was not to present a comprehensive overview of all the literature in the field, but rather, to highlight some active (pun intended) areas of research whose growth has been particularly rapid recently. (5/6)', 'It was a lot of fun to work on this with postdocs Alejandro Martínez-Calvo and Carolina Trenado-Yuste. Please RT/share with anyone who might be interested interested. As always, any and all feedback is welcome! (6/6)']",21,08,1495
349,86,1293335487483006976,1032007830386012160,Paul Dalba,"New long-period, transiting exoplanet paper announcement from this week: <LINK>. And this one involves a #Kepler system! #AlwaysMoreToFindFromKepler. Check it out: Back in 2010, Kepler spotted a transit event for the star KIC 5951458. It only happened once in Kepler's entire (4 yr!!) primary mission. It looked like it could be planetary and a few papers validated the existence a transiting exoplanet (Kepler-456b). <LINK> Based on the single transit, this was thought to be a super long-period giant transiting planet, with P&gt;1000 days (with understandably large error bars). These are exactly the kinds of planets I love, so I started getting RVs with HIRES at @keckobservatory. My colleagues and I quickly noticed a HUGE linear trend in the RVs, which made us think this was actually a grazing EB. C'est la via, right? Maybe not! Subtracting the trend revealed another signal, possibly from a giant planet (note the RV units). <LINK> We employed The Joker to investigate, which model RVs in cases with sparse data. Separate Joker runs exploring the separate signals suggested that either the possible giant planet or the stellar companion could have caused the event Kepler observed. #TransitWhodunnit In either case though, Kepler data, Keck data, and even a Gaia RV data point limit the properties of whichever companion caused the single ""transit."" At the end, the guilty culprit is left as an unsolved mystery, but we can solve this efficiently if we just wait a few years. This paper also includes the development of a new method of processing Keck-HIRES RVs using an already existing template. This is super useful for faint stars (#Kepler), which need &gt;=1 hr of time for a template. It yields RVs with precision 4-8 m/s for most types of stars! Thanks so much to my colleagues who helped to make this work possible!! @ExoCytherean @awhoward BJ Fulton and Howard Isaacson!",https://arxiv.org/abs/2008.02811,"Planetary systems that show single-transit events are a critical pathway to increasing the yield of long-period exoplanets from transit surveys. From the primary Kepler mission, KIC 5951458b (Kepler-456b) was thought to be a single-transit giant planet with an orbital period of 1310 days. However, radial velocity (RV) observations of KIC 5951458 from the HIRES instrument on the Keck telescope suggest that the system is far more complicated. To extract precise RVs for this $V\approx13$ star, we develop a novel matched-template technique that takes advantage of a broad library of template spectra acquired with HIRES. We validate this technique and measure its noise floor to be 4 - 8 m s$^{-1}$ (in addition to internal RV error) for most stars that would be targeted for precision RVs. For KIC 5951458, we detect a long-term RV trend that suggests the existence of a stellar companion with an orbital period greater than a few thousand days. We also detect an additional signal in the RVs that is possibly caused by a planetary or brown dwarf companion with mass in the range of 0.6 - 82 $M_{\rm J}$ and orbital period below a few thousand days. Curiously, from just the data on hand, it is not possible to determine which object caused the single ""transit"" event. We demonstrate how a modest set of RVs allows us to update the properties of this unusual system and predict the optimal timing for future observations. ","Multiple Explanations for the Single Transit of KIC 5951458 based on
Radial Velocity Measurements Extracted with a Novel Matched-template
Technique",8,"['New long-period, transiting exoplanet paper announcement from this week:\xa0<LINK>. And this one involves a #Kepler system! #AlwaysMoreToFindFromKepler. Check it out:', ""Back in 2010, Kepler spotted a transit event for the star KIC 5951458. It only happened once in Kepler's entire (4 yr!!) primary mission. It looked like it could be planetary and a few papers validated the existence a transiting exoplanet (Kepler-456b). https://t.co/wEY8fb9dPz"", 'Based on the single transit, this was thought to be a super long-period giant transiting planet, with P&gt;1000 days (with understandably large error bars). These are exactly the kinds of planets I love, so I started getting RVs with HIRES at @keckobservatory.', ""My colleagues and I quickly noticed a HUGE linear trend in the RVs, which made us think this was actually a grazing EB. C'est la via, right? Maybe not! Subtracting the trend revealed another signal, possibly from a giant planet (note the RV units). https://t.co/Al9mZwSmJj"", 'We employed The Joker to investigate, which model RVs in cases with sparse data. Separate Joker runs exploring the separate signals suggested that either the possible giant planet or the stellar companion could have caused the event Kepler observed. #TransitWhodunnit', 'In either case though, Kepler data, Keck data, and even a Gaia RV data point limit the properties of whichever companion caused the single ""transit."" At the end, the guilty culprit is left as an unsolved mystery, but we can solve this efficiently if we just wait a few years.', 'This paper also includes the development of a new method of processing Keck-HIRES RVs using an already existing template. This is super useful for faint stars (#Kepler), which need &gt;=1 hr of time for a template. It yields RVs with precision 4-8 m/s for most types of stars!', 'Thanks so much to my colleagues who helped to make this work possible!! @ExoCytherean @awhoward BJ Fulton and Howard Isaacson!']",20,08,1889
350,122,1257795319997202432,865627769631264768,Jamie Tayar,"New paper out on the arxiv today (<LINK>) led by graduate student Don Dixon (<LINK>) <LINK> This paper started with a question from @chargedcurrent about whether the UV excess he found in his black hole- red giant binary was a sign of mass accretion onto the black hole, or whether it could come from the rapidly rotating red giant. I didn't know, so Don started digging. The first question he had to figure out was what is the normal amount of UV emission from a giant star. Since the UV is notoriously hard to model, he used an empirical locus to relate the J-K color to the NUV-J color, and defined NUV excess as vertical height above that line. <LINK> He then used a sample of TGAS stars from the @APOGEEsurvey where we could measure some amount of rotational broadening to look for a relationship between rotation rate and NUV excess. Excitingly, they seemed very correlated! <LINK> Now rotation-activity people usually think about this in the context of Rossby number, and that looks good too! We have a rising activity regime for slow rotators, a saturated regime for fast rotators, and maybe even a few super saturated, less active points on the far left. <LINK> Our giant relationship (black) doesn't exactly match what is seen in the UV for M dwarfs (yellow and points), but it does seem to have the right shape, with saturation happening around the same place. <LINK> This suggests the rotation/convection basis of the activity relationship doesn't go away for giants (😅phew all my models of stellar spin down implicitly assume this), and that there's some deep physical similarities between the chromospheres in dwarfs and giants Finally, Don was able to answer @chargedcurrent 's original question- ""is the UV excess in his system black hole accretion?"" with a definitive no! This system is exactly as NUV active as we would expect given its rotation rate! <LINK> Don did a really great job leading this project, and I also want to give a shout out to Keivan Stassun, my co mentor on this project, and to the referee, who was really helpful in refining what we found.",https://arxiv.org/abs/2005.00577,"Main sequence stars exhibit a clear rotation-activity relationship, in which rapidly rotating stars drive strong chromospheric/coronal ultraviolet and X-ray emission. While the vast majority of red giant stars are inactive, a few percent exhibit strong ultraviolet emission. Here we use a sample of 133 red giant stars observed by SDSS APOGEE and GALEX to demonstrate an empirical relationship between NUV excess and rotational velocity (vsini). Beyond this simple relationship, we find that NUV excess also correlates with rotation period and with Rossby number in a manner that shares broadly similar trends to those found in M dwarfs, including activity saturation among rapid rotators. Our data also suggest that the most extremely rapidly rotating giants may exhibit so-called ""super-saturation"", which could be caused by centrifugal stripping of these stars rotating at a high fraction of breakup speed. As an example application of our empirical rotation-activity relation, we demonstrate that the NUV emission observed from a recently reported system comprising a red giant with a black hole companion is fully consistent with arising from the rapidly rotating red giant in that system. Most fundamentally, our findings suggest a common origin of chromospheric activity in rotation and convection for cool stars from main sequence to red giant stages of evolution. 
",Rotationally Driven Ultraviolet Emission of Red Giant Stars,9,"['New paper out on the arxiv today (<LINK>) led by graduate student Don Dixon (<LINK>) <LINK>', ""This paper started with a question from @chargedcurrent about whether the UV excess he found in his black hole- red giant binary was a sign of mass accretion onto the black hole, or whether it could come from the rapidly rotating red giant. I didn't know, so Don started digging."", 'The first question he had to figure out was what is the normal amount of UV emission from a giant star. Since the UV is notoriously hard to model, he used an empirical locus to relate the J-K color to the NUV-J color, and defined NUV excess as vertical height above that line. https://t.co/PCVfqFZfVa', 'He then used a sample of TGAS stars from the @APOGEEsurvey where we could measure some amount of rotational broadening to look for a relationship between rotation rate and NUV excess. Excitingly, they seemed very correlated! https://t.co/ZNgLi1DNPH', 'Now rotation-activity people usually think about this in the context of Rossby number, and that looks good too! We have a rising activity regime for slow rotators, a saturated regime for fast rotators, and maybe even a few super saturated, less active points on the far left. https://t.co/C1F2cviHOJ', ""Our giant relationship (black) doesn't exactly match what is seen in the UV for M dwarfs (yellow and points), but it does seem to have the right shape, with saturation happening around the same place. https://t.co/0t1zSOMFFk"", ""This suggests the rotation/convection basis of the activity relationship doesn't go away for giants (😅phew all my models of stellar spin down implicitly assume this), and that there's some deep physical similarities between the chromospheres in dwarfs and giants"", 'Finally, Don was able to answer @chargedcurrent \'s original question- ""is the UV excess in his system black hole accretion?"" with a definitive no! This system is exactly as NUV active as we would expect given its rotation rate! https://t.co/f2kwmtGY0h', 'Don did a really great job leading this project, and I also want to give a shout out to Keivan Stassun, my co mentor on this project, and to the referee, who was really helpful in refining what we found.']",20,05,2080
351,109,1437975500341714949,1658162341,Narayanan Rengaswamy,"First paper out of postdoc @azengineering out! See <LINK>. We propose a QEC based GHZ distillation protocol, inspired by the Bell pair distillation of @markwilde in <LINK>. Byproduct: new method to generate logical Pauli operators for codes. <LINK> Joint work with @rainarocks @nithinitzme and Prof. Bane Vasić. Implementation available online: <LINK> Besides the main protocol, we also discuss variations that might be more practical for certain network topologies. We work out a detailed example with a 3 qubit code to show the subtleties in the steps of the protocol. This example should help develop protocol variations. The key step to the protocol is a new property of GHZ states that forms our main result in Theorem 6. It considers stabilizer measurements on one subsystem and shows the equivalent code on the remaining two subsystems. It builds on an ""extended"" transpose trick from Bell pairs.",https://arxiv.org/abs/2109.06248,"Entanglement distillation is a well-studied problem in quantum information, where one typically starts with $n$ noisy Bell pairs and distills $k$ Bell pairs of higher fidelity. While distilling Bell pairs is the canonical setting, it is important to study the distillation of multipartite entangled states because these can be useful for realizing distributed algorithms on quantum networks. In this paper, we study the distillation of GHZ states using quantum error correcting codes (QECCs). Using the stabilizer formalism, we begin by explaining the QECC-based Bell pair distillation protocol in arXiv:0708.3699, which relies particularly on the transpose symmetry between Alice's and Bob's qubits in Bell states. Extending this idea, we show that, given $n$ GHZ states, performing a matrix on Alice's qubits is equivalent to performing a ""stretched"" version of the transpose of the matrix on the qubits of Bob and Charlie. We call this mapping to the stretched version of the matrix the GHZ-map, and show that it is an algebra homomorphism. Using this property, we show that Alice projecting her qubits onto an $[[n,k]]$ stabilizer code implies the simultaneous projection of Bob's and Charlie's qubits onto an induced $[[2n,k]]$ stabilizer code. Guided by this insight, we develop a GHZ distillation protocol based on local operations and classical communication that uses any stabilizer code. Inspired by stabilizer measurements on GHZ states, we also develop a new algorithm to generate logical Pauli operators of any stabilizer code and use it in the protocol. Since quantum codes with finite rate and almost linear minimum distance have recently been discovered, this paper paves the way for high-rate high-output-fidelity GHZ distillation. We provide simulation results on the $5$-qubit perfect code to emphasize the importance of the placement of a certain local Clifford operation in the protocol. ",Distilling GHZ States using Stabilizer Codes,4,"['First paper out of postdoc @azengineering out! See <LINK>. We propose a QEC based GHZ distillation protocol, inspired by the Bell pair distillation of @markwilde in <LINK>. Byproduct: new method to generate logical Pauli operators for codes. <LINK>', 'Joint work with @rainarocks @nithinitzme and Prof. Bane Vasić. Implementation available online: https://t.co/oyd2cEZs9i', 'Besides the main protocol, we also discuss variations that might be more practical for certain network topologies. 
We work out a detailed example with a 3 qubit code to show the subtleties in the steps of the protocol. This example should help develop protocol variations.', 'The key step to the protocol is a new property of GHZ states that forms our main result in Theorem 6. It considers stabilizer measurements on one subsystem and shows the equivalent code on the remaining two subsystems. It builds on an ""extended"" transpose trick from Bell pairs.']",21,09,903
352,213,1411958029419954177,2449351549,Suryanarayana Maddu,Our preprint for training neural networks to solve multi-scale PDEs is out. <LINK> We propose novel strategies for multi-objective optimization that fixes learning pathologies in physics-informed neural networks @MOSAICgroup1 @microbionaut @CASUSscience <LINK>,http://arxiv.org/abs/2107.00940,"We characterize and remedy a failure mode that may arise from multi-scale dynamics with scale imbalances during training of deep neural networks, such as Physics Informed Neural Networks (PINNs). PINNs are popular machine-learning templates that allow for seamless integration of physical equation models with data. Their training amounts to solving an optimization problem over a weighted sum of data-fidelity and equation-fidelity objectives. Conflicts between objectives can arise from scale imbalances, heteroscedasticity in the data, stiffness of the physical equation, or from catastrophic interference during sequential training. We explain the training pathology arising from this and propose a simple yet effective inverse-Dirichlet weighting strategy to alleviate the issue. We compare with Sobolev training of neural networks, providing the baseline of analytically $\boldsymbol{\epsilon}$-optimal training. We demonstrate the effectiveness of inverse-Dirichlet weighting in various applications, including a multi-scale model of active turbulence, where we show orders of magnitude improvement in accuracy and convergence over conventional PINN training. For inverse modeling using sequential training, we find that inverse-Dirichlet weighting protects a PINN against catastrophic forgetting. ","Inverse-Dirichlet Weighting Enables Reliable Training of Physics
Informed Neural Networks",1,['Our preprint for training neural networks to solve multi-scale PDEs is out. <LINK>\n\nWe propose novel strategies for multi-objective optimization that fixes learning pathologies in physics-informed neural networks @MOSAICgroup1 @microbionaut @CASUSscience <LINK>'],21,07,260
353,36,1443004288129667073,15163166,Sherri Rose,"Our new paper, led by @IDegtiar, develops machine learning estimators for generalizability with observational & randomized data <LINK> These methods were motivated by our interest in assessing plan-specific effects on 💲 in Medicaid Code <LINK> <LINK> @BhramarBioStat @IDegtiar @StanfordHP @timothyjlayton @jwswallace @StanfordHAI @StanfordMed @HarvardBiostats @Stanford @MathematicaNow 👏👏👏 @IDegtiar just joined @MathematicaNow following her doctoral graduation from @HarvardBiostats!",https://arxiv.org/abs/2109.13288,"While much of the causal inference literature has focused on addressing internal validity biases, both internal and external validity are necessary for unbiased estimates in a target population of interest. However, few generalizability approaches exist for estimating causal quantities in a target population when the target population is not well-represented by a randomized study but is reflected when additionally incorporating observational data. To generalize to a target population represented by a union of these data, we propose a class of novel conditional cross-design synthesis estimators that combine randomized and observational data, while addressing their respective biases. The estimators include outcome regression, propensity weighting, and double robust approaches. All use the covariate overlap between the randomized and observational data to remove potential unmeasured confounding bias. We apply these methods to estimate the causal effect of managed care plans on health care spending among Medicaid beneficiaries in New York City. ","Conditional Cross-Design Synthesis Estimators for Generalizability in
Medicaid",2,"['Our new paper, led by @IDegtiar, develops machine learning estimators for generalizability with observational &amp; randomized data <LINK>\n\nThese methods were motivated by our interest in assessing plan-specific effects on 💲 in Medicaid\n\nCode <LINK> <LINK>', '@BhramarBioStat @IDegtiar @StanfordHP @timothyjlayton @jwswallace @StanfordHAI @StanfordMed @HarvardBiostats @Stanford @MathematicaNow 👏👏👏 @IDegtiar just joined @MathematicaNow following her doctoral graduation from @HarvardBiostats!']",21,09,484
354,247,1434905466656727046,794303906901880840,Ozan Oktay,"Check out our recent study exploring why data quality matters -- its impact on (I) model's predictive performance, (II) model evaluation and (III) selection for deployment purposes. Here we explore resource-effective solutions to tackle these challenges: <LINK> <LINK> <LINK>",https://arxiv.org/abs/2109.00574,"Imperfections in data annotation, known as label noise, are detrimental to the training of machine learning models and have an often-overlooked confounding effect on the assessment of model performance. Nevertheless, employing experts to remove label noise by fully re-annotating large datasets is infeasible in resource-constrained settings, such as healthcare. This work advocates for a data-driven approach to prioritising samples for re-annotation - which we term ""active label cleaning"". We propose to rank instances according to estimated label correctness and labelling difficulty of each sample, and introduce a simulation framework to evaluate relabelling efficacy. Our experiments on natural images and on a new medical imaging benchmark show that cleaning noisy labels mitigates their negative impact on model training, evaluation, and selection. Crucially, the proposed active label cleaning enables correcting labels up to 4 times more effectively than typical random selection in realistic conditions, making better use of experts' valuable time for improving dataset quality. ","Active label cleaning for improved dataset quality under resource
constraints",1,"[""Check out our recent study exploring why data quality matters -- its impact on (I) model's predictive performance, (II) model evaluation and (III) selection for deployment purposes.\n\nHere we explore resource-effective solutions to tackle these challenges: <LINK> <LINK> <LINK>""]",21,09,275
355,196,1260035034053898245,1199782808501153793,Andrew Lampinen,"How can deep learning models flexibly reuse their knowledge? How can they adapt to new tasks zero-shot, as humans can? In our new preprint (<LINK>), we propose a new approach based on learning to transform task representations: meta-mapping. Preview in thread: Our approach can make drastic adaptations zero-shot, like switching from winning at (simplified) poker to trying to lose. It can allow a visual classification system to recognize new concepts, and can adapt a model-free reinforcement learning to new tasks, without data from them. It accomplishes this without prior domain knowledge, based only on the relationships between tasks. Specifically, it learns basic task representations, e.g. for poker, via meta-learning. It also learns meta-mappings, higher order tasks which transform these basic task reps. For example, it might learn a ""lose"" meta-mapping, from the relationship between winning and losing at games like blackjack. This meta-mapping could then be applied to the model's representation of poker, in order to lose at poker zero-shot. We show this method can allow 80-90% performance, zero-shot, in domains ranging from polynomial regression to visual classification to reinforcement learning, outperforming baselines (sometimes substantially). It even exhibits some intriguing signatures of being more systematic. This zero-shot adaptation then allows the system to master the new tasks much more efficiently. It makes an order of magnitude fewer mistakes (cumulative loss) on the way to mastering the tasks than the next-best approach we considered. We implement this all in a parsimonious, homoiconic architecture that reuses the same networks for basic tasks and meta-mappings. This improves generalization! We also show our approach works with task representations constructed from either examples of the task or language. I think that meta-mapping may offer a useful concept for building more flexible artificial intelligence systems, and better cognitive models. Thanks for making it through this thread! There's a lot more detail, experiments, and related work/implications in the paper, please check it out! I hope it will be interesting and understandable to researchers in both AI/ML and cognitive science. <LINK>",https://arxiv.org/abs/2005.04318,"An important aspect of intelligence is the ability to adapt to a novel task without any direct experience (zero-shot), based on its relationship to previous tasks. Humans can exhibit this cognitive flexibility. By contrast, models that achieve superhuman performance in specific tasks often fail to adapt to even slight task alterations. To address this, we propose a general computational framework for adapting to novel tasks based on their relationship to prior tasks. We begin by learning vector representations of tasks. To adapt to new tasks, we propose meta-mappings, higher-order tasks that transform basic task representations. We demonstrate the effectiveness of this framework across a wide variety of tasks and computational paradigms, ranging from regression to image classification and reinforcement learning. We compare to both human adaptability and language-based approaches to zero-shot learning. Across these domains, meta-mapping is successful, often achieving 80-90% performance, without any data, on a novel task, even when the new task directly contradicts prior experience. 
We further show that meta-mapping can not only generalize to new tasks via learned relationships, but can also generalize using novel relationships unseen during training. Finally, using meta-mapping as a starting point can dramatically accelerate later learning on a new task, and reduce learning time and cumulative error substantially. Our results provide insight into a possible computational basis of intelligent adaptability and offer a possible framework for modeling cognitive flexibility and building more flexible artificial intelligence systems. ",Transforming task representations to perform novel tasks,9,"['How can deep learning models flexibly reuse their knowledge? How can they adapt to new tasks zero-shot, as humans can? In our new preprint (<LINK>), we propose a new approach based on learning to transform task representations: meta-mapping. Preview in thread:', 'Our approach can make drastic adaptations zero-shot, like switching from winning at (simplified) poker to trying to lose. It can allow a visual classification system to recognize new concepts, and can adapt a model-free reinforcement learning to new tasks, without data from them.', 'It accomplishes this without prior domain knowledge, based only on the relationships between tasks. Specifically, it learns basic task representations, e.g. for poker, via meta-learning. It also learns meta-mappings, higher order tasks which transform these basic task reps.', 'For example, it might learn a ""lose"" meta-mapping, from the relationship between winning and losing at games like blackjack. This meta-mapping could then be applied to the model\'s representation of poker, in order to lose at poker zero-shot.', 'We show this method can allow 80-90% performance, zero-shot, in domains ranging from polynomial regression to visual classification to reinforcement learning, outperforming baselines (sometimes substantially). It even exhibits some intriguing signatures of being more systematic.', 'This zero-shot adaptation then allows the system to master the new tasks much more efficiently. It makes an order of magnitude fewer mistakes (cumulative loss) on the way to mastering the tasks than the next-best approach we considered.', 'We implement this all in a parsimonious, homoiconic architecture that reuses the same networks for basic tasks and meta-mappings. This improves generalization! We also show our approach works with task representations constructed from either examples of the task or language.', 'I think that meta-mapping may offer a useful concept for building more flexible artificial intelligence systems, and better cognitive models.', ""Thanks for making it through this thread! There's a lot more detail, experiments, and related work/implications in the paper, please check it out! I hope it will be interesting and understandable to researchers in both AI/ML and cognitive science. https://t.co/HRgMsycNQt""]",20,05,2248
356,51,1009088433418067970,3885912072,Marshall Johnson,"New paper alert! <LINK> In this paper we report the discovery, confirmation, and characterization of two new giant planets using the @NASAKepler K2 extended mission. EPIC 246911830 b is a hot Jupiter transiting a late F star. Although the host star is relatively faint, there are a number of interesting things about the system. First, there is a likely stellar companion, probably a mid-M dwarf at a projected separation of about 400 AU. <LINK> Second, it is the first K2 hot Jupiter with a secondary eclipse detected in the K2 light curve. This allowed us to show that the planet is relatively reflective--it has a geometric albedo of ~0.2. <LINK> The other planet is EPIC 201498078 b, a warm Saturn transiting a relatively bright (V=10.5), 8.8-billion-year-old G star at the main sequence turn-off. With an 11.6-day orbit, this is one of the brighter K2 host stars with a long-period planet. <LINK> These discoveries were made by the KESPRINT collaboration, & wouldn't have been possible without a lot of great work by other members of the collaboration (most aren't on Twitter, but including @oscaribv & @vaneylenv ), & thanksto @skyientist & @WtnNori for follow-up lightcurves And thanks to @justesen for all of his work on the stellar characterization!",https://arxiv.org/abs/1806.06099,"We present the discovery and confirmation of two new transiting giant planets from the Kepler extended mission K2. K2-260 b is a hot Jupiter transiting a $V=12.7$ F6V star in K2 Field 13, with a mass and radius of $M_{\star}=1.39_{-0.06}^{+0.05} M_{\odot}$ and $R_{\star}=1.69 \pm 0.03 R_{\odot}$. The planet has an orbital period of $P=2.627$ days, and a mass and radius of $M_P=1.42^{+0.31}_{-0.32} M_J$ and $R_P=1.552^{+0.048}_{-0.057} R_J$. This is the first K2 hot Jupiter with a detected secondary eclipse in the Kepler bandpass, with a depth of $71 \pm 15$ ppm, which we use to estimate a geometric albedo of $A_g\sim0.2$. We also detected a candidate stellar companion at 0.6"" from K2-260; we find that it is very likely physically associated with the system, in which case it would be an M5-6V star at a projected separation of $\sim400$ AU. K2-261 b is a warm Saturn transiting a bright ($V=10.5$) G7IV/V star in K2 Field 14. The host star is a metal-rich ([Fe/H]$=0.36 \pm 0.06$), mildly evolved $1.10_{-0.02}^{+0.01} M_{\odot}$ star with $R_{\star}=1.65 \pm 0.04 R_{\odot}$. Thanks to its location near the main sequence turn-off, we can measure a relatively precise age of $8.8_{-0.3}^{+0.4}$ Gyr. The planet has $P=11.633$ days, $M_P=0.223 \pm 0.031 M_J$, and $R_P=0.850^{+0.026}_{-0.022} R_J$, and its orbit is eccentric ($e=0.39 \pm 0.15$). Its brightness and relatively large transit depth make this one of the best known warm Saturns for follow-up observations to further characterize the planetary system. ","K2-260 b: a hot Jupiter transiting an F star, and K2-261 b: a warm
Saturn around a bright G star",6,"['New paper alert! <LINK> In this paper we report the discovery, confirmation, and characterization of two new giant planets using the @NASAKepler K2 extended mission.', 'EPIC 246911830 b is a hot Jupiter transiting a late F star. Although the host star is relatively faint, there are a number of interesting things about the system. First, there is a likely stellar companion, probably a mid-M dwarf at a projected separation of about 400 AU. https://t.co/yjPlh6Rxkl', 'Second, it is the first K2 hot Jupiter with a secondary eclipse detected in the K2 light curve. This allowed us to show that the planet is relatively reflective--it has a geometric albedo of ~0.2. https://t.co/1J3J2iZLK7', 'The other planet is EPIC 201498078 b, a warm Saturn transiting a relatively bright (V=10.5), 8.8-billion-year-old G star at the main sequence turn-off. With an 11.6-day orbit, this is one of the brighter K2 host stars with a long-period planet. https://t.co/Fn71cQ8exn', ""These discoveries were made by the KESPRINT collaboration, &amp; wouldn't have been possible without a lot of great work by other members of the collaboration (most aren't on Twitter, but including @oscaribv &amp; @vaneylenv ), &amp; thanksto @skyientist &amp; @WtnNori for follow-up lightcurves"", 'And thanks to @justesen for all of his work on the stellar characterization!']",18,06,1258
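The thread above reports a geometric albedo of ~0.2 from the 71 ppm secondary eclipse. As a rough consistency check using only numbers quoted in the abstract: if the eclipse depth were pure reflected light (a simplifying assumption of this sketch, not the paper's full analysis), then A_g ≈ depth / (R_p/a)^2, with the semi-major axis a taken from Kepler's third law.
```python
import math

# Values quoted in the abstract for K2-260 b and its host (approximate).
P_days  = 2.627       # orbital period [days]
M_star  = 1.39        # stellar mass [solar masses]
R_p_jup = 1.552       # planet radius [Jupiter radii]
depth   = 71e-6       # secondary-eclipse depth in the Kepler band

# Constants (SI).
G, M_sun, R_jup, AU = 6.674e-11, 1.989e30, 7.1492e7, 1.496e11

# Semi-major axis from Kepler's third law.
P = P_days * 86400.0
a = (G * M_star * M_sun * P**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

# Geometric albedo if the eclipse depth were entirely reflected light.
A_g = depth / (R_p_jup * R_jup / a) ** 2
print(f"a = {a / AU:.3f} AU, A_g ~ {A_g:.2f}")   # roughly 0.04 AU and A_g ~ 0.2
```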
357,78,1338705690852900867,15520108,Rudi Podgornik 岳儒迪,"Our new paper ""Electrostatic interaction between SARS-CoV-2 virus and charged electret fibre” explains the detailed nature of the nanoscale interaction between the virus and the electret fibre in N95 filters. <LINK> Here is the local charge on the spike proteins of an isolated virus and a virus close to the electret (polypropylene) fibre. The charge patters are completely modified by the coupling between electric potential and S-protein amino acid dissociation equilibrium. <LINK> Almost any kind of face mask offers some protection against particles and pathogens of different sizes, but the most efficient ones make use of a layered structure where one or more layers are electrically charged. These are the electret layers in the filter material. The exact nature of electrostatic capture with respect to both the charge on the particles and the electret fibres as well as the effect of immediate environment remains unclear. This is what our work elucidates by using a detailed spike protein charge regulation model. We show how pH and salt concentration drastically change both the scale and the sign of the virus electret interaction. Surprisingly, configuration of only a few proximal spike proteins is as important for the strength of the interaction as total number on the virus. We formulated the general understanding of efficient virus filtration mechanisms on th enanoscale level and specifically the nature of electrostatic interactions in an electret filter. The crucial element appears to be amino acid charge regulation. A snapshot of the electrostatic potential around the spike protein studded SARS-SoV2 virus. <LINK> @fpaillus Not really. You put the fibers into the field of corona discharge and apparently there is some charge trapping that leads to an electret (same as magnet but for charge) state. The charge is very non-uniformly distributed and macroscopic charge is close to 0. @RalfBlossey Thanks. Much appreciated 😀",https://arxiv.org/abs/2012.07160,"While almost any kind of face mask offers some protection against particles and pathogens of different sizes, the most efficient ones make use of a layered structure where one or more layers are electrically charged. This electret layer is essential to efficient filtration of difficult-to-capture small particles, yet the exact nature of electrostatic capture with respect to both the charge on the particles and the electret fibres as well as the effect of immediate environment remains unclear. Here, we explore in detail the electrostatic interaction between the surface of a single charged electret fibre and a model of SARS-CoV-2 virus. Using Poisson-Boltzmann electrostatics coupled to a detailed spike protein charge regulation model, we show how pH and salt concentration drastically change both the scale and the sign of the interaction. Furthermore, the configuration of the few spike proteins closest to the electret fibre turns out to be as important for the strength of the interaction as their total number on the virus, a direct consequence of spike protein charge regulation. The results of our work elucidate the details of virus electrostatics and contribute to the general understanding of efficient virus filtration mechanisms. ","Electrostatic interaction between SARS-CoV-2 virus and charged electret
fibre",9,"['Our new paper ""Electrostatic interaction between SARS-CoV-2 virus and charged electret fibre” explains the detailed nature of the nanoscale interaction between the virus and the electret fibre in N95 filters. <LINK>', 'Here is the local charge on the spike proteins of an isolated virus and a virus close to the electret (polypropylene) fibre. The charge patters are completely modified by the coupling between electric potential and S-protein amino acid dissociation equilibrium. https://t.co/1HuYhsyDVb', 'Almost any kind of face mask offers some protection against particles and pathogens of different sizes, but the most efficient ones make use of a layered structure where one or more layers are electrically charged. These are the electret layers in the filter material.', 'The exact nature of electrostatic capture with respect to both the charge on the particles and the electret fibres as well as the effect of immediate environment remains unclear. This is what our work elucidates by using a detailed spike protein charge regulation model.', 'We show how pH and salt concentration drastically change both the scale and the sign of the virus electret interaction. Surprisingly, configuration of only a few proximal spike proteins is as important for the strength of the interaction as total number on the virus.', 'We formulated the general understanding of efficient virus filtration mechanisms on th enanoscale level and specifically the nature of electrostatic interactions in an electret filter. The crucial element appears to be amino acid charge regulation.', 'A snapshot of the electrostatic potential around the spike protein studded SARS-SoV2 virus. https://t.co/T1zdxomU1b', '@fpaillus Not really. You put the fibers into the field of corona discharge and apparently there is some charge trapping that leads to an electret (same as magnet but for charge) state. The charge is very non-uniformly distributed and macroscopic charge is close to 0.', '@RalfBlossey Thanks. Much appreciated 😀']",20,12,1949
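The thread above stresses that the spike-protein charge pattern is set by the coupling between the local electrostatic potential and amino-acid dissociation equilibria. Below is a single-site charge-regulation toy in that spirit (a Henderson–Hasselbalch relation with a Boltzmann shift of the local proton concentration); the pKa, pH and potential values are illustrative only, and the paper's actual model solves Poisson–Boltzmann electrostatics self-consistently over the whole spike geometry.
```python
import math

def acidic_site_charge(pH, psi_volts, pKa=4.0, T=298.0):
    """Mean charge (in units of e) of a single acidic residue whose dissociation
    responds to the local electrostatic potential psi. A negative psi raises the
    local H+ concentration and therefore suppresses dissociation."""
    kT_over_e = 8.617e-5 * T                      # thermal voltage [V]
    # Local pH shift: [H+]_local = [H+]_bulk * exp(-e*psi / kT)
    shift = psi_volts / (kT_over_e * math.log(10.0))
    frac_dissociated = 1.0 / (1.0 + 10.0 ** (pKa - pH - shift))
    return -frac_dissociated

for psi in (-0.05, 0.0, +0.05):                   # local potentials of -/0/+50 mV
    print(psi, round(acidic_site_charge(pH=5.0, psi_volts=psi), 3))
```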
358,76,1138736449820475392,216729597,Marcel S. Pawlowski,"New paper on the arXiv by Sean Fillingham, one of the amazing grad students of @UCIPhysAstro. I had the honor to be a part of it together with @cooperUCI @AstronoMouse_ @MBKplus @jbprime @SheaGKosmo and @coralrosew. <LINK> <LINK> Sean used Gaia DR2 proper motions for the Milky Way satellite galaxies to constrain their infall times, by matching them with sub-halos in the Phat ELVIS simulations that have similar binding energies and distances from their host. <LINK> He compares the infall times of the satellite galaxies with their quenching times from star formation histories, and calculates the quenching timescale, i.e. for how long after their infall do the satellites continue to form stars. Turns out most classical satellites (M* ≥ 10^5 Msun) are quenched quickly after infall, consistent with environmental quenching (e.g. ram-pressure stripping). The least massive satellites, in contrast, were quenched *before* infall, in line with quenching due to reionisation. <LINK> Looking at the orbits of sats with M*≥10^5 Msun, 2 of the 3 with the longest quenching timescales have the largest percenters and low eccentricities. Thus they have likely experienced the lowest ram-pressure stripping, letting them keep gas & form stars for longer after infall. <LINK>",https://arxiv.org/abs/1906.04180v1,"Observations of low-mass satellite galaxies in the nearby Universe point towards a strong dichotomy in their star-forming properties relative to systems with similar mass in the field. Specifically, satellite galaxies are preferentially gas poor and no longer forming stars, while their field counterparts are largely gas rich and actively forming stars. Much of the recent work to understand this dichotomy has been statistical in nature, determining not just that environmental processes are most likely responsible for quenching these low-mass systems but also that they must operate very quickly after infall onto the host system, with quenching timescales $\lesssim 2~ {\rm Gyr}$ at ${M}_{\star} \lesssim 10^{8}~{\rm M}_{\odot}$. This work utilizes the newly-available $Gaia$ DR2 proper motion measurements along with the Phat ELVIS suite of high-resolution, cosmological, zoom-in simulations to study low-mass satellite quenching around the Milky Way on an object-by-object basis. We derive constraints on the infall times for $37$ of the known low-mass satellite galaxies of the Milky Way, finding that $\gtrsim~70\%$ of the `classical' satellites of the Milky Way are consistent with the very short quenching timescales inferred from the total population in previous works. The remaining classical Milky Way satellites have quenching timescales noticeably longer, with $\tau_{\rm quench} \sim 6 - 8~{\rm Gyr}$, highlighting how detailed orbital modeling is likely necessary to understand the specifics of environmental quenching for individual satellite galaxies. Additionally, we find that the $6$ ultra-faint dwarf galaxies with publicly available $HST$-based star-formation histories are all consistent with having their star formation shut down prior to infall onto the Milky Way -- which, combined with their very early quenching times, strongly favors quenching driven by reionization. ","] Characterizing the Infall Times and Quenching Timescales of Milky Way
Satellites with $Gaia$ Proper Motions",5,"['New paper on the arXiv by Sean Fillingham, one of the amazing grad students of @UCIPhysAstro. I had the honor to be a part of it together with @cooperUCI @AstronoMouse_ @MBKplus @jbprime @SheaGKosmo and @coralrosew. <LINK> <LINK>', 'Sean used Gaia DR2 proper motions for the Milky Way satellite galaxies to constrain their infall times, by matching them with sub-halos in the Phat ELVIS simulations that have similar binding energies and distances from their host. https://t.co/2q0zh9tayi', 'He compares the infall times of the satellite galaxies with their quenching times from star formation histories, and calculates the quenching timescale, i.e. for how long after their infall do the satellites continue to form stars.', 'Turns out most classical satellites (M* ≥ 10^5 Msun) are quenched quickly after infall, consistent with environmental quenching (e.g. ram-pressure stripping). The least massive satellites, in contrast, were quenched *before* infall, in line with quenching due to reionisation. https://t.co/tHxvKDWaXV', 'Looking at the orbits of sats with M*≥10^5 Msun, 2 of the 3 with the longest quenching timescales have the largest percenters and low eccentricities. Thus they have likely experienced the lowest ram-pressure stripping, letting them keep gas &amp; form stars for longer after infall. https://t.co/QT4HqZVNIk']",19,06,1270
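The thread above describes constraining each satellite's infall time by matching its Gaia-derived orbit to simulated subhaloes with similar binding energy and distance from the host. Below is a schematic version of that matching step using a made-up stand-in for the simulation catalogue; the column choices and the simple fractional tolerance are assumptions, not the paper's selection criteria.
```python
import numpy as np

rng = np.random.default_rng(1)

# Fake subhalo catalogue: binding energy, distance from host, infall time [Gyr ago].
sub_E    = rng.uniform(0.2, 1.0, size=5000)   # arbitrary energy units
sub_r    = rng.uniform(20, 300, size=5000)    # kpc
sub_tinf = rng.uniform(0, 12, size=5000)      # Gyr

def infall_time_posterior(E_obs, r_obs, frac_tol=0.1):
    """Infall times of all subhaloes whose binding energy and host distance lie
    within a fractional tolerance of the observed (Gaia-derived) values."""
    sel = (np.abs(sub_E - E_obs) < frac_tol * E_obs) & \
          (np.abs(sub_r - r_obs) < frac_tol * r_obs)
    return sub_tinf[sel]

t_inf = infall_time_posterior(E_obs=0.6, r_obs=80.0)
print(len(t_inf), np.percentile(t_inf, [16, 50, 84]))
```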
359,249,1320626674833235969,1020920111476236288,Haggai Maron,"New paper! Most existing NN architectures for processing microphone arrays deal with fixed arrays. We present an architecture for microphone arrays on which no prior knowledge is presumed and demonstrate its applicability to speech dereverberation. <LINK> <LINK> We harness the DSS framework and suggest an architecture that enhances the reverberant log-spectrum. Our experiments show that the proposed position-agnostic setup performs comparably with the position-aware framework and sometimes slightly better, even with fewer microphones. In addition, it substantially improves performance over a single microphone architecture. Led by @YochaiYemini, with @EthanFetaya and Sharon Gannot Fix: led by @YeminiYochai",https://arxiv.org/abs/2010.11875,"Neural networks (NNs) have been widely applied in speech processing tasks, and, in particular, those employing microphone arrays. Nevertheless, most existing NN architectures can only deal with fixed and position-specific microphone arrays. In this paper, we present an NN architecture that can cope with microphone arrays whose number and positions of the microphones are unknown, and demonstrate its applicability in the speech dereverberation task. To this end, our approach harnesses recent advances in deep learning on set-structured data to design an architecture that enhances the reverberant log-spectrum. We use noisy and noiseless versions of a simulated reverberant dataset to test the proposed architecture. Our experiments on the noisy data show that the proposed scene-agnostic setup outperforms a powerful scene-aware framework, sometimes even with fewer microphones. With the noiseless dataset we show that, in most cases, our method outperforms the position-aware network as well as the state-of-the-art weighted linear prediction error (WPE) algorithm. ",Scene-Agnostic Multi-Microphone Speech Dereverberation,4,"['New paper! Most existing NN architectures for processing microphone arrays deal with fixed arrays. \nWe present an architecture for microphone arrays on which no prior knowledge is presumed and demonstrate its applicability to speech dereverberation. \n<LINK> <LINK>', 'We harness the DSS framework and suggest an architecture that enhances the reverberant log-spectrum. Our experiments show that the proposed position-agnostic setup performs comparably with the position-aware framework and sometimes slightly better, even with fewer microphones.', 'In addition, it substantially improves performance over a single microphone architecture.\nLed by @YochaiYemini, with @EthanFetaya and Sharon Gannot', 'Fix: led by @YeminiYochai']",20,10,714
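The record above hinges on an architecture that copes with an unknown number of microphones at unknown positions by using deep learning on set-structured data. A minimal PyTorch sketch of that ingredient: a shared per-microphone encoder followed by a symmetric (mean) pooling, so the output does not depend on how many microphones there are or how they are ordered. Layer sizes and the plain mean pooling are illustrative; the paper's network operating on reverberant log-spectra is more elaborate.
```python
import torch
import torch.nn as nn

class SetDereverbNet(nn.Module):
    """Toy scene-agnostic enhancer: any number of microphone channels in,
    one enhanced feature frame out (invariant to mic count and ordering)."""
    def __init__(self, n_freq=257, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_freq, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_freq))

    def forward(self, x):              # x: (batch, n_mics, n_freq) log-spectra
        h = self.encoder(x)            # encode each microphone independently
        h = h.mean(dim=1)              # symmetric pooling over the microphone set
        return self.decoder(h)         # enhanced (dereverberated) log-spectrum

net = SetDereverbNet()
print(net(torch.randn(4, 3, 257)).shape)   # works for 3 microphones ...
print(net(torch.randn(4, 7, 257)).shape)   # ... and equally for 7
```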
360,24,1222268281932828672,989251872107085824,Quoc Le,"New paper: Towards a Human-like Open-Domain Chatbot. Key takeaways: 1. ""Perplexity is all a chatbot needs"" ;) 2. We're getting closer to a high-quality chatbot that can chat about anything Paper: <LINK> Blog: <LINK> <LINK> @xpearhead @lmthang You can find some sample conversations with the bot here: <LINK>",https://arxiv.org/abs/2001.09977,"We present Meena, a multi-turn open-domain chatbot trained end-to-end on data mined and filtered from public domain social media conversations. This 2.6B parameter neural network is simply trained to minimize perplexity of the next token. We also propose a human evaluation metric called Sensibleness and Specificity Average (SSA), which captures key elements of a human-like multi-turn conversation. Our experiments show strong correlation between perplexity and SSA. The fact that the best perplexity end-to-end trained Meena scores high on SSA (72% on multi-turn evaluation) suggests that a human-level SSA of 86% is potentially within reach if we can better optimize perplexity. Additionally, the full version of Meena (with a filtering mechanism and tuned decoding) scores 79% SSA, 23% higher in absolute SSA than the existing chatbots we evaluated. ",Towards a Human-like Open-Domain Chatbot,2,"['New paper: Towards a Human-like Open-Domain Chatbot. Key takeaways:\n\n1. ""Perplexity is all a chatbot needs"" ;)\n2. We\'re getting closer to a high-quality chatbot that can chat about anything\n\nPaper: <LINK>\nBlog: <LINK> <LINK>', '@xpearhead @lmthang You can find some sample conversations with the bot here:\nhttps://t.co/SP9HO0HpL9']",20,01,307
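The "perplexity is all a chatbot needs" point in the record above rests on the standard definition: perplexity is the exponential of the mean per-token negative log-likelihood, i.e. exactly the quantity the model is trained to minimize. A quick illustration with made-up token probabilities:
```python
import math

# Hypothetical probabilities a model assigned to each observed next token.
token_probs = [0.25, 0.10, 0.60, 0.05, 0.30]

nll = [-math.log(p) for p in token_probs]      # per-token negative log-likelihood
perplexity = math.exp(sum(nll) / len(nll))
print(round(perplexity, 2))  # lower perplexity <-> the observed tokens were less "surprising"
```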
361,16,1244979364153917442,4211253189,Rohit Bhattacharya,Happy to share @raziehnabi and my paper on semiparametric inference in hidden variable causal graphical models! This is a rough time for all. Writing this paper was one way for us to stop obsessing over every new reported case of #Covid_19. Stay safe all! <LINK>,https://arxiv.org/abs/2003.12659,"Identification theory for causal effects in causal models associated with hidden variable directed acyclic graphs (DAGs) is well studied. However, the corresponding algorithms are underused due to the complexity of estimating the identifying functionals they output. In this work, we bridge the gap between identification and estimation of population-level causal effects involving a single treatment and a single outcome. We derive influence function based estimators that exhibit double robustness for the identified effects in a large class of hidden variable DAGs where the treatment satisfies a simple graphical criterion; this class includes models yielding the adjustment and front-door functionals as special cases. We also provide necessary and sufficient conditions under which the statistical model of a hidden variable DAG is nonparametrically saturated and implies no equality constraints on the observed data distribution. Further, we derive an important class of hidden variable DAGs that imply observed data distributions observationally equivalent (up to equality constraints) to fully observed DAGs. In these classes of DAGs, we derive estimators that achieve the semiparametric efficiency bounds for the target of interest where the treatment satisfies our graphical criterion. Finally, we provide a sound and complete identification algorithm that directly yields a weight based estimation strategy for any identifiable effect in hidden variable causal models. ","Semiparametric Inference For Causal Effects In Graphical Models With
Hidden Variables",1,['Happy to share @raziehnabi and my paper on semiparametric inference in hidden variable causal graphical models! This is a rough time for all. Writing this paper was one way for us to stop obsessing over every new reported case of #Covid_19. Stay safe all!\n\n<LINK>'],20,03,262
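The abstract above refers to influence-function-based, doubly robust estimators whose simplest special case is the adjustment (back-door) functional. Purely as a concrete anchor for that special case — not the paper's general algorithm for hidden-variable DAGs — here is the classic augmented-IPW estimator on synthetic data:
```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 5000
C = rng.normal(size=(n, 2))                                 # observed confounders
p = 1 / (1 + np.exp(-(C[:, 0] - 0.5 * C[:, 1])))
A = rng.binomial(1, p)                                      # binary treatment
Y = 2.0 * A + C[:, 0] + 0.5 * C[:, 1] + rng.normal(size=n)  # true causal effect = 2

# Nuisance models: propensity score and outcome regressions.
ps  = LogisticRegression().fit(C, A).predict_proba(C)[:, 1]
mu1 = LinearRegression().fit(C[A == 1], Y[A == 1]).predict(C)
mu0 = LinearRegression().fit(C[A == 0], Y[A == 0]).predict(C)

# Augmented IPW: consistent if either the propensity or the outcome model is correct.
aipw = (mu1 - mu0
        + A * (Y - mu1) / ps
        - (1 - A) * (Y - mu0) / (1 - ps))
print(round(aipw.mean(), 2))   # ~2.0
```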
362,113,1513472978595549184,3131175701,Tsai Shang-Min (Shami),An elevator pitch for our new paper: We devised a mini-chemical network for H2-atmospheres with only 10 forward reactions. Affordable for 3D GCMs. <LINK> (1/2) a (weak) analogy: It's done by having a handful of generals (net reactions) instead of thousands of soldiers (conventional kinetics) fighting a battle. Photochemistry on the way (2/2) <LINK>,https://arxiv.org/abs/2204.04201,"Growing evidence has indicated that the global composition distribution plays an indisputable role in interpreting observational data. 3D general circulation models (GCMs) with a reliable treatment of chemistry and clouds are particularly crucial in preparing for the upcoming observations. In the effort of achieving 3D chemistry-climate modeling, the challenge mainly lies in the expensive computing power required for treating a large number of chemical species and reactions. Motivated by the need for a robust and computationally efficient chemical scheme, we devise a mini-chemical network with a minimal number of species and reactions for H$_2$-dominated atmospheres. We apply a novel technique to simplify the chemical network from a full kinetics model -- VULCAN by replacing a large number of intermediate reactions with net reactions. The number of chemical species is cut down from 67 to 12, with the major species of thermal and observational importance retained, including H$_2$O, CH$_4$, CO, CO$_2$, C$_2$H$_2$, NH$_3$, and HCN. The size of the total reactions is greatly reduced from $\sim$ 800 to 20. The mini-chemical scheme is validated by verifying the temporal evolution and benchmarking the predicted compositions in four exoplanet atmospheres (GJ 1214b, GJ 436b, HD 189733b, HD 209458b) against the full kinetics of VULCAN. It reproduces the chemical timescales and composition distributions of the full kinetics well within an order of magnitude for the major species in the pressure range of 1 bar -- 0.1 mbar across various metallicities and carbon-to-oxygen (C/O) ratios. The small scale of the mini-chemical scheme permits simple use and fast computation, which is optimal for implementation in a 3D GCM or a retrieval framework. We focus on the thermochemical kinetics of net reactions in this paper and address photochemistry in a follow-up paper. ","A Mini-Chemical Scheme with Net Reactions for 3D GCMs I.: Thermochemical
Kinetics",2,"['An elevator pitch for our new paper: We devised a mini-chemical network for H2-atmospheres with only 10 forward reactions. Affordable for 3D GCMs. <LINK> (1/2)', ""a (weak) analogy: It's done by having a handful of generals (net reactions) instead of thousands of soldiers (conventional kinetics) fighting a battle. Photochemistry on the way (2/2) https://t.co/tL0PHto1Y4""]",22,04,350
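The "generals instead of soldiers" analogy above is about replacing ~800 elementary reactions with ~20 net reactions. As a much cruder illustration of why cheap chemistry is attractive inside a 3D GCM — and explicitly not the paper's net-reaction scheme — one can simply relax each species toward its local chemical-equilibrium abundance on a prescribed chemical timescale:
```python
import numpy as np

def relax_chemistry(n, n_eq, tau_chem, dt):
    """Advance mixing ratios n toward equilibrium n_eq over one GCM timestep dt,
    each species with its own chemical timescale (exact solution of linear relaxation)."""
    return n_eq + (n - n_eq) * np.exp(-dt / tau_chem)

n    = np.array([1e-3, 5e-4, 1e-4])    # e.g. CH4, CO, NH3 mixing ratios (made up)
n_eq = np.array([2e-4, 1.2e-3, 5e-5])  # local equilibrium abundances (made up)
tau  = np.array([1e5, 1e5, 1e7])       # chemical timescales in seconds (made up)
print(relax_chemistry(n, n_eq, tau, dt=3e4))
```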
363,149,1257985932374036485,123440015,P. Ajie Utama,"A new paper in #acl2020nlp paper about ""Debiasing NLU Models without Degrading the In-distribution Performance""! Here we address the issue of out-of-distribution/adversarial improvement which is at odds with the in-distribution performance. <LINK> <LINK> We propose a confidence regularization method that helps models to avoid exploiting biases by discouraging them to be overconfident on examples containing artifacts <LINK> This simple method is shown to be effective in obtaining equally high accuracy on 3 OOD evaluations, e.g., MNLI &gt; HANS, Fever &gt; SymmetricFever, QQP &gt; PAWS; while maintaining its in-distribution performance <LINK> By preventing overconfidence, we also show that the resulting models are being better calibrated, i.e., probabilities assigned to the predicted label are more aligned with their accuracy <LINK> Joint work with amazing collaborators in UKP @NafiseSadat & Iryna Gurevych!",https://arxiv.org/abs/2005.00315,"Models for natural language understanding (NLU) tasks often rely on the idiosyncratic biases of the dataset, which make them brittle against test cases outside the training distribution. Recently, several proposed debiasing methods are shown to be very effective in improving out-of-distribution performance. However, their improvements come at the expense of performance drop when models are evaluated on the in-distribution data, which contain examples with higher diversity. This seemingly inevitable trade-off may not tell us much about the changes in the reasoning and understanding capabilities of the resulting models on broader types of examples beyond the small subset represented in the out-of-distribution data. In this paper, we address this trade-off by introducing a novel debiasing method, called confidence regularization, which discourage models from exploiting biases while enabling them to receive enough incentive to learn from all the training examples. We evaluate our method on three NLU tasks and show that, in contrast to its predecessors, it improves the performance on out-of-distribution datasets (e.g., 7pp gain on HANS dataset) while maintaining the original in-distribution accuracy. ","Mind the Trade-off: Debiasing NLU Models without Degrading the
In-distribution Performance",5,"['A new paper in #acl2020nlp paper about ""Debiasing NLU Models without Degrading the In-distribution Performance""! Here we address the issue of out-of-distribution/adversarial improvement which is at odds with the in-distribution performance. <LINK> <LINK>', 'We propose a confidence regularization method that helps models to avoid exploiting biases by discouraging them to be overconfident on examples containing artifacts https://t.co/fwXStC2GNj', 'This simple method is shown to be effective in obtaining equally high accuracy on 3 OOD evaluations, e.g., MNLI &gt; HANS, Fever &gt; SymmetricFever, QQP &gt; PAWS; while maintaining its in-distribution performance https://t.co/pxiWrA2IHx', 'By preventing overconfidence, we also show that the resulting models are being better calibrated, i.e., probabilities assigned to the predicted label are more aligned with their accuracy https://t.co/Na80t5902j', 'Joint work with amazing collaborators in UKP @NafiseSadat &amp; Iryna Gurevych!']",20,05,918
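One way to read the "discouraging models from being overconfident on examples containing artifacts" idea above is as self-distillation in which the teacher's soft labels are smoothed more strongly on examples a bias-only model finds easy, so the student gains nothing from becoming overconfident on them. The exponent-based smoothing below is a plausible sketch of that mechanism, not necessarily the paper's exact parameterization:
```python
import torch
import torch.nn.functional as F

def confidence_regularized_loss(student_logits, teacher_logits, bias_conf):
    """student_logits, teacher_logits: (batch, n_classes).
    bias_conf in [0, 1]: probability the bias-only model assigns to the gold label."""
    with torch.no_grad():
        teacher = F.softmax(teacher_logits, dim=-1)
        # More bias -> stronger smoothing of the target distribution.
        scaled = teacher ** (1.0 - bias_conf.unsqueeze(-1))
        target = scaled / scaled.sum(dim=-1, keepdim=True)
    log_probs = F.log_softmax(student_logits, dim=-1)
    return -(target * log_probs).sum(dim=-1).mean()

loss = confidence_regularized_loss(torch.randn(8, 3), torch.randn(8, 3),
                                   bias_conf=torch.rand(8))
print(loss.item())
```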
364,96,1336692204929212420,804397363402067968,Timothy Raben,"Super excited about this new paper <LINK>, that should excite all you #odderon fans! This is my first collaboration with experimentalists from large collaborations (D0 & TOTEM). The goal of this paper was to look at the difference between proton-proton, and proton-antiproton scattering. Comparing these two processes at similar energies has the potential to reveal some fun physics! Unfortunately we don't usually build a new machine/experiment to just switch from protons to anti-protons, we usually go to higher energies too. So this study looked at several center of mass energies with the TOTEM experiment (proton-proton) and extrapolated down to the highest D0 center of mass energy (proton-antiproton). The specific ""object"" of interest here is the differential cross section as a function of transverse momentum. I.e. how does the probability of scattering change as the transverse momentum of the interaction change. This dip and bump is a characteristic of this process. <LINK> It has long been suspected that the size/dip-ness depends on whether you look at proton-proton or proton-antiproton scattering. As you can see from the picture, it certainly looks like it! In fact we claim the difference in the above graph leads to a 3.4sigma difference. But wait, that's not all! Totem has already measured the total cross section for this process (and something called rho). This measurement is also dominated by the very same physics that we are measuring here. In fact, since the measurements take place in different... transverse momentum region (i.e. not measuring the exact same effect), we can combine the two measurements and say We have &gt;5sigma evidence for the t-channel exchange of a colorless C-odd gluonic compound! Woo! Above 5 sigma! That means we ""know"" there is some real physics going on here. What about all the jargon at the end of the previous tweet? That's about as specific as we can get about what exactly causes this 5 sigma discrepancy. The leading idea is that this is an ""odderon"" effect (#teamodderon), but exactly what an odderon is, how many there are, what other effects there could be at play, and many more questions will have to wait to future studies.",https://arxiv.org/abs/2012.03981,"We describe an analysis comparing the $p\bar{p}$ elastic cross section as measured by the D0 Collaboration at a center-of-mass energy of 1.96 TeV to that in $pp$ collisions as measured by the TOTEM Collaboration at 2.76, 7, 8, and 13 TeV using a model-independent approach. The TOTEM cross sections extrapolated to a center-of-mass energy of $\sqrt{s} =$ 1.96 TeV are compared with the D0 measurement in the region of the diffractive minimum and the second maximum of the $pp$ cross section. The two data sets disagree at the 3.4$\sigma$ level and thus provide evidence for the $t$-channel exchange of a colorless, $C$-odd gluonic compound, also known as the odderon. We combine these results with a TOTEM analysis of the same $C$-odd exchange based on the total cross section and the ratio of the real to imaginary parts of the forward elastic scattering amplitude in $pp$ scattering. The combined significance of these results is larger than 5$\sigma$ and is interpreted as the first observation of the exchange of a colorless, $C$-odd gluonic compound. ","Comparison of $pp$ and $p \bar{p}$ differential elastic cross sections
and observation of the exchange of a colorless $C$-odd gluonic compound",11,"['Super excited about this new paper <LINK>, that should excite all you #odderon fans!\n\nThis is my first collaboration with experimentalists from large collaborations (D0 &amp; TOTEM).', 'The goal of this paper was to look at the difference between proton-proton, and proton-antiproton scattering. Comparing these two processes at similar energies has the potential to reveal some fun physics!', ""Unfortunately we don't usually build a new machine/experiment to just switch from protons to anti-protons, we usually go to higher energies too."", 'So this study looked at several center of mass energies with the TOTEM experiment (proton-proton) and extrapolated down to the highest D0 center of mass energy (proton-antiproton).', 'The specific ""object"" of interest here is the differential cross section as a function of transverse momentum. I.e. how does the probability of scattering change as the transverse momentum of the interaction change.', 'This dip and bump is a characteristic of this process. https://t.co/A5HeZ18y9h', 'It has long been suspected that the size/dip-ness depends on whether you look at proton-proton or proton-antiproton scattering. As you can see from the picture, it certainly looks like it! In fact we claim the difference in the above graph leads to a 3.4sigma difference.', ""But wait, that's not all! Totem has already measured the total cross section for this process (and something called rho). This measurement is also dominated by the very same physics that we are measuring here. In fact, since the measurements take place in different..."", 'transverse momentum region (i.e. not measuring the exact same effect), we can combine the two measurements and say\n\nWe have &gt;5sigma evidence for the t-channel exchange of a colorless C-odd gluonic compound!', 'Woo! Above 5 sigma! That means we ""know"" there is some real physics going on here.\n\nWhat about all the jargon at the end of the previous tweet? That\'s about as specific as we can get about what exactly causes this 5 sigma discrepancy.', 'The leading idea is that this is an ""odderon"" effect (#teamodderon), but exactly what an odderon is, how many there are, what other effects there could be at play, and many more questions will have to wait to future studies.']",20,12,2196
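The last tweets above describe combining the 3.4σ cross-section comparison with an independent TOTEM constraint to exceed 5σ. The collaborations' actual combination procedure is more involved; purely as a generic illustration of how two independent results can cross the discovery threshold, here is Stouffer's method with a hypothetical second significance:
```python
from scipy.stats import norm

z1, z2 = 3.4, 4.0                 # two independent significances (the 4.0 is hypothetical)
z_comb = (z1 + z2) / 2 ** 0.5     # Stouffer's method, equal weights
p_comb = norm.sf(z_comb)
print(round(z_comb, 2), p_comb)   # ~5.2 sigma for these illustrative inputs
```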
365,90,1260010161063616514,251682258,Dr. Knicole Colón,"I am so excited to share our new paper on the inflated sub-Saturn KELT-11b! We used Hubble, Spitzer, and TESS data and find that KELT-11b has a low-amplitude water feature with an unusual shape, suggestive of a sub-solar atmospheric water abundance. <LINK> (1/2) <LINK> I am so grateful for all the work our team put into this project, a team which included @lkreidberg, Mike Line, @luis_wel, @exomadhu, @tgbeatty, @PatrickTamburo, @kevinbstevenson, Avi Mandell, @Astro_JRod, @mrtommyb, and many more! (2/2) <LINK>",https://arxiv.org/abs/2005.05153,"We present an optical-to-infrared transmission spectrum of the inflated sub-Saturn KELT-11b measured with the Transiting Exoplanet Survey Satellite (TESS), the Hubble Space Telescope (HST) Wide Field Camera 3 G141 spectroscopic grism, and the Spitzer Space Telescope (Spitzer) at 3.6 $\mu$m, in addition to a Spitzer 4.5 $\mu$m secondary eclipse. The precise HST transmission spectrum notably reveals a low-amplitude water feature with an unusual shape. Based on free retrieval analyses with varying molecular abundances, we find strong evidence for water absorption. Depending on model assumptions, we also find tentative evidence for other absorbers (HCN, TiO, and AlO). The retrieved water abundance is generally $\lesssim 0.1\times$ solar (0.001--0.7$\times$ solar over a range of model assumptions), several orders of magnitude lower than expected from planet formation models based on the solar system metallicity trend. We also consider chemical equilibrium and self-consistent 1D radiative-convective equilibrium model fits and find they too prefer low metallicities ($[M/H] \lesssim -2$, consistent with the free retrieval results). However, all the retrievals should be interpreted with some caution since they either require additional absorbers that are far out of chemical equilibrium to explain the shape of the spectrum or are simply poor fits to the data. Finally, we find the Spitzer secondary eclipse is indicative of full heat redistribution from KELT-11b's dayside to nightside, assuming a clear dayside. These potentially unusual results for KELT-11b's composition are suggestive of new challenges on the horizon for atmosphere and formation models in the face of increasingly precise measurements of exoplanet spectra. ","An Unusual Transmission Spectrum for the Sub-Saturn KELT-11b Suggestive
of a Sub-Solar Water Abundance",2,"['I am so excited to share our new paper on the inflated sub-Saturn KELT-11b! We used Hubble, Spitzer, and TESS data and find that KELT-11b has a low-amplitude water feature with an unusual shape, suggestive of a sub-solar atmospheric water abundance. <LINK> (1/2) <LINK>', 'I am so grateful for all the work our team put into this project, a team which included @lkreidberg, Mike Line, @luis_wel, @exomadhu, @tgbeatty, @PatrickTamburo, @kevinbstevenson, Avi Mandell, @Astro_JRod, @mrtommyb, and many more! (2/2) https://t.co/B7PmzVUSpX']",20,05,514
366,16,1123109107538378752,913238472357437445,Fuminobu TAKAHASHI,Our new paper on the bouncing universe appeared today. The bounce takes place in 4D Einstein gravity w/o singularity nor violating NEC. A scalar field with a flat potential is the key for the bounce. The slow-roll inflation naturally follows the bounce. <LINK>,https://arxiv.org/abs/1904.12312,"We find a class of solutions for a homogeneous and isotropic universe in which the initially expanding universe stops expanding, experiences contraction, and then expands again (the ""bounce""), in the framework of Einstein gravity with a real scalar field without violating the null energy condition nor encountering any singularities. Two essential ingredients for the bouncing universe are the positive spatial curvature and the scalar potential which becomes flatter at large field values. Depending on the initial condition, either the positive curvature or the negative potential stops the cosmic expansion and begins the contraction phase. The flat potential plays a crucial role in triggering the bounce. After the bounce, the flat potential naturally allows the universe to enter the slow-roll inflation regime, thereby making the bouncing universe compatible with observations. If the e-folding of the subsequent inflation is just enough, a positive spatial curvature may be found in the future observations. Our scenario nicely fits with the creation of the universe from nothing, which leads to the homogeneous and isotropic universe with positive curvature. As a variant of the mechanism, we also find solutions representing a cyclic universe. ",Bouncing Universe from Nothing,1,['Our new paper on the bouncing universe appeared today. The bounce takes place in 4D Einstein gravity w/o singularity nor violating NEC. A scalar field with a flat potential is the key for the bounce. The slow-roll inflation naturally follows the bounce.\n<LINK>'],19,04,260
367,73,1406217438367264770,1319333124086648834,A. Tuan Nguyen,"New paper <LINK> on Domain Adaptation, with Toan Tran, @yaringal, Phil Torr, and @atilimgunes [1/n] We propose a generalization bound of a model's loss on the target domain, based on the training loss and the reverse KL divergence between the source and target distributions. Different from some existing bounds in the literature, our bound works for all cases of ... [2/n] supervised learning, makes no assumptions about the labeling mechanism, works with virtually all predictive distributions commonly used in practice, and thus works with all common loss functions (cross-entropy, squared error, l1). [3/n] Based on the bound, we propose an algorithm to minimize the KL term to improve the generalization performance. Different from other distance metrics, the KL divergence can be estimated with samples, leading to an efficient and stable alignment technique. More importantly,...[4/n] the reverse KL has zero-forcing and mode-seeking effects, which allow for a flexible alignment between the domains, while still efficiently prevent out-of-distribution data at test time. Experiments show that our method outperforms other marginal alignment techniques. [n/n]",https://arxiv.org/abs/2106.07780,"Domain adaptation is an important problem and often needed for real-world applications. In this problem, instead of i.i.d. training and testing datapoints, we assume that the source (training) data and the target (testing) data have different distributions. With that setting, the empirical risk minimization training procedure often does not perform well, since it does not account for the change in the distribution. A common approach in the domain adaptation literature is to learn a representation of the input that has the same (marginal) distribution over the source and the target domain. However, these approaches often require additional networks and/or optimizing an adversarial (minimax) objective, which can be very expensive or unstable in practice. To improve upon these marginal alignment techniques, in this paper, we first derive a generalization bound for the target loss based on the training loss and the reverse Kullback-Leibler (KL) divergence between the source and the target representation distributions. Based on this bound, we derive an algorithm that minimizes the KL term to obtain a better generalization to the target domain. We show that with a probabilistic representation network, the KL term can be estimated efficiently via minibatch samples without any additional network or a minimax objective. This leads to a theoretically sound alignment method which is also very efficient and stable in practice. Experimental results also suggest that our method outperforms other representation-alignment approaches. ",KL Guided Domain Adaptation,5,"['New paper <LINK> on Domain Adaptation, with Toan Tran, @yaringal, Phil Torr, and @atilimgunes [1/n]', ""We propose a generalization bound of a model's loss on the target domain, based on the training loss and the reverse KL divergence between the source and target distributions. Different from some existing bounds in the literature, our bound works for all cases of ... [2/n]"", 'supervised learning, makes no assumptions about the labeling mechanism, works with virtually all predictive distributions commonly used in practice, and thus works with all common loss functions (cross-entropy, squared error, l1). 
[3/n]', 'Based on the bound, we propose an algorithm to minimize the KL term to improve the generalization performance. Different from other distance metrics, the KL divergence can be estimated with samples, leading to an efficient and stable alignment technique. More importantly,...[4/n]', 'the reverse KL has zero-forcing and mode-seeking effects, which allow for a flexible alignment between the domains, while still efficiently prevent out-of-distribution data at test time. Experiments show that our method outperforms other marginal alignment techniques. [n/n]']",21,06,1166
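The thread above highlights that the reverse-KL alignment term can be estimated from minibatch samples of a probabilistic representation network, with no adversarial objective. A simplified stand-in for that estimate is to fit diagonal Gaussians to the source and target representations in each minibatch and penalize KL(target ‖ source) in closed form; this Gaussian moment-matching shortcut is an assumption of the sketch and differs in detail from the paper's estimator.
```python
import torch

def reverse_kl_gaussian(z_t, z_s, eps=1e-5):
    """KL( N(mu_t, var_t) || N(mu_s, var_s) ) with diagonal covariances,
    where each Gaussian is fit by moments to a minibatch of representations."""
    mu_t, var_t = z_t.mean(0), z_t.var(0) + eps
    mu_s, var_s = z_s.mean(0), z_s.var(0) + eps
    kl = 0.5 * (torch.log(var_s / var_t) + (var_t + (mu_t - mu_s) ** 2) / var_s - 1.0)
    return kl.sum()

z_source = torch.randn(64, 32)            # source-minibatch representations
z_target = torch.randn(64, 32) + 0.5      # target minibatch, slightly shifted
print(reverse_kl_gaussian(z_target, z_source))
# During training this term would be added, with some weight, to the supervised source loss.
```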
368,30,1398300354807226370,750727628294848512,Malena Rice,"Exciting new coauthor paper on arxiv today, led by Songhu Wang. High obliquities of giant exoplanet host stars are almost exclusively associated with wide-separation planets (a/R* &gt; 10) or hot stars, providing evidence for tidal damping of hot Jupiters <LINK>",https://arxiv.org/abs/2105.12902,"Measuring the obliquity distribution of stars hosting warm Jupiters may help us to understand the formation of close-orbiting gas giants. Few such measurements have been performed due to practical difficulties in scheduling observations of the relatively infrequent and long-duration transits of warm Jupiters. Here, we report a measurement of the Rossiter-McLaughlin effect for K2-232b, a warm Jupiter (M_P=0.39 M_Jup) on an 11.17-day orbit with an eccentricity of 0.26. The data were obtained with the Automated Planet Finder during two separate transits. The planet's orbit appears to be well-aligned with the spin axis of the host star, with a projected spin-orbit angle of lambda = -11.1+/-6.6 deg. Combined with the other available data, we find that high obliquities are almost exclusively associated with planets that either have an orbital separation greater than 10 stellar radii or orbit stars with effective temperatures hotter than 6,000K. This pattern suggests that the obliquities of the closest-orbiting giant planets around cooler stars have been damped by tidal effects. ",The Aligned Orbit of the Eccentric Warm Jupiter K2-232b,1,"['Exciting new coauthor paper on arxiv today, led by Songhu Wang. High obliquities of giant exoplanet host stars are almost exclusively associated with wide-separation planets (a/R* &gt; 10) or hot stars, providing evidence for tidal damping of hot Jupiters <LINK>']",21,05,262
369,12,1167361191531995136,793871446049185792,Franziska Schmidt,"New paper on the arxiv today by my colleague @FKirchschlager: ""Dust survival rates in clumps passing through the Cas A reverse shock I: results for a range of clump densities"", accepted by MNRAS (<LINK>)! see this thread for a tl;dr! 😀 The origin of cosmic dust (especially in the early universe) is still highly uncertain with one potential source being core-collapse supernovae (massive stars that collapse and then explode at the end of their lives, forming beautiful remnants such as Cassiopeia A). [Img: Wiki] <LINK> Observations of such remnants have shown that dust is indeed produced in the inner regions of these remnants, however once formed the dust can easily be destroyed again through interactions with the so-called reverse shock (a shock formed by interactions between the expanding remnant and the interstellar material around it). Exactly how much of the dust manages to survive under these conditions is currently not well-constraint. To investigate dust destruction processes in supernova remnants, we are using hydrodynamics simulations (such as this one: <LINK>) representing the remnant combined with Dr Kirchschlager's dust post-processing code PAPERBOATS. PAPERBOATS comes with a more complete suite of physics than previous studies including drag effects and destruction by either sputtering (grain erosion due to interactions between dust and gas) or collisions (grain growth or destruction due to interactions between grains). We find that within a reasonable parameter ranges up to 30% of the carbon dust can survive the remnants and the results show quite clearly that grain-grain collisions (which are often neglected in similar studies) play a crucial role and should be considered in realistic models. <LINK>",https://arxiv.org/abs/1908.10875,"The reverse shock in the ejecta of core-collapse supernovae is potentially able to destroy newly formed dust material. In order to determine dust survival rates, we have performed a set of hydrodynamic simulations using the grid-based code AstroBEAR in order to model a shock wave interacting with clumpy supernova ejecta. Dust motions and destruction rates were computed using our newly developed external, post-processing code Paperboats, which includes gas drag, grain charging, sputtering and grain-grain collisions. We have determined dust destruction rates for the oxygen-rich supernova remnant Cassiopeia A as a function of initial grain sizes and clump gas density. We found that up to 30 % of the carbon dust mass is able to survive the passage of the reverse shock if the initial grain size distribution is narrow with radii around ~10 - 50 nm for high gas densities, or with radii around ~0.5 - 1.5 ${\mu}$m for low and medium gas densities. Silicate grains with initial radii around 10 - 30 nm show survival rates of up to 40 % for medium and high density contrasts, while silicate material with micron sized distributions is mostly destroyed. For both materials, the surviving dust mass is rearranged into a new size distribution that can be approximated by two components: a power-law distribution of small grains and a log-normal distribution of grains having the same size range as the initial distribution. Our results show that grain-grain collisions and sputtering are synergistic and that grain-grain collisions can play a crucial role in determining the surviving dust budget in supernova remnants. ","Dust survival rates in clumps passing through the Cas A reverse shock I:
results for a range of clump densities",7,"['New paper on the arxiv today by my colleague @FKirchschlager: ""Dust survival rates in clumps passing through the Cas A reverse shock I: results for a range of clump densities"", accepted by MNRAS (<LINK>)!\nsee this thread for a tl;dr! 😀', 'The origin of cosmic dust (especially in the early universe) is still highly uncertain with one potential source being core-collapse supernovae (massive stars that collapse and then explode at the end of their lives, forming beautiful remnants such as Cassiopeia A). [Img: Wiki] https://t.co/A6kOsBx2XB', 'Observations of such remnants have shown that dust is indeed produced in the inner regions of these remnants, however once formed the dust can easily be destroyed again through interactions with the so-called reverse shock', '(a shock formed by interactions between the expanding remnant and the interstellar material around it). Exactly how much of the dust manages to survive under these conditions is currently not well-constraint.', ""To investigate dust destruction processes in supernova remnants, we are using hydrodynamics simulations (such as this one: https://t.co/DfrdTuDLiM) representing the remnant combined with Dr Kirchschlager's dust post-processing code PAPERBOATS."", 'PAPERBOATS comes with a more complete suite of physics than previous studies including drag effects and destruction by either sputtering (grain erosion due to interactions between dust and gas) or collisions (grain growth or destruction due to interactions between grains).', 'We find that within a reasonable parameter ranges up to 30% of the carbon dust can survive the remnants and the results show quite clearly that grain-grain collisions (which are often neglected in similar studies) play a crucial role and should be considered in realistic models. https://t.co/HlrbLqNO7B']",19,08,1741
370,124,1489149555249713152,1015534555384766464,Thomas Van Riet,"'You gotta keep em separated' <LINK> New paper with ""El Risitas"" (@MigMontero), Timm Wrase and our former master student, Fien Apers, who is really behind this all! She is now a PhD student at Oxford, working with @JosephPConlon @Zugzwang_Sor thanks!",https://arxiv.org/abs/2202.00682,"AdS flux vacua with a parametric separation between the AdS and KK scales have been conjectured to be in the Swampland. We study flux compactifications of massive IIA supergravity with O6 planes which are claimed to allow moduli-stabilised and scale separated AdS$_3$ and AdS$_4$ vacua at arbitrary weak coupling and large volume. A recent refinement of the AdS Distance Conjecture is shown to be inconsistent with the class of AdS$_3$ vacua because the requisite discrete higher form symmetries are absent. We further perform a tree-level study of non-perturbative decays for the nonsupersymmetric versions of the AdS$_3$ solutions, and find that the vacua are stable within this approximation. Finally, we provide an initial investigation of the would-be dual CFT$_2$'s and CFT$_3$'s. We study roughly a dozen different models and find for all AdS$_4$ DGKT-type vacua that the dual operators to the lightest scalars have integer dimensions. For the putative CFT$_2$ dual theories of the AdS$_3$ vacua we find no integer dimensions for the operators. ",Comments on classical AdS flux vacua with scale separation,2,"['\'You gotta keep em separated\' \n\n<LINK>\n\nNew paper with ""El Risitas"" (@MigMontero), Timm Wrase and our former master student, Fien Apers, who is really behind this all! She is now a PhD student at Oxford, working with @JosephPConlon', '@Zugzwang_Sor thanks!']",22,02,251
371,61,1275529437573009408,390847005,Antony Alexos,"With my research colleagues we wrote a paper on statistical machine learning. More specifically we proposed a new method that combines local competition and a novel uncertainty mechanism, for adversarial robustness. <LINK> #deep_learning #machine_learning",https://arxiv.org/abs/2006.10620,"This work attempts to address adversarial robustness of deep networks by means of novel learning arguments. Specifically, inspired from results in neuroscience, we propose a local competition principle as a means of adversarially-robust deep learning. We argue that novel local winner-takes-all (LWTA) nonlinearities, combined with posterior sampling schemes, can greatly improve the adversarial robustness of traditional deep networks against difficult adversarial attack schemes. We combine these LWTA arguments with tools from the field of Bayesian non-parametrics, specifically the stick-breaking construction of the Indian Buffet Process, to flexibly account for the inherent uncertainty in data-driven modeling. As we experimentally show, the new proposed model achieves high robustness to adversarial perturbations on MNIST and CIFAR10 datasets. Our model achieves state-of-the-art results in powerful white-box attacks, while at the same time retaining its benign accuracy to a high degree. Equally importantly, our approach achieves this result while requiring far less trainable model parameters than the existing state-of-the-art. ","Local Competition and Uncertainty for Adversarial Robustness in Deep
Learning",1,"['With my research colleagues we wrote a paper on statistical machine learning. More specifically we proposed a new method that combines local competition and a novel uncertainty mechanism, for adversarial robustness.\n<LINK> \n#deep_learning #machine_learning']",20,06,255
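The "local competition" ingredient in the record above is a local winner-takes-all (LWTA) nonlinearity: units are grouped into small blocks and only the winner within each block passes its activation. A deterministic PyTorch sketch follows; the paper additionally samples winners stochastically through an Indian-Buffet-Process-based posterior, which is not reproduced here.
```python
import torch

def lwta(x, block_size=2):
    """Hard local winner-takes-all: keep only the maximum unit in each block."""
    b, d = x.shape
    assert d % block_size == 0
    blocks = x.view(b, d // block_size, block_size)
    winners = blocks.argmax(dim=-1, keepdim=True)
    mask = torch.zeros_like(blocks).scatter_(-1, winners, 1.0)
    return (blocks * mask).view(b, d)

x = torch.randn(4, 8)
print(lwta(x))   # within every pair of units, the loser is zeroed out
```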
372,8,1290286645992779776,1327377308,Milagros Miceli,"📢📢Our new paper has been accepted for #CSCW2020!!! 📢📢 Through fieldwork at two companies, @martnsch, Tianling Yang, and I explored power imbalances in data annotation for computer vision products. A preprint is available here: <LINK> <LINK> In this paper, we discuss the deeply normative nature of data classification and its effects on datasets: * What structures shape the sensemaking of data? * Who decides what labels best define an image? tl;dr Here is a short video featuring this research <LINK> Annotators interpret data according to imposed classifications that follow the demands of clients, their product and their revenue plan. Power imbalances related to the possession of capital not only translate into asymmetrical labor conditions but also concretely shape datasets Finally, we layout implications for practitioners and researchers. We argue for power-reflexive documentation practices, as a method to restore context and improve transparency and accountability in dataset production.",https://arxiv.org/abs/2007.14886,"The interpretation of data is fundamental to machine learning. This paper investigates practices of image data annotation as performed in industrial contexts. We define data annotation as a sense-making practice, where annotators assign meaning to data through the use of labels. Previous human-centered investigations have largely focused on annotators subjectivity as a major cause for biased labels. We propose a wider view on this issue: guided by constructivist grounded theory, we conducted several weeks of fieldwork at two annotation companies. We analyzed which structures, power relations, and naturalized impositions shape the interpretation of data. Our results show that the work of annotators is profoundly informed by the interests, values, and priorities of other actors above their station. Arbitrary classifications are vertically imposed on annotators, and through them, on data. This imposition is largely naturalized. Assigning meaning to data is often presented as a technical matter. This paper shows it is, in fact, an exercise of power with multiple implications for individuals and society. ","Between Subjectivity and Imposition: Power Dynamics in Data Annotation
for Computer Vision",4,"['📢📢Our new paper has been accepted for #CSCW2020!!! 📢📢\nThrough fieldwork at two companies, @martnsch, Tianling Yang, and I explored power imbalances in data annotation for computer vision products. \nA preprint is available here: <LINK> <LINK>', 'In this paper, we discuss the deeply normative nature of data classification and its effects on datasets:\n* What structures shape the sensemaking of data?\n* Who decides what labels best define an image? \n\ntl;dr\nHere is a short video featuring this research\nhttps://t.co/xxtRMqakom', 'Annotators interpret data according to imposed classifications that follow the demands of clients, their product and their revenue plan. \nPower imbalances related to the possession of capital not only translate into asymmetrical labor conditions but also concretely shape datasets', 'Finally, we layout implications for practitioners and researchers. We argue for power-reflexive documentation practices, as a method to restore context and improve transparency and accountability in dataset production.']",20,07,1002
373,15,1245152811672715265,556151596,Lawrence M. Krauss,"Happy about this new research paper by the first large experimental collaboration I have been a member of, proposed to search for particle dark matter and an exotic process called double beta decay. It is rather technical, but here is the link. <LINK> @IVerboten meh @nadine_feiler This expt is designed to detect very rare events depositing small amounts of energy in an underground detector. Two exotic sources: particle dark matter that collides with particles in the detector, or very rare radioactive decays inside the detector. Both involve new physics. @FarrimondRobert yes.. but actually these types of things have already been used as deep underground observatories, looking for neutrinos and dark matter, for decades.. This device is bigger, and more sensitive...and has multiple uses. @IVerboten meh, I have looked at it enough to feel it isn't interesting. @FarrimondRobert Gran Sasso, in Italy is a tunnel lab built for expts like this.",https://arxiv.org/abs/2003.13407,"The DARWIN observatory is a proposed next-generation experiment to search for particle dark matter and for the neutrinoless double beta decay of $^{136}$Xe. Out of its 50$\,$t total natural xenon inventory, 40$\,$t will be the active target of a time projection chamber which thus contains about 3.6 t of $^{136}$Xe. Here, we show that its projected half-life sensitivity is $2.4\times10^{27}\,$yr, using a fiducial volume of 5t of natural xenon and 10$\,$yr of operation with a background rate of less than 0.2$~$events/(t$\cdot$yr) in the energy region of interest. This sensitivity is based on a detailed Monte Carlo simulation study of the background and event topologies in the large, homogeneous target. DARWIN will be comparable in its science reach to dedicated double beta decay experiments using xenon enriched in $^{136}$Xe. ","Sensitivity of the DARWIN observatory to the neutrinoless double beta
decay of $^{136}$Xe",6,"['Happy about this new research paper by the first large experimental collaboration I have been a member of, proposed to search for particle dark matter and an exotic process called double beta decay. It is rather technical, but here is the link. <LINK>', '@IVerboten meh', '@nadine_feiler This expt is designed to detect very rare events depositing small amounts of energy in an underground detector. Two exotic sources: particle dark matter that collides with particles in the detector, or very rare radioactive decays inside the detector. Both involve new physics.', '@FarrimondRobert yes.. but actually these types of things have already been used as deep underground observatories, looking for neutrinos and dark matter, for decades.. This device is bigger, and more sensitive...and has multiple uses.', ""@IVerboten meh, I have looked at it enough to feel it isn't interesting."", '@FarrimondRobert Gran Sasso, in Italy is a tunnel lab built for expts like this.']",20,03,949
374,92,1047108038849638400,822867138,Bradley Kavanagh,"""Faint Light from #DarkMatter"": <LINK> In a new paper out today, we try to classify & update constraints on different ways that DM could interact with the Standard Model photon In an EFT or in specific UV models, DM interacting with light is a bit of a mess <LINK> The whole project started w/ the question ""What do DM-photon interactions look like in experiments?"" Then: ""What are all the ways DM can interact with light?"" Turns out that there are some other interactions you need for everything to be consistent. Pretty soon you end up here: <LINK> DM-photon couplings (& interactions they bring along for the ride) can be constrained by direct/indirect-detection+colliders In some cases, constraints aren't very strong. For Majorana DM, New Physics mediating these interactions (charged scalars?) could be as light as 100 GeV <LINK>",https://arxiv.org/abs/1810.00033,"Even if Dark Matter (DM) is neutral under electromagnetism, it can still interact with the Standard Model (SM) via photon exchange from higher-dimensional operators. Here we classify the general effective operators coupling DM to photons, distinguishing between Dirac/Majorana fermion and complex/real scalar DM. We provide model-independent constraints on these operators from direct and indirect detection. We also constrain various DM-lepton operators, which induce DM-photon interactions via RG running or which typically arise in sensible UV-completions. This provides a simple way to quickly assess constraints on any DM model that interacts mainly via photon exchange or couples to SM leptons. ","Faint Light from Dark Matter: Classifying and Constraining Dark
Matter-Photon Effective Operators",3,"['""Faint Light from #DarkMatter"": <LINK>\n\nIn a new paper out today, we try to classify &amp; update constraints on different ways that DM could interact with the Standard Model photon\n\nIn an EFT or in specific UV models, DM interacting with light is a bit of a mess <LINK>', 'The whole project started w/ the question ""What do DM-photon interactions look like in experiments?""\n\nThen: ""What are all the ways DM can interact with light?""\n\nTurns out that there are some other interactions you need for everything to be consistent. Pretty soon you end up here: https://t.co/qu60h0AKCM', ""DM-photon couplings (&amp; interactions they bring along for the ride) can be constrained by direct/indirect-detection+colliders\n\nIn some cases, constraints aren't very strong. For Majorana DM, New Physics mediating these interactions (charged scalars?) could be as light as 100 GeV https://t.co/WM41Kuc1UI""]",18,10,835
375,126,1489222128587218944,1149606436084699136,Ulrich Pennig,One of the papers that I worked on with D. Evans during lockdown is now on the arXiv: <LINK> We study the symmetry groups of certain infinite tensor product algebras equipped with a circle action and reveal the rich topological structure of this group. I find the result very satisfying and it is close to a complete answer in the case of circle actions (with lots of room for generalisations). I am tempted to write a little explanatory blurb about it on my homepage.,https://arxiv.org/abs/2201.13364,"We develop an equivariant Dixmier-Douady theory for locally trivial bundles of $C^*$-algebras with fibre $D \otimes \mathbb{K}$ equipped with a fibrewise $\mathbb{T}$-action, where $\mathbb{T}$ denotes the circle group and $D = \operatorname{End}\left(V\right)^{\otimes \infty}$ for a $\mathbb{T}$-representation $V$. In particular, we show that the group of $\mathbb{T}$-equivariant $*$-automorphisms $\operatorname{Aut}_{\mathbb{T}}(D \otimes \mathbb{K})$ is an infinite loop space giving rise to a cohomology theory $E^*_{D,\mathbb{T}}(X)$. Isomorphism classes of equivariant bundles then form a group with respect to the fibrewise tensor product that is isomorphic to $E^1_{D,\mathbb{T}}(X) \cong [X, B\operatorname{Aut}_{\mathbb{T}}(D \otimes \mathbb{K})]$. We compute this group for tori and compare the case $D = \mathbb{C}$ to the equivariant Brauer group for trivial actions on the base space. ","Equivariant higher Dixmier-Douady Theory for circle actions on
UHF-algebras",2,"['One of the papers that I worked on with D. Evans during lockdown is now on the arXiv: <LINK> We study the symmetry groups of certain infinite tensor product algebras equipped with a circle action and reveal the rich topological structure of this group.', 'I find the result very satisfying and it is close to a complete answer in the case of circle actions (with lots of room for generalisations). I am tempted to write a little explanatory blurb about it on my homepage.']",22,01,468
376,3,1312709885499305985,897971748,barnabe.eth,"Our new paper, Data-Driven Models of Selfish Routing, was accepted for #wine2020! It follows work on routing games started during my PhD that grew to become a combination of experimental and theoretical results <LINK> It starts from an observation we made in a previous paper (Routing games in the wild, also chapter 5 of my thesis here: <LINK>) where we bounded empirically the price of anarchy of a real routing system, Singapore, via a large-scale data collection experiment Price of anarchy is a widely studied measure of system performance, comparing system congestion between a centralised setting where agents follow a benevolent dictator's orders and a decentralised setting where agents optimise for themselves. More here! <LINK> Our data looked at the routing system, namely the travel time of students going to school in the morning. The bound we determined empirically was quite a bit lower from the well-known theoretical worst case bound. Is there something particular about real world routing systems? Turns out there is, and the reason is very ""micro"". We make the assumption that agents rule out routes that are just too unreasonable (for instance, you wouldn't go through Kuala Lumpur to reach the other side of Singapore). Commuters don't have infinite knowledge after all! <LINK> When agents only consider routes that are almost as good as the ""fastest"" route (details in the paper), it rules out the really bad routing networks that give rise to the theoretical worst case bounds (eg Pigou network), and we get more realistic bounds <LINK> The model our team in Singapore derived from the data matched work done by another team in Italy, who discovered independently the same, with much deeper theoretical results. We joined forces to combine their insights with our data and produced this paper (Fin)",https://arxiv.org/abs/2009.12871,"We investigate traffic routing both from the perspective of theory as well as real world data. First, we introduce a new type of games: $\theta$-free flow games. Here, commuters only consider, in their strategy sets, paths whose free-flow costs (informally their lengths) are within a small multiplicative $(1+\theta)$ constant of the optimal free-flow cost path connecting their source and destination, where $\theta\geq0$. We provide an exhaustive analysis of tight bounds on PoA($\theta$) for arbitrary classes of cost functions, both in the case of general congestion/routing games as well as in the special case of path-disjoint networks. Second, by using a large mobility dataset in Singapore, we inspect minute-by-minute decision-making of thousands of commuters, and find that $\theta=1$ is a good estimate of agents' route (pre)selection mechanism. In contrast, in Pigou networks, the ratio of the free-flow costs of the routes, and thus $\theta$, is \textit{infinite}; so, although such worst case networks are mathematically simple, they correspond to artificial routing scenarios with little resemblance to real world conditions, opening the possibility of proving much stronger Price of Anarchy guarantees by explicitly studying their dependency on $\theta$. For example, in the case of the standard Bureau of Public Roads (BPR) cost model, where$c_e(x)= a_e x^4+b_e$, and for quartic cost functions in general, the standard PoA bound for $\theta=\infty$ is $2.1505$, and this is tight both for general networks as well as path-disjoint and even parallel-edge networks. 
In comparison, for $\theta=1$, the PoA in the case of general networks is only $1.6994$, whereas for path-disjoint/parallel-edge networks it is even smaller ($1.3652$), showing that both the route geometries as captured by the parameter $\theta$ as well as the network topology have significant effects on PoA. ","Data-Driven Models of Selfish Routing: Why Price of Anarchy Does Depend
on Network Topology",7,"['Our new paper, Data-Driven Models of Selfish Routing, was accepted for #wine2020! It follows work on routing games started during my PhD that grew to become a combination of experimental and theoretical results <LINK>', 'It starts from an observation we made in a previous paper (Routing games in the wild, also chapter 5 of my thesis here: https://t.co/fXfsq7V7px) where we bounded empirically the price of anarchy of a real routing system, Singapore, via a large-scale data collection experiment', ""Price of anarchy is a widely studied measure of system performance, comparing system congestion between a centralised setting where agents follow a benevolent dictator's orders and a decentralised setting where agents optimise for themselves. More here! https://t.co/14mWz4z8Nt"", 'Our data looked at the routing system, namely the travel time of students going to school in the morning. The bound we determined empirically was quite a bit lower from the well-known theoretical worst case bound. Is there something particular about real world routing systems?', 'Turns out there is, and the reason is very ""micro"". We make the assumption that agents rule out routes that are just too unreasonable (for instance, you wouldn\'t go through Kuala Lumpur to reach the other side of Singapore). Commuters don\'t have infinite knowledge after all! https://t.co/UTJkmAtoS0', 'When agents only consider routes that are almost as good as the ""fastest"" route (details in the paper), it rules out the really bad routing networks that give rise to the theoretical worst case bounds (eg Pigou network), and we get more realistic bounds https://t.co/gUZpNBRUcY', 'The model our team in Singapore derived from the data matched work done by another team in Italy, who discovered independently the same, with much deeper theoretical results. We joined forces to combine their insights with our data and produced this paper (Fin)']",20,09,1822
377,281,1402559980566315008,878343763764039680,M Akash Kumar,"How can we optimize language generation models for both quality and diversity? Find out in our KDD'21 paper titled ""Diversity driven Query Rewriting in Search Advertising"". Paper: <LINK> (1/n) In this paper, we describe recent improvements we’ve made to algorithms that match search queries to ads in Microsoft Bing. Specifically, we focus on the problem of rewriting a user search query into multiple same-intent keywords. (2/n) We propose CLOVER, a framework for optimizing human judgments on rewrite quality while also being able to control the desired diversity. We use diversity driven RL algorithms where the optimization objective enforces generating multiple diverse and high-quality rewrites. (3/n) <LINK> We perform online A/B experiments on Bing, which shows that our approach leads to (i) better user engagement with an average increase in clicks by 12.83% accompanied with an average defect reduction by 13.97%, and (ii) improved revenue by 21.29%.",https://arxiv.org/abs/2106.03816,"Retrieving keywords (bidwords) with the same intent as query, referred to as close variant keywords, is of prime importance for effective targeted search advertising. For head and torso search queries, sponsored search engines use a huge repository of same intent queries and keywords, mined ahead of time. Online, this repository is used to rewrite the query and then lookup the rewrite in a repository of bid keywords contributing to significant revenue. Recently generative retrieval models have been shown to be effective at the task of generating such query rewrites. We observe two main limitations of such generative models. First, rewrites generated by these models exhibit low lexical diversity, and hence the rewrites fail to retrieve relevant keywords that have diverse linguistic variations. Second, there is a misalignment between the training objective - the likelihood of training data, v/s what we desire - improved quality and coverage of rewrites. In this work, we introduce CLOVER, a framework to generate both high-quality and diverse rewrites by optimizing for human assessment of rewrite quality using our diversity-driven reinforcement learning algorithm. We use an evaluation model, trained to predict human judgments, as the reward function to finetune the generation policy. We empirically show the effectiveness of our proposed approach through offline experiments on search queries across geographies spanning three major languages. We also perform online A/B experiments on Bing, a large commercial search engine, which shows (i) better user engagement with an average increase in clicks by 12.83% accompanied with an average defect reduction by 13.97%, and (ii) improved revenue by 21.29%. ",Diversity driven Query Rewriting in Search Advertising,4,"['How can we optimize language generation models for both quality and diversity?\n\nFind out in our KDD\'21 paper titled ""Diversity driven Query Rewriting in Search Advertising"". \n\nPaper: <LINK>\n\n(1/n)', 'In this paper, we describe recent improvements we’ve made to algorithms that match search queries to ads in Microsoft Bing. Specifically, we focus on the problem of rewriting a user search query into multiple same-intent keywords. \n\n(2/n)', 'We propose CLOVER, a framework for optimizing human\njudgments on rewrite quality while also being able to control the desired diversity. 
We use diversity driven RL algorithms where the optimization objective enforces generating multiple diverse and high-quality rewrites. \n\n(3/n) https://t.co/4CnYQKdt6M', 'We perform online A/B experiments on Bing, which shows that our approach leads to (i) better user engagement with an average increase in clicks by 12.83% accompanied with an average defect reduction by 13.97%, and (ii) improved revenue by 21.29%.']",21,06,964
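The record above describes optimizing generated query rewrites for both quality and diversity. As a rough illustration of that trade-off (not CLOVER's actual reward or evaluation model), the sketch below greedily selects rewrites by a quality score minus a penalty for lexical overlap with rewrites already chosen; the quality scores and the Jaccard penalty are placeholder assumptions.

# Generic illustration: pick rewrites by quality minus a diversity penalty.
# Quality scores and the overlap penalty are made up for this example.

def jaccard(a, b):
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / max(len(sa | sb), 1)

def select_rewrites(candidates, quality, k=3, lam=0.5):
    """candidates: list of strings; quality: dict mapping rewrite -> score."""
    selected = []
    while candidates and len(selected) < k:
        def score(r):
            penalty = max((jaccard(r, s) for s in selected), default=0.0)
            return quality[r] - lam * penalty
        best = max(candidates, key=score)
        selected.append(best)
        candidates = [c for c in candidates if c != best]
    return selected

cands = ["cheap flights to paris", "low cost paris flights", "paris hotel deals"]
qual = {cands[0]: 0.9, cands[1]: 0.85, cands[2]: 0.4}
print(select_rewrites(cands, qual, k=2))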
378,245,1271442953039417344,962748619194617856,Alan Morningstar,"New preprint: We find that the 1D MBL transition---under our model assumptions---is in a totally new universality class! <LINK> @RyanPlestid Thanks! Within our model the RG flow shares some features with KT universality, with the critical point being the endpoint of an MBL fixed line. The flow is similarly slow along the critical manifold, but fast perpendicular to it, unlike KT. @RyanPlestid Ya so the divergence of the ""correlation length"" is a different form, for example. The driving physics in our model is the proliferation of locally thermal regions (<LINK>) at longer and longer timescales (lengthscales). @RyanPlestid Will do! (Also maybe a better reference than the previous one I sent: <LINK>.)",https://arxiv.org/abs/2006.04825,"We examine the many-body localization (MBL) phase transition in one-dimensional quantum systems with quenched randomness and short-range interactions. Following recent works, we use a strong-randomness renormalization group (RG) approach where the phase transition is due to the so-called avalanche instability of the MBL phase. We show that the critical behavior can be determined analytically within this RG. On a rough $\textit{qualitative}$ level the RG flow near the critical fixed point is similar to the Kosterlitz-Thouless (KT) flow as previously shown, but there are important differences in the critical behavior. Thus we show that this MBL transition is in a new universality class that is different from KT. The divergence of the correlation length corresponds to critical exponent $\nu \rightarrow \infty$, but the divergence is weaker than for the KT transition. ",Many-body localization near the critical point,4,"['New preprint: We find that the 1D MBL transition---under our model assumptions---is in a totally new universality class! <LINK>', '@RyanPlestid Thanks! Within our model the RG flow shares some features with KT universality, with the critical point being the endpoint of an MBL fixed line. The flow is similarly slow along the critical manifold, but fast perpendicular to it, unlike KT.', '@RyanPlestid Ya so the divergence of the ""correlation length"" is a different form, for example. The driving physics in our model is the proliferation of locally thermal regions (https://t.co/uWfsP0jApx) at longer and longer timescales (lengthscales).', '@RyanPlestid Will do! (Also maybe a better reference than the previous one I sent: https://t.co/nRHKGR2kG1.)']",20,06,708
379,71,1394748134773956612,19510090,Julian Togelius,"Can you use deep learning to model nonlinear partial differential equations? In a new paper led by @ruben_torrado of @AiOrigen, with contributions from @Bumblebor and me, we show that we can model the Buckley-Leverett equation using attention mechanisms. <LINK> <LINK> The Buckley-Leverett equation, which describes liquid flow through a porous medium, is hugely important to many engineering equations in e.g. reservoir modeling. It is also expensive to approximate. Until now, no-one has effectively modeled it with deep learning. The trick here is to use an attention mechanism, which allows the network to effectively model the discontinuity of the shock that travels through the medium. In our experiments, we could see the attention mechanism tracking the shock front, yielding a physical interpretation. <LINK> If you are interested in this research, and want to contribute to pushing the limits of modeling physical systems with deep learning, @AiOrigen is currently hiring deep learning researchers. <LINK> Read more about what the company does here: <LINK>",https://arxiv.org/abs/2105.07898,"Physics-Informed Neural Networks (PINNs) have enabled significant improvements in modelling physical processes described by partial differential equations (PDEs). PINNs are based on simple architectures, and learn the behavior of complex physical systems by optimizing the network parameters to minimize the residual of the underlying PDE. Current network architectures share some of the limitations of classical numerical discretization schemes when applied to non-linear differential equations in continuum mechanics. A paradigmatic example is the solution of hyperbolic conservation laws that develop highly localized nonlinear shock waves. Learning solutions of PDEs with dominant hyperbolic character is a challenge for current PINN approaches, which rely, like most grid-based numerical schemes, on adding artificial dissipation. Here, we address the fundamental question of which network architectures are best suited to learn the complex behavior of non-linear PDEs. We focus on network architecture rather than on residual regularization. Our new methodology, called Physics-Informed Attention-based Neural Networks, (PIANNs), is a combination of recurrent neural networks and attention mechanisms. The attention mechanism adapts the behavior of the deep neural network to the non-linear features of the solution, and break the current limitations of PINNs. We find that PIANNs effectively capture the shock front in a hyperbolic model problem, and are capable of providing high-quality solutions inside and beyond the training set. ","Physics-informed attention-based neural network for solving non-linear
partial differential equations",5,"['Can you use deep learning to model nonlinear partial differential equations? In a new paper led by @ruben_torrado of @AiOrigen, with contributions from @Bumblebor and me, we show that we can model the Buckley-Leverett equation using attention mechanisms. \n<LINK> <LINK>', 'The Buckley-Leverett equation, which describes liquid flow through a porous medium, is hugely important to many engineering equations in e.g. reservoir modeling. It is also expensive to approximate. Until now, no-one has effectively modeled it with deep learning.', 'The trick here is to use an attention mechanism, which allows the network to effectively model the discontinuity of the shock that travels through the medium. In our experiments, we could see the attention mechanism tracking the shock front, yielding a physical interpretation. https://t.co/Hl1Pyb1BBh', 'If you are interested in this research, and want to contribute to pushing the limits of modeling physical systems with deep learning, @AiOrigen is currently hiring deep learning researchers.\nhttps://t.co/0pCKHMfGnz', 'Read more about what the company does here:\nhttps://t.co/yV1HpoSobH']",21,05,1066
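The record above combines a physics-informed residual loss with an attention mechanism. The sketch below is a minimal illustration of that idea for the Buckley-Leverett equation S_t + f(S)_x = 0 with f(S) = S^2 / (S^2 + M (1 - S)^2): a small PyTorch network with self-attention over collocation points, trained on the PDE residual alone. The architecture, mobility ratio M, and training details are placeholder assumptions, not the authors' PIANN.

# Minimal sketch of an attention-based physics-informed network for the
# Buckley-Leverett equation. Architecture and hyperparameters are illustrative.
import torch
import torch.nn as nn

M = 2.0  # assumed mobility ratio (placeholder)

class AttnPINN(nn.Module):
    def __init__(self, width=32):
        super().__init__()
        self.inp = nn.Linear(2, width)
        self.attn = nn.MultiheadAttention(width, num_heads=4, batch_first=True)
        self.out = nn.Sequential(nn.Tanh(), nn.Linear(width, 1), nn.Sigmoid())

    def forward(self, x, t):
        h = torch.tanh(self.inp(torch.stack([x, t], dim=-1)))                 # (N, width)
        h, _ = self.attn(h.unsqueeze(0), h.unsqueeze(0), h.unsqueeze(0))      # attention across collocation points
        return self.out(h.squeeze(0)).squeeze(-1)                             # S(x, t) in [0, 1]

def flux(S):
    # Buckley-Leverett fractional-flow function
    return S**2 / (S**2 + M * (1.0 - S)**2)

model = AttnPINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(256, requires_grad=True)
t = torch.rand(256, requires_grad=True)
for step in range(100):
    S = model(x, t)
    S_t = torch.autograd.grad(S.sum(), t, create_graph=True)[0]
    f_x = torch.autograd.grad(flux(S).sum(), x, create_graph=True)[0]
    loss = ((S_t + f_x) ** 2).mean()    # PDE residual only; initial/boundary terms omitted
    opt.zero_grad()
    loss.backward()
    opt.step()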
380,166,1448668532632432641,4020498861,Sarah Schwettmann,"New for #ICCV21: Toward a Visual Concept Vocabulary for GAN Latent Space ✨w/ @evanqed @davidbau @metasj @jacobandreas & torralba Paper: <LINK> Website: <LINK> What visual concepts are shared by humans and GANs? How can we discover them? 1/n <LINK> Our bottom-up approach captures concepts at different levels of abstraction in a single vocabulary. Directions represent not only details like color, texture & rotation, but also higher-level aspects of visual experience, like what makes a scene more ‘welcoming’ or ‘spooky’ 3/n <LINK> Perceptual salience is built-in: our vocabulary is learned from a new dataset of directions labeled with their semantics. 4/n <LINK> We disentangle these annotations into a glossary of “primitive” visual transformations associated with single concepts. The concepts generalize across latent space + image class, and compose! Compound concepts not present in annotations are recognized across observers. 5/n <LINK>",https://arxiv.org/abs/2110.04292,"A large body of recent work has identified transformations in the latent spaces of generative adversarial networks (GANs) that consistently and interpretably transform generated images. But existing techniques for identifying these transformations rely on either a fixed vocabulary of pre-specified visual concepts, or on unsupervised disentanglement techniques whose alignment with human judgments about perceptual salience is unknown. This paper introduces a new method for building open-ended vocabularies of primitive visual concepts represented in a GAN's latent space. Our approach is built from three components: (1) automatic identification of perceptually salient directions based on their layer selectivity; (2) human annotation of these directions with free-form, compositional natural language descriptions; and (3) decomposition of these annotations into a visual concept vocabulary, consisting of distilled directions labeled with single words. Experiments show that concepts learned with our approach are reliable and composable -- generalizing across classes, contexts, and observers, and enabling fine-grained manipulation of image style and content. ",Toward a Visual Concept Vocabulary for GAN Latent Space,4,"['New for #ICCV21: Toward a Visual Concept Vocabulary for GAN Latent Space \n✨w/ @evanqed @davidbau @metasj @jacobandreas &amp; torralba \n\nPaper: <LINK>\nWebsite: <LINK>\n\nWhat visual concepts are shared by humans and GANs?\nHow can we discover them? \n1/n <LINK>', 'Our bottom-up approach captures concepts at different levels of abstraction in a single vocabulary.\n\nDirections represent not only details like color, texture &amp; rotation, but also higher-level aspects of visual experience, like what makes a scene more ‘welcoming’ or ‘spooky’\n3/n https://t.co/LMpf6ZLky7', 'Perceptual salience is built-in: our vocabulary is learned from a new dataset of directions labeled with their semantics. \n\n4/n https://t.co/w8MFPOvngj', 'We disentangle these annotations into a glossary of “primitive” visual transformations associated with single concepts. \n\nThe concepts generalize across latent space + image class, and compose! Compound concepts not present in annotations are recognized across observers.\n5/n https://t.co/kQHN6I8hA8']",21,10,950
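The record above concerns labeled, perceptually salient directions in a GAN's latent space. The generic operation those directions support is a shift z' = z + alpha * d before decoding. The sketch below shows only that operation; the generator and the "spooky" direction are dummies standing in for a real GAN and the learned vocabulary.

# Generic illustration of editing along a labeled latent direction.
# The generator and the direction are stand-ins, not the paper's models.
import torch
import torch.nn as nn

latent_dim = 128
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 3 * 8 * 8))  # dummy generator

vocabulary = {"spooky": torch.randn(latent_dim)}   # placeholder for a learned, labeled direction

def edit(z, concept, alpha=2.0):
    d = vocabulary[concept]
    d = d / d.norm()            # unit-normalize the direction
    return G(z + alpha * d)     # decode the shifted latent code

z = torch.randn(1, latent_dim)
img_before, img_after = G(z), edit(z, "spooky")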
381,103,1325802040447000577,15612654,Alan Stern,"#PI_Daily Sometimes data you take yields major collateral discoveries. Check out this cool new paper led by @TodLauer advancing a longstanding problem in extragalactic astronomy, care of @NewHorizons2015 data made for other purposes! #Science #Space #NASA <LINK> <LINK>",https://arxiv.org/abs/2011.03052?fbclid=IwAR3mUGNUCUu__yhLO3vb8boBoFdcxFMUdMjWMWL6zTV211PQxrViM9D6n-I,"We used existing data from the New Horizons LORRI camera to measure the optical-band ($0.4\lesssim\lambda\lesssim0.9{\rm\mu m}$) sky brightness within seven high galactic latitude fields. The average raw level measured while New Horizons was 42 to 45 AU from the Sun is $33.2\pm0.5{\rm ~nW ~m^{-2} ~sr^{-1}}.$ This is $\sim10\times$ darker than the darkest sky accessible to the {\it Hubble Space Telescope}, highlighting the utility of New Horizons for detecting the cosmic optical background (COB). Isolating the COB contribution to the raw total requires subtracting scattered light from bright stars and galaxies, faint stars below the photometric detection-limit within the fields, and diffuse Milky Way light scattered by infrared cirrus. We remove newly identified residual zodiacal light from the IRIS $100\mu$m all sky maps to generate two different estimates for the diffuse galactic light (DGL). Using these yields a highly significant detection of the COB in the range ${\rm 15.9\pm 4.2\ (1.8~stat., 3.7~sys.) ~nW ~m^{-2} ~sr^{-1}}$ to ${\rm 18.7\pm 3.8\ (1.8~stat., 3.3 ~sys.)~ nW ~m^{-2} ~sr^{-1}}$ at the LORRI pivot wavelength of 0.608 $\mu$m. Subtraction of the integrated light of galaxies (IGL) fainter than the photometric detection-limit from the total COB level leaves a diffuse flux component of unknown origin in the range ${\rm 8.8\pm4.9\ (1.8 ~stat., 4.5 ~sys.) ~nW ~m^{-2} ~sr^{-1}}$ to ${\rm 11.9\pm4.6\ (1.8 ~stat., 4.2 ~sys.) ~nW ~m^{-2} ~sr^{-1}}$. Explaining it with undetected galaxies requires the galaxy-count faint-end slope to steepen markedly at $V>24$ or that existing surveys are missing half the galaxies with $V< 30.$ ",New Horizons Observations of the Cosmic Optical Background,1,"['#PI_Daily Sometimes data you take yields major collateral discoveries. Check out this cool new paper led by @TodLauer advancing a longstanding problem in extragalactic astronomy, care of @NewHorizons2015 data made for other purposes! #Science #Space #NASA <LINK> <LINK>']",20,11,269
382,51,1275959462218489857,598117717,Farhad Farokhi,"In the paper below, we have rewritten machine learning with locally-differentially private datasets as a distributionally-robust optimization problem. This results in an entirely new regularizer for training regression models that can be posed as SDP. <LINK>",https://arxiv.org/abs/2006.13488,"We consider machine learning, particularly regression, using locally-differentially private datasets. The Wasserstein distance is used to define an ambiguity set centered at the empirical distribution of the dataset corrupted by local differential privacy noise. The ambiguity set is shown to contain the probability distribution of unperturbed, clean data. The radius of the ambiguity set is a function of the privacy budget, spread of the data, and the size of the problem. Hence, machine learning with locally-differentially private datasets can be rewritten as a distributionally-robust optimization. For general distributions, the distributionally-robust optimization problem can be relaxed as a regularized machine learning problem with the Lipschitz constant of the machine learning model as a regularizer. For linear and logistic regression, this regularizer is the dual norm of the model parameters. For Gaussian data, the distributionally-robust optimization problem can be solved exactly to find an optimal regularizer. This approach results in an entirely new regularizer for training linear regression models. Training with this novel regularizer can be posed as a semi-definite program. Finally, the performance of the proposed distributionally-robust machine learning training is demonstrated on practical datasets. ","Distributionally-Robust Machine Learning Using Locally
Differentially-Private Data",1,"['In the paper below, we have rewritten machine learning with locally-differentially private datasets as a distributionally-robust optimization problem. This results in an entirely new regularizer for training regression models that can be posed as SDP.\n<LINK>']",20,06,258
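The abstract above states that, for general distributions, the distributionally-robust problem relaxes to a regularized learning problem whose regularizer is the dual norm of the model parameters. The sketch below fits that relaxed form for linear regression with a plain (sub)gradient loop; the ambiguity radius, the Laplace "privatization" noise, and the synthetic data are placeholder assumptions, and the paper's exact Gaussian/SDP formulation is not reproduced.

# Minimal sketch of the relaxed Wasserstein-DRO form for linear regression:
# minimize empirical loss + epsilon * ||w||_2 (dual norm of the parameters),
# fitted here by gradient descent on synthetic, crudely privatized data.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
X_priv = X + rng.laplace(scale=0.5, size=X.shape)   # stand-in for LDP noise (placeholder scale)

epsilon = 0.3      # Wasserstein ambiguity radius (placeholder)
w = np.zeros(d)
lr = 0.01
for _ in range(2000):
    grad_loss = 2.0 / n * X_priv.T @ (X_priv @ w - y)
    grad_reg = epsilon * w / (np.linalg.norm(w) + 1e-12)   # subgradient of epsilon * ||w||_2
    w -= lr * (grad_loss + grad_reg)
print(w)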
383,18,1044945666877747200,194377912,Brian Keating,New research post with @andyfriedman2: “Constraints on Lorentz Invariance and CPT Violation using Optical Photometry and Polarimetry of Active Galaxies BL Lacertae and S5 B0716+714”. Really fun to write my first paper in optical astronomy! <LINK> <LINK>,http://arxiv.org/abs/1809.08356,"Various quantum gravity approaches that extend beyond the standard model predict Lorentz Invariance and Charge-Parity-Time Violation at energies approaching the Planck scale. These models frequently predict a wavelength dependent speed of light, which would result in time delays between promptly emitted photons at different energies, as well as a wavelength-dependent rotation of the plane of linear polarization for photons resulting from vacuum birefringence. Here, we describe a pilot program with an automated system of small telescopes that can simultaneously conduct high cadence optical photometry and polarimetry of Active Galactic Nuclei (AGN) in multiple passbands. We use these observations as a proof-of-principle to demonstrate how such data can be used to test various Lorentz Violation models, including special cases of the Standard Model Extension (SME). In our initial campaign with this system, the Array Photo Polarimeter, we observed two AGN sources, including BL Lacertae at redshift z = 0.069, and S5 B0716+714 at z = 0.31. We demonstrate that optical polarimetry with a broadband Luminance filter combined with simultaneous $I_c$-band observations yields SME parameter constraints that are up to ~10 and ~30 times more sensitive than with a standard $I_c$-band filter, for SME models with mass dimension d = 5 and d = 6, respectively. Using only a small system of telescopes with an effective 0.45-m aperture, we further demonstrate d = 5 constraints for individual lines of sight that are within a factor of ~1-10 in sensitivity to comparable constraints from optical polarimetry with a 3.6-m telescope. Such an approach could significantly improve existing SME constraints via a polarimetric all-sky survey of AGN with multiple 1-meter class telescopes. ","Constraints on Lorentz Invariance and CPT Violation using Optical
Photometry and Polarimetry of Active Galaxies BL Lacertae and S5 B0716+714",1,['New research post with @andyfriedman2:\n“Constraints on Lorentz Invariance and CPT Violation using Optical Photometry and Polarimetry of Active Galaxies BL Lacertae and S5 B0716+714”. Really fun to write my first paper in optical astronomy! <LINK> <LINK>'],18,09,253
384,45,1163438346389086208,1138144242533044225,Andrew S Maxwell,"Check out our new paper, a method for detecting #parity in #atoms and #molecules using #photoelectron #holography. With both experimental and theoretical demonstrations. <LINK> #photoelectronholography For more information on #photoelectronholography see our review and previous publications: <LINK> <LINK> <LINK> <LINK>",https://arxiv.org/abs/1908.03860,"We introduce a novel and concise methodology to detect the parity of atomic and molecular orbitals based on photoelectron holography, which is more general than the existing schemes. It fully accounts for the Coulomb distortions of electron trajectories, does not require sculpted fields to retrieve phase information and, in principle, is applicable to a broad range of electron momenta. By comparatively measuring the differential photoelectron spectra from strong-field ionization of N$_{2}$ molecules and their companion atoms of Ar, some photoelectron holography patterns are found to be dephased for both targets. This is well reproduced by the full-dimensional time-dependent Schr\""{o}dinger equation and the Coulomb quantum-orbit strong-field approximation (CQSFA) simulation. Using the CQSFA, we trace back our observations to different parities of the 3$p$ orbital of Ar and the highest-occupied molecular orbital of N$_{2}$ via interfering Coulomb-distorted quantum orbits carrying different initial phases. This method could in principle be used to extract bound-state phases from any holographic structure, with a wide range of potential applications in recollision physics and spectroscopy. ",Holographic detection of parity in atomic and molecular orbitals,2,"['Check out our new paper, a method for detecting #parity in #atoms and #molecules using #photoelectron #holography. With both experimental and theoretical demonstrations.\n<LINK>\n#photoelectronholography', 'For more information on #photoelectronholography see our review and previous publications:\nhttps://t.co/NfNOREgoR5\nhttps://t.co/HU1zxUpc24\nhttps://t.co/gH4zw9kRyL\nhttps://t.co/OvjG6gdoI3']",19,08,320
385,87,1491837547131154437,334231357,Joris Witstok,"What is a redshift machine? 🧮 If you're curious, have a look at this paper led by Sander Schouws @UniLeiden, presenting new @almaobs observations of six extremely distant galaxies resulting in the discovery of far-infrared [C II] lines in three of them! <LINK>",http://arxiv.org/abs/2202.04080,"The [CII]$_{158\mu m}$ line has long been proposed as a promising line to spectroscopically confirm galaxies in the epoch of reionization. In this paper we present the results of new ALMA observations spectral scanning for [CII] in six particularly luminous Lyman Break Galaxies at $z\sim7$. The six sources were drawn from a sample of bright $z\sim7$ galaxies identified using the wide-area optical, near-IR, and Spitzer/IRAC data over the COSMOS/UltraVISTA field and were targeted on the basis of tight constraints on their redshifts from their IRAC [3.6]-[4.5] colors. We detect significant ($>9\sigma$) [CII] lines in three of our six targets ($50\%$) co-spatial with the rest-$UV$ emission from the ground/space-based near-IR imaging. The luminosities of the [CII] lines lie in the range $5.6$ to $8.8\times10^{8}L_{\odot}$, consistent with the local [CII]-SFR relation. Meanwhile, their [CII]/$L_{IR}\sim1-3\times10^{-3}$ ratios are slightly elevated compared to local (U)LIRGS. This could be due to lower dust-to-gas or dust-to-metal ratios. We also find that our sources display a large kinematic diversity, with one source showing signs of rotation, one source a likely major merger and one dispersion dominated source that might contain a bright star-forming clump. Our results highlight the effectiveness of spectral scans with ALMA in spectroscopically confirming luminous galaxies in the epoch of reionization, something that is being be applied on a significantly larger sample in the on-going REBELS large program. ","ALMA as a Redshift Machine: Using [CII] to Efficiently Confirm Galaxies
in the Epoch of Reionization",1,"[""What is a redshift machine? 🧮 If you're curious, have a look at this paper led by Sander Schouws @UniLeiden, presenting new @almaobs observations of six extremely distant galaxies resulting in the discovery of far-infrared [C II] lines in three of them!\n<LINK>""]",22,02,260
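Confirming a galaxy with a far-infrared line comes down to matching an observed frequency to the line's rest frequency. A quick worked example for [C II] follows; the rest frequency of roughly 1900.537 GHz is a standard value, while the observed frequency used below is illustrative rather than one of the paper's measurements.

# Worked example: redshift from an observed [CII] line frequency.
NU_REST_CII = 1900.537          # GHz, rest frequency of the [CII] 158 micron line

def cii_redshift(nu_obs_ghz):
    return NU_REST_CII / nu_obs_ghz - 1.0

print(cii_redshift(237.567))    # ~7.0, i.e. a z ~ 7 galaxy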
386,341,1313109648023719936,182730982,Andreas Rücklé,"I'm excited to share “MultiCQA”, accepted at @EMNLP2020 We train 140 models on different domains and surprisingly find that neither domain similarity nor data size are critical factors for the best zero-shot transferability. <LINK> \w @PfeiffJo IGurevych <LINK> We train text matching models on all English StackExchange forums with self-supervision. The majority of our 140 models outperforms common IR baselines on non-factoid answer selection and question similarity tasks. <LINK> Our zero-shot MultiCQA model incorporates self-supervised and supervised multi-task learning on all source domains, and outperforms the in-domain SoTA on six evaluation benchmarks. @lintool @emnlp2020 @PfeiffJo That's great work, thanks for pointing it out! We must have missed it and will add it to the camera ready version!",https://arxiv.org/abs/2010.00980,"We study the zero-shot transfer capabilities of text matching models on a massive scale, by self-supervised training on 140 source domains from community question answering forums in English. We investigate the model performances on nine benchmarks of answer selection and question similarity tasks, and show that all 140 models transfer surprisingly well, where the large majority of models substantially outperforms common IR baselines. We also demonstrate that considering a broad selection of source domains is crucial for obtaining the best zero-shot transfer performances, which contrasts the standard procedure that merely relies on the largest and most similar domains. In addition, we extensively study how to best combine multiple source domains. We propose to incorporate self-supervised with supervised multi-task learning on all available source domains. Our best zero-shot transfer model considerably outperforms in-domain BERT and the previous state of the art on six benchmarks. Fine-tuning of our model with in-domain data results in additional large gains and achieves the new state of the art on all nine benchmarks. ","MultiCQA: Zero-Shot Transfer of Self-Supervised Text Matching Models on
a Massive Scale",4,"[""I'm excited to share “MultiCQA”, accepted at @EMNLP2020\n\nWe train 140 models on different domains and surprisingly find that neither domain similarity nor data size are critical factors for the best zero-shot transferability.\n\n<LINK>\n\\w @PfeiffJo IGurevych <LINK>"", 'We train text matching models on all English StackExchange forums with self-supervision. The majority of our 140 models outperforms common IR baselines on non-factoid answer selection and question similarity tasks. https://t.co/7bWWYOoyh1', 'Our zero-shot MultiCQA model incorporates self-supervised and supervised multi-task learning on all source domains, and outperforms the in-domain SoTA on six evaluation benchmarks.', ""@lintool @emnlp2020 @PfeiffJo That's great work, thanks for pointing it out! We must have missed it and will add it to the camera ready version!""]",20,10,809
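The record above trains text-matching models on many source domains at once. The sketch below shows one generic way to mix per-domain batches into a single training loop with a margin ranking loss; the domains, encoder, sampling scheme, and data are placeholder assumptions rather than the paper's BERT-based setup.

# Generic sketch of multi-domain training by mixing per-domain batches.
# Everything here is a placeholder: fake domain sizes, a toy encoder,
# and random tensors standing in for (question, positive, negative) triples.
import random
import torch
import torch.nn as nn

domains = {"askubuntu": 1200, "cooking": 300, "travel": 500}   # fake domain sizes
weights = list(domains.values())

encoder = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 32))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
margin_loss = nn.MarginRankingLoss(margin=0.5)

def fake_batch():
    # stand-in for embeddings of (question, positive answer, negative answer)
    return torch.randn(16, 64), torch.randn(16, 64), torch.randn(16, 64)

for step in range(100):
    domain = random.choices(list(domains), weights=weights)[0]   # size-proportional sampling
    q, pos, neg = fake_batch()
    eq, ep, en = encoder(q), encoder(pos), encoder(neg)
    s_pos = nn.functional.cosine_similarity(eq, ep)
    s_neg = nn.functional.cosine_similarity(eq, en)
    loss = margin_loss(s_pos, s_neg, torch.ones_like(s_pos))
    opt.zero_grad()
    loss.backward()
    opt.step()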
387,53,1205157782707589121,1658162341,Narayanan Rengaswamy,New paper out on adaptive strategies for discriminating tensor product quantum states! We focus on cases where the individual subsystems might not be copies of each other. See <LINK> for details. This is primarily Sarah Brandsen's project. @kenbrownquantum,https://arxiv.org/abs/1912.05087,"Discrimination between quantum states is a fundamental task in quantum information theory. Given two arbitrary tensor-product quantum states (TPQS) $\rho_{\pm} = \rho_{\pm}^{(1)} \otimes \cdots \otimes \rho_{\pm}^{(N)}$, determining the joint $N$-system measurement to optimally distinguish between the two states is a hard problem. Thus, there is great interest in identifying local measurement schemes that are optimal or close-to-optimal. In this work, we focus on distinguishing between two general TPQS. We begin by generalizing previous work by Acin et al. (Phys. Rev. A 71, 032338) to show that a locally greedy (LG) scheme using Bayesian updating can optimally distinguish between two states that can be written as tensor products of arbitrary pure states. Then, we show that even in the limit of large $N$ the same algorithm cannot distinguish tensor products of mixed states with vanishing error probability. This poor asymptotic behavior occurs because the Helstrom measurement becomes trivial for sufficiently biased priors. Based on this, we introduce a modified locally greedy (MLG) scheme with strictly better performance. In the second part of this work, we compare these simple local schemes with a general dynamic programming (DP) approach that finds the optimal series of local measurements to distinguish the two states. When the subsystems are non-identical, we demonstrate that the ordering of the systems affects performance and we extend the DP technique to determine the optimal ordering adaptively. Finally, in contrast to the binary optimal collective measurement, we show that adaptive protocols on sufficiently large (e.g., qutrit) subsystems must contain non-binary measurements to be optimal. (The code that produced the simulation results in this paper can be found at: this https URL) ","Adaptive Procedures for Discrimination Between Arbitrary Tensor-Product
Quantum States",1,"[""New paper out on adaptive strategies for discriminating tensor product quantum states! We focus on cases where the individual subsystems might not be copies of each other. See <LINK> for details. This is primarily Sarah Brandsen's project. @kenbrownquantum""]",19,12,256
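The locally greedy scheme described above measures one subsystem at a time with the Helstrom measurement for the current prior and then Bayesian-updates that prior. A numpy sketch for qubit subsystems follows; the two example states are arbitrary, and the loop shows a single simulated run rather than the paper's full adaptive protocols.

# Sketch of a locally greedy discrimination round for qubit subsystems:
# measure each subsystem with the Helstrom measurement for the current prior,
# then Bayesian-update the prior with the observed outcome.
import numpy as np

def helstrom_projectors(rho_plus, rho_minus, prior):
    """Projectors onto the +/- eigenspaces of prior*rho_plus - (1-prior)*rho_minus."""
    gamma = prior * rho_plus - (1.0 - prior) * rho_minus
    vals, vecs = np.linalg.eigh(gamma)
    P_plus = np.zeros((2, 2), dtype=complex)
    for lam, v in zip(vals, vecs.T):
        if lam >= 0:
            P_plus += np.outer(v, v.conj())
    return P_plus, np.eye(2) - P_plus

def update_prior(prior, rho_plus, rho_minus, projector):
    p_plus = prior * np.trace(projector @ rho_plus).real
    p_minus = (1.0 - prior) * np.trace(projector @ rho_minus).real
    return p_plus / (p_plus + p_minus)

# two arbitrary single-qubit mixed states, identical on every subsystem for simplicity
rho_p = np.array([[0.8, 0.1], [0.1, 0.2]])
rho_m = np.array([[0.3, -0.2], [-0.2, 0.7]])

prior = 0.5
rng = np.random.default_rng(1)
for subsystem in range(5):                      # measure 5 subsystems in sequence
    P_plus, P_minus = helstrom_projectors(rho_p, rho_m, prior)
    # simulate an outcome assuming the true state is rho_p on every subsystem
    p_click = np.trace(P_plus @ rho_p).real
    outcome = P_plus if rng.random() < p_click else P_minus
    prior = update_prior(prior, rho_p, rho_m, outcome)
print("posterior for the '+' hypothesis:", prior)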
388,170,1473751912373633024,990433714948661250,Sergey Levine,"In the real world, humans learn autonomously, without a simulator, and without magically resetting the world instantly to retry. Can we shift from episodic RL, and study ""autonomous"" RL instead that works more like that? Paper: <LINK> Archit's thread below 👇 <LINK>",https://arxiv.org/abs/2112.09605,"Reinforcement learning (RL) provides a naturalistic framing for learning through trial and error, which is appealing both because of its simplicity and effectiveness and because of its resemblance to how humans and animals acquire skills through experience. However, real-world embodied learning, such as that performed by humans and animals, is situated in a continual, non-episodic world, whereas common benchmark tasks in RL are episodic, with the environment resetting between trials to provide the agent with multiple attempts. This discrepancy presents a major challenge when attempting to take RL algorithms developed for episodic simulated environments and run them on real-world platforms, such as robots. In this paper, we aim to address this discrepancy by laying out a framework for Autonomous Reinforcement Learning (ARL): reinforcement learning where the agent not only learns through its own experience, but also contends with lack of human supervision to reset between trials. We introduce a simulated benchmark EARL around this framework, containing a set of diverse and challenging simulated tasks reflective of the hurdles introduced to learning when only a minimal reliance on extrinsic intervention can be assumed. We show that standard approaches to episodic RL and existing approaches struggle as interventions are minimized, underscoring the need for developing new algorithms for reinforcement learning with a greater focus on autonomy. ",Autonomous Reinforcement Learning: Formalism and Benchmarking,1,"['In the real world, humans learn autonomously, without a simulator, and without magically resetting the world instantly to retry. Can we shift from episodic RL, and study ""autonomous"" RL instead that works more like that?\nPaper: <LINK>\nArchit\'s thread below 👇 <LINK>']",21,12,265
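The distinction drawn in the record above is structural: episodic RL resets the environment between trials, while autonomous RL must learn from one long stream of experience. The sketch below shows only that structural difference, with a dummy environment and a random policy standing in for a real agent and for the EARL benchmark.

# Structural sketch contrasting episodic and non-episodic (autonomous) interaction.
# The environment and policy are dummies; this is not the EARL benchmark.
import random

class DummyEnv:
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        self.s += a
        return self.s, -abs(self.s), False, {}   # obs, reward, done, info

def policy(obs):
    return random.choice([-1, 1])

env = DummyEnv()

# Episodic RL: the environment is reset between trials.
for episode in range(10):
    obs = env.reset()
    for t in range(100):
        obs, r, done, _ = env.step(policy(obs))

# Autonomous RL: a single lifelong stream of experience, no external resets.
obs = env.reset()       # only one initial reset
for t in range(1000):
    obs, r, done, _ = env.step(policy(obs))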
389,52,1321279153002455041,1230965221407150080,Noah Golowich,"New paper with Sarath Pattathil & Costis Daskalakis: <LINK>. We answer the question: at what rate can players' actions converge to equilibrium if each plays according a no-regret algorithm in a smooth monotone game? We study the no-regret Optimistic Gradient (OG) algorithm, and show that its T-th iterate converges at a rate of 1/sqrt(T). We also prove a matching lower bound. Previous work established either rates for on-average convergence or showed last-iterate convergence but without rates To prove our upper bound we introduce a potential function that depends on the global structure of the game -- we call it an adaptive potential function. ""System-augmentation"" type approaches previously used for showing convergence of OG (w/o rates) seem not to work.",https://arxiv.org/abs/2010.13724,"We study the question of obtaining last-iterate convergence rates for no-regret learning algorithms in multi-player games. We show that the optimistic gradient (OG) algorithm with a constant step-size, which is no-regret, achieves a last-iterate rate of $O(1/\sqrt{T})$ with respect to the gap function in smooth monotone games. This result addresses a question of Mertikopoulos & Zhou (2018), who asked whether extra-gradient approaches (such as OG) can be applied to achieve improved guarantees in the multi-agent learning setting. The proof of our upper bound uses a new technique centered around an adaptive choice of potential function at each iteration. We also show that the $O(1/\sqrt{T})$ rate is tight for all $p$-SCLI algorithms, which includes OG as a special case. As a byproduct of our lower bound analysis we additionally present a proof of a conjecture of Arjevani et al. (2015) which is more direct than previous approaches. ","Tight last-iterate convergence rates for no-regret learning in