| Field | Type | Length / classes |
|---|---|---|
| id | string | 1–169 chars |
| pr-title | string | 2–190 chars |
| pr-article | string | 0–65k chars |
| pr-summary | string | 47–4.27k chars |
| sc-title | string | 2 classes |
| sc-article | string | 0–2.03M chars |
| sc-abstract | string | 2 classes |
| sc-section_names | sequence | length 0 |
| sc-sections | sequence | length 0 |
| sc-authors | sequence | length 0 |
| source | string | 2 classes |
| Topic | string | 10 classes |
| Citation | string | 4–4.58k chars |
| Paper_URL | string | 4–213 chars |
| News_URL | string | 4–119 chars |
| pr-summary-and-article | string | 49–66.1k chars |
**id:** 10.1038/s41928-022-00748-4

**pr-title:** A new solution to cool electronic devices and prevent them from overheating

**pr-article:** Electronic devices, including smartphones and tablets, are becoming increasingly advanced and compact. As their performance increases and their size decreases, these devices generate more heat, which can reduce their safety and cause them to break. In recent years, engineers have therefore been trying to develop strategies to prevent electronics from overheating. One proposed solution entails the use of heat spreaders, layers that promote the spread and dissipation of heat inside devices. Researchers at the University of Illinois at Urbana-Champaign and the University of California, Berkeley (UC Berkeley) have recently devised an alternative strategy that could cool electronics more efficiently than other existing solutions. Their strategy, introduced in a paper published in Nature Electronics, is based on heat spreaders made of an electrically insulating layer of poly(2-chloro-p-xylylene) (parylene C) and a coating of copper.

"Our recent paper was the culmination of our efforts to produce coating heat spreaders for high-efficiency electronics cooling," Tarek Gebrael, one of the researchers who carried out the study, told TechXplore. "The motivation was to enable effective heat dissipation from power-dense electronics."

Heat spreaders are cooling systems made of materials with high thermal conductivity, such as copper and aluminum. These systems spread the heat generated by a device across a larger surface area, making it easier to dissipate that heat into the surrounding environment.

"The advantage of using our conformal coating heat spreaders is that they cover the electronic device entirely, including the top, bottom, and sides of the device," Gebrael explained. "This is impossible with standard heat spreaders which are usually added on top of the device or with standard PCB copper planes. By achieving those conformal coatings, we were able to provide more routes for the heat to leave the electronic device, which translates into a better cooling performance."

In the past, teams had developed similar techniques that prevent overheating by opening more "routes" for heat to leave electronic devices. Previously proposed solutions, however, rely on very expensive materials, such as diamond, which makes them difficult to develop and implement on a large scale. Gebrael and his colleagues evaluated their copper-coated heat spreaders in a series of tests and found that they performed extremely well. Specifically, their solution achieved up to a 740% increase in power per unit volume compared with the standard air-cooled copper heat sinks used today.

"This remarkable result derives from our spreaders' effectiveness in dissipating the heat, as well as the compact volume they occupy when applied on printed circuit boards," Gebrael said. "This feature enables fitting more electronics in a smaller space without overheating issues, which is essential to create the platforms of future technologies (AI, augmented reality, etc.)."

In the future, the heat spreaders developed by this team could be used to cool electronic devices more efficiently, without requiring expensive materials. Notably, the proposed coating recipe combines processes that are already in use in the electronics industry, which could further facilitate its application in real-world settings and its commercialization.
"We are now investigating the reliability and durability of our coatings in specific environments (boiling water, boiling dielectric fluids, thermal cycling, and high-voltage environments) for long periods of time," Gebrael added. "We want to make sure that our coatings retain their superior cooling performance. We are also implementing the coatings with full-scale power modules and GPU cards, whereas we used only simple test boards in the initial work." | Researchers at the University of Illinois at Urbana-Champaign and University of California, Berkeley have developed a new strategy for cooling electronic devices, which involves using heat spreaders comprised of an electrical insulating layer and a copper coating. This conformal coating heat spreader covers the entire device, including the top, bottom, and sides, providing more routes for heat to dissipate and achieving a 740% increase in power per unit volume compared to standard air-cooled copper heat sinks. The team's solution is more efficient and cost-effective than previous proposals, which used expensive materials like diamond, and could be used to cool down electronic devices without overheating issues, enabling the creation of smaller, more powerful devices for applications such as AI and augmented reality. | None | Abstract Electrification is critical to decarbonizing society, but managing increasing power densification in electrical systems will require the development of new thermal management technologies. One approach is to use monolithic-metal-based heat spreaders that reduce thermal resistance and temperature fluctuation in electronic devices. However, their electrical conductivity makes them challenging to implement. Here we report co-designed electronic systems that monolithically integrate copper directly on electronic devices for heat spreading and temperature stabilization. The approach first coats the devices with an electrical insulating layer of poly(2-chloro- p -xylylene) (parylene C) and then a conformal coating of copper. This allows the copper to be in close proximity to the heat-generating elements, eliminating the need for thermal interface materials and providing improved cooling performance compared with existing technologies. We test the approach with gallium nitride power transistors, and show that it can be used in systems operating at up to 600 V and provides a low junction-to-ambient specific thermal resistance of 2.3 cm 2 K W –1 in quiescent air and 0.7 cm 2 K W –1 in quiescent water. Main Thermal management infrastructure plays a key role in decreasing the energy consumption of electronic systems in, for example, data centres 1 , 2 and electric vehicles 3 , 4 , 5 . Efficient thermal management techniques using indirect liquid cooling 6 , 7 , 8 and immersion cooling 9 , 10 , 11 can enable heat removal with low energy consumption. Thermal management is also important for keeping device temperatures below their reliable operating limits, leading to increased reliability and higher system power densities 12 . Emergent wide-bandgap devices such as gallium nitride (GaN) and silicon carbide (SiC) transistors can tolerate junction temperatures up to 150 °C, but other components located nearby are often rated for lower operating temperatures (<100 °C), thus requiring efficient cooling techniques at the chip, board and system levels 13 . 
Conventionally, heat spreading is accomplished by adding a high-thermal-conductivity component to the heat-generating device [14], thus reducing the overall junction-to-coolant thermal resistance [15-17]. Laptops, for example, use heat-spreading graphite pressed against the electronics to facilitate heat dissipation [18]. One drawback of conventional heat spreaders is their inability to reach shadowed regions underneath devices. The absence of contact between the heat source (the junction) and the heat-spreading medium leads to less efficient cooling. Although heat-spreading methods based on diamond [19] and graphene [20] address this problem, they are not scalable. In this Article, we report a board-level heat-spreading technology in which the heat-spreading material can reach confined regions underneath devices on circuit boards and systems. The on-board devices are first coated with high-dielectric-strength poly(2-chloro-p-xylylene) (parylene C) for electrical insulation [11,21]. Then, using successive depositions by thermal evaporation, electroless plating and electroplating, a conformal copper coating is monolithically grown on the parylene C. The conformal copper reaches underneath the devices, creates contact with heat-generating regions and provides thermal dissipation routes from the top, bottom and sides of the package. The monolithic integration of copper also eliminates the need for a thermal interface material (TIM) [22,23], typically required to fill air crevices between two mating solid surfaces even with efficient heat-spreading methods such as heat pipes [24] and vapour chambers [25]. The application of TIMs requires compression to improve contact and reduce thermal impedance, which can compromise the reliability of chip-scale packages. To illustrate the capabilities of our approach, we integrate copper heat spreaders directly on GaN devices, and then characterize their electrothermal performance during steady-state and transient operation. Cooling in quiescent air or water highlights the potential for ultraefficient passive (non-pumped) cooling. Our approach offers improved performance compared with established copper heat sinks and copper-plane heat spreaders; further, by removing the need for large heat sinks, it could potentially be used to create compact and power-dense electronics.

Heat spreader fabrication

To demonstrate the integration of our coating (Fig. 1) with devices using different soldering methods, we designed printed circuit boards (PCBs) with two different GaN power transistors: a top-cooled GS66508T surface-mount device (SMD) and a top-cooled EPC2034 ball grid array (BGA) device. We start by depositing an electrically insulating layer of parylene C to prevent electrical shorting of the devices by the Cu coating (Fig. 1a). Since parylene is deposited by chemical vapour deposition (CVD), it conformally covers all the exposed circuits, ensuring the electrical insulation of the PCBs. Next, we grow Cu on top of the parylene. We first deposit a nanometric seed metal layer via physical vapour deposition (PVD) such as thermal evaporation [26]. Since PVD methods cannot reach the shadowed regions underneath devices (Fig. 1b), proceeding directly to electroplating after thermal evaporation leaves a discontinuous Cu film that fails to drive the electrical current to the top of the devices, resulting in uncoated devices. To overcome this challenge, we added an electroless deposition step that bridges the PVD film and creates a continuous coating (Fig. 1c). We attempted to use electroless deposition without the PVD step but failed to achieve high-quality coatings. After the electroless deposition step, we increase the thickness of the Cu coating to the desired level by electroplating Cu, resulting in a monolithically integrated Cu-coated heat spreader (Fig. 1d). The overall sequence is summarized in the sketch below.
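For reference, the fabrication sequence just described can be captured as a simple data structure. This is a schematic summary of the steps and nominal parameters quoted in the paper, not exact tool settings; the target thickness and derived plating time are illustrative.

```python
# Schematic process flow for the Cu-coated heat spreader (nominal values from
# the paper; the target Cu thickness below is a hypothetical example).
process_flow = [
    {"step": "adhesion promoter", "material": "A-174 silane", "method": "immersion"},
    {"step": "electrical insulation", "material": "parylene C", "method": "CVD",
     "thickness_um": 8.5},   # deposited layers were ~8.49 and ~9.83 um thick
    {"step": "seed layer 1", "material": "Cr", "method": "PVD thermal evaporation",
     "thickness_um": 0.020},
    {"step": "seed layer 2", "material": "Cu", "method": "PVD thermal evaporation",
     "thickness_um": 0.050},
    {"step": "bridge shadowed regions", "material": "Cu", "method": "electroless plating"},
    {"step": "bulk growth", "material": "Cu", "method": "DC electroplating",
     "rate_um_per_h": 10.58},
]

target_cu_um = 172  # hypothetical target thickness (one of the tested coatings)
plating_hours = target_cu_um / process_flow[-1]["rate_um_per_h"]
print(f"Estimated electroplating time for {target_cu_um} um: {plating_hours:.1f} h")
```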
Fig. 1: Cu-coated heat spreader fabrication. (a) Coating of the GaN devices and PCBs with a layer of parylene C for electrical insulation; parylene C is deposited via CVD, conformally covering the boards and reaching underneath the devices. (b) Deposition of a 20-nm-thick Cr layer followed by a 50-nm-thick Cu layer via PVD; the PVD Cu layer acts as a seed layer for the micrometre-thick electroless Cu deposition. (c) Electroless deposition of Cu to cover the shadowed regions underneath the devices and create a continuous Cu film that can drive electrical current from the FR-4 to the top of the device. (d) Further growth of Cu using d.c. electroplating. The schematics are not to scale. (e) Calculated thermal resistance R_P of parylene C and specific thermal resistance based on the measured parylene C thickness t_P. (f) Maximum voltage drop applied across the parylene C layer for the devices tested; solid green bars and shaded red bars correspond to layers that passed (leakage current <1 µA) and failed (leakage current >1 µA) the voltage test, respectively. Each device number corresponds to an experiment with a specific GaN device, parylene C thickness and Cu plating duration (Supplementary Table 2). (g) Cu coating thickness as a function of electroplating time; linear regression gives a Cu plating rate of 10.58 ± 0.32 µm h⁻¹.

Parylene C and Cu coating characterization

The deposited parylene C coatings were 8.49 ± 0.51 and 9.83 ± 0.27 µm thick. The approximate specific thermal resistance R″_P of these layers is 1.01 and 1.17 cm² K W⁻¹, respectively (Fig. 1e), obtained by dividing the parylene layer thickness by its thermal conductivity k_P = 0.084 ± 0.002 W m⁻¹ K⁻¹ [27,28]. The thermal resistance R_P of the parylene is device specific and is obtained by dividing R″_P by the total area of the device across which heat flows from the device to the Cu coating (Supplementary Section 2 provides the calculation details). We measured the leakage current through the parylene C layers to characterize their current-blocking performance under voltages up to 600 V (Methods). Figure 1f shows the maximum voltage applied across the parylene layer of 14 GaN devices and identifies failure (red) or success (solid green) in blocking the leakage current. Coatings resulting in leakage currents below 1 µA were considered successful in electrically insulating the devices. Here 600 V corresponds to a maximum electric field of 70.7 V µm⁻¹, which is less than the dielectric strength of parylene C (~220 V µm⁻¹) [27,29]. The true maximum electric field that the parylene film experiences is higher than 70.7 V µm⁻¹, especially in locations with electric-field concentration.
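Both parylene figures quoted above follow directly from the measured layer thickness; a minimal sketch, using only values from the text:

```python
# Reproduce the parylene C figures quoted above from its measured thickness.
t_p = 8.49e-6   # parylene C thickness (m)
k_p = 0.084     # parylene C thermal conductivity (W/m/K)
V_max = 600.0   # maximum test voltage (V)

R_spec = t_p / k_p   # specific thermal resistance (m^2 K/W)
E_max = V_max / t_p  # nominal electric field across the film (V/m)

print(f"R''_P = {R_spec * 1e4:.2f} cm^2 K/W")  # -> 1.01 cm^2 K/W
print(f"E_max = {E_max * 1e-6:.1f} V/um")      # -> 70.7 V/um vs ~220 V/um strength
```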
Three of the four failed parylene layers (devices 3, 10 and 11) were electroplated with Cu for 47 h, the longest electroplating duration used in this study, suggesting that thicker Cu coatings lead to a lower parylene voltage rating. The electrical failure mechanism for these four devices remains unclear and presents an avenue for future investigation. The Cu thickness is dominated by the electroplating step; we therefore correlated the measured total Cu thickness with the electroplating duration and obtained a nearly linear relationship with a deposition rate of 10.58 ± 0.32 µm h⁻¹ (Fig. 1g). The deposited Cu coatings are free of voids and defects larger than the micrometre range (Supplementary Fig. 2b). The current density of 74 A m⁻² used here produced dense Cu films and ensured void-free deposition [30]. Four-point-probe measurements gave a deposited-Cu thermal conductivity of 127 ± 14 W m⁻¹ K⁻¹ (Methods and Supplementary Section 2).
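As a consistency check (not from the paper), Faraday's law predicts a plating rate close to the measured one at the quoted current density, assuming near-unity current efficiency; the small remaining difference is within the uncertainty of the effective plating area.

```python
# Consistency check: Cu electroplating rate predicted by Faraday's law.
# Assumption: ~100% current efficiency, Cu2+ reduction (z = 2).
j = 74.0      # current density (A/m^2), as quoted in the text
M = 63.55e-3  # molar mass of Cu (kg/mol)
z = 2         # electrons per Cu ion
F = 96485.0   # Faraday constant (C/mol)
rho = 8960.0  # density of Cu (kg/m^3)

rate_um_per_h = j * M / (z * F * rho) * 1e6 * 3600
print(f"Predicted plating rate: {rate_um_per_h:.1f} um/h")
# -> ~9.8 um/h, close to the measured 10.58 +/- 0.32 um/h
```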
Steady-state cooling performance

We tested the cooling performance of the fabricated coated PCBs in both quiescent air and quiescent water at ambient temperature (T_amb ≈ 22 ± 1 °C). Cooling in both media is passive: heat is dissipated from the PCB into natural-convection streams of the fluid. We compared the thermal performance of the Cu coating (Fig. 2b,e) with a commercial 70-µm-thick solder-coated Cu plane fabricated with the PCB (Fig. 2a), as well as with commercial 1.4 × 1.4 × 1.4 cm³ heat sinks (Fig. 2c,f).

Fig. 2: Photographs of the tested configurations. (a–c) Photographs of the 4.8 × 2.5 cm², 70-µm-thick solder-coated Cu-plane heat spreader (a), the Cu-coated heat spreader (b) and a pair of 1.4 × 1.4 × 1.4 cm³ Cu heat sinks (c); the insets show schematics of the cross-sectional material stack-up of the solder-coated Cu plane (top left) and the Cu heat sinks (top right). (d) For the experiments, we designed and fabricated custom PCBs carrying two GaN power transistors: a top-cooled SMD from GaN Systems (GS66508T) and a top-cooled BGA device from Efficient Power Conversion (EPC2034). (e) Top view of the 5.4 × 2.5 cm² Cu-coated heat spreader. (f) To ensure good thermal contact between the GaN devices and the Cu heat sinks, we added layers of gap filler followed by a thermal paste. All scale bars, 1 cm.

We measured the temperature of the outer device surface (T_s) in contact with the cooling fluid: the Cu surface for Cu coatings, the parylene surface for bare devices in water, and the top surface of the device in the other cases. We measured the temperature difference (ΔT = T_s − T_amb) versus the Joule heating power in the device and versus the heat flux based on the device footprint area (Methods and Supplementary Fig. 3). The slopes of linear regressions of these data give the surface-to-ambient thermal resistance. The measured thermal resistances are summarized for the EPC2034 (Fig. 3a,b), where we add the junction-to-case thermal resistance R_JC and the calculated thermal resistance R_P of the 8.49-µm-thick parylene layer to determine the total junction-to-ambient thermal resistance. The thermal resistances of both GaN devices are summarized in Supplementary Fig. 4. Water gives a lower thermal resistance owing to its higher heat transfer coefficient (h) compared with air. For both devices, in both air and water, the spreading capability of the Cu increases with thicker coatings. The thermal resistance decreases with larger coating thickness t_s until it reaches a plateau, where the spreading is no longer appreciably affected by the thickness (Fig. 3a,b). The asymptotic behaviour is the same for the GS66508T and EPC2034 devices, whether operated in air (~15 K W⁻¹) or in water (~2 K W⁻¹). As the coating becomes thicker, it spreads the heat so efficiently that the device footprint area contributes little to the spreading process compared with spreading within the coating. Moreover, the Cu-plane heat spreader, with almost the same area as the Cu coatings, does not provide a substantial reduction in thermal resistance over that of the bare device (device with no spreader), exhibiting a higher thermal resistance than even the thinner Cu coatings. The poor heat transfer performance of the Cu plane is due to the gap that separates the Cu plane from the devices to allow space for the circuitry (Fig. 2a). The commercial Cu heat sink provided a junction-to-ambient thermal resistance comparable to that of the thickest Cu coating in air; however, the Cu coating offers substantial volume savings compared with a commercial Cu heat sink.

Fig. 3: Thermal performance of the EPC2034 monolithically integrated with copper. (a,b) Thermal resistance (R, left axis) and specific thermal resistance (R″, right axis) as a function of Cu coating thickness t_s for the air-cooled (a) and water-immersion-cooled (b) EPC2034 GaN device. The thermal-resistance error bars obtained from linear regression are smaller than the symbol size and are not shown for clarity; the thickness error bars correspond to the standard deviation of the Cu thickness measurements (Supplementary Table 1). TC in the inset of (a) marks the position of the thermocouple in the thermal resistor network; T_J, T_amb and R_JC in the inset of (a) are the junction temperature of the device, the ambient temperature of the fluid and the junction-to-case thermal resistance of the device, respectively. (c,d) Time constant τ as a function of Cu coating thickness t_s for the air-cooled (c) and water-immersion-cooled (d) EPC2034 GaN device. The time-constant error bars were obtained from the curve-fitting analysis; the thickness error bars correspond to the standard deviation of the Cu thickness measurements (Supplementary Table 1). (e,f) Temperature-swing (ΔT = T_s − T_amb) response as a function of time for the air-cooled EPC2034 GaN device with the Cu heat sink and the 55-µm-, 172-µm- and 476-µm-thick Cu coatings at pulsed heat loads of 1.00 Hz (e) and 0.33 Hz (f). The energy per pulse was set to 6.09 ± 1.46 J with a 50% duty cycle. The error bars in the measured ΔT are ±1.1 °C and are not shown for clarity.

Transient cooling performance

We measured the characteristic time constants of the Cu-coated PCBs, the Cu-plane PCB and the Cu-heat-sink PCB to quantify the device-temperature stability and thermal mass. First, we turn on the device and wait until its temperature reaches a steady value; then, we turn the device off and record the temperature as it decays with time (Methods).
Since the maximum temperature (T_max) differs between experiments, we non-dimensionalize the temperature and fit the results to a double-phase exponential decay model, \(\Theta(t) = (T(t) - T_\mathrm{amb})/(T_\mathrm{max} - T_\mathrm{amb}) = A_1 \exp(-t/\tau_\mathrm{d}) + A_2 \exp(-t/\tau_\mathrm{s}) + \Theta_0\) (Supplementary Tables 3–6 list the fitting parameters). A double-phase exponential model describes our results better than a single-phase model, especially for thinner Cu coatings, because of the different timescales involved. The first exponential term, with time constant τ_d < τ_s, describes the device-temperature decay. The temperature gradient at the device level is less pronounced for thicker Cu coatings (for example, 439 µm thick) owing to their better heat-spreading capability; hence a single exponential model is adequate for describing the temperature decay of thicker Cu coatings. The second exponential term corresponds mainly to the heat spreader, so we denote its time constant τ_s. Higher-order exponentials representing the PCB-temperature decay are absorbed into the constant Θ_0. Figure 3c,d shows the heat-spreader time constant τ_s as a function of Cu-coating thickness t_s for the EPC2034; Supplementary Fig. 6 summarizes the time constants for both GaN devices. The time constant keeps increasing because of the continuous increase in the coating thermal mass m_s C_s, where m_s is the heat-spreader mass and C_s its specific heat capacity. The time constant of the Cu plane is comparable to that of the bare device, whereas the Cu-heat-sink PCB in air had the highest time constants, owing to its higher thermal mass. Moreover, although the Cu-coating time constant could exceed 80 s in air for both the EPC2034 and GS66508T with thick coatings, it remained below 8 s in water, where the relative increase in the coating time constant over that of the bare device is smaller than in air. The time constant \(\tau_\mathrm{s} \approx m_\mathrm{s} C_\mathrm{s}/(h A_\mathrm{s})\): the thermal mass divided by the product of the heat transfer coefficient h and the spreading area A_s. The high heat transfer coefficient of water compared with air is why the time constants are relatively low in water. Therefore, although the Cu coatings are effective in stabilizing the temperature in air, this is not true in water. However, even though a fast thermal response is undesirable for preventing sudden temperature surges, the high heat transfer coefficient keeps water (and other dielectric fluids) a viable option for temperature stabilization, because it keeps the temperature fluctuation amplitudes low.
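The decay fitting described above can be sketched with standard tools; the data here are synthetic stand-ins, not measurements from the paper.

```python
# Sketch: fitting the double-phase exponential decay model described above.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 120, 1200)  # time after power-off (s)
# Synthetic "measurement": fast device decay + slow spreader decay + offset.
theta = (0.5 * np.exp(-t / 3.0) + 0.45 * np.exp(-t / 40.0) + 0.05
         + np.random.normal(0, 0.005, t.size))

def model(t, A1, tau_d, A2, tau_s, theta0):
    # Theta(t) = (T(t) - T_amb) / (T_max - T_amb)
    return A1 * np.exp(-t / tau_d) + A2 * np.exp(-t / tau_s) + theta0

p0 = [0.5, 1.0, 0.5, 30.0, 0.0]  # guess: fast device term, slow spreader term
popt, _ = curve_fit(model, t, theta, p0=p0)
tau_d, tau_s = popt[1], popt[3]
print(f"device time constant ~{tau_d:.1f} s, spreader time constant ~{tau_s:.1f} s")
```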
We also tested the temperature response to a pulsed heat load, obtained by exposing the GaN devices to an ON/OFF square-voltage signal. Figure 3e,f shows the device-temperature-swing (ΔT = T_s − T_amb) response as a function of time for the air-cooled EPC2034 at pulsed heat loads of 1.00 Hz and 0.33 Hz, respectively. In these experiments we set the energy per cycle to 6.09 ± 1.46 J with a 50% duty cycle. Thicker copper coatings result in smaller temperature fluctuations, irrespective of the device type, coolant fluid or heat-load frequency. The maximum temperatures observed at higher pulsing frequencies are higher because, at fixed energy per cycle, the average power dissipation is larger. In air, the temperature rises over a single cycle, because the temperature rise exceeds the temperature fall, and it keeps increasing until it reaches a steady temperature oscillation. This steady behaviour is confirmed in our experiments in water, where the temperature maximum does not grow within the 20 s experiment time. (Supplementary Section 4 provides additional details of the pulsed heat experiments.)

Spreading depth of monolithically integrated Cu coatings

The thermal performance of Cu coatings depends on their surface area relative to their heat-spreading capability. To understand this relation, we describe the spreading capability by the distance from the device at which the temperature is sufficiently close to ambient that negligible spreading, and hence negligible heat dissipation, occurs beyond it. Numerically, we define the 10% spreading depth δ_s by \((T(\delta_\mathrm{s}) - T_\mathrm{amb})/(T_\mathrm{max} - T_\mathrm{amb}) = 0.1\). The spreading depth can be obtained numerically for radial heat transfer in a thin Cu coating (Supplementary Section 5). To understand the effect of the different coating parameters on δ_s, we show that \(\delta_\mathrm{s} \approx \delta_\mathrm{s}^*\), where \(\delta_\mathrm{s}^* = (k_\mathrm{s} t_\mathrm{s}/h)^{0.5}\) is the characteristic spreading depth and k_s is the in-plane thermal conductivity of the coating. Hence δ_s increases with coating thickness and thermal conductivity and decreases with the cooling heat transfer coefficient. Supplementary Section 6 provides a full derivation of \(\delta_\mathrm{s}^*\). Figure 4a shows that δ_s is reached within the lateral bounds of the 51-µm-thick Cu coating, where δ_s is smaller than the lateral distance L_s from the device to the boundary of the heat spreader; ΔT = T_s − T_amb drops by 72% going from the device to the right-side edge of the heat spreader. In this heat-spreading mode, the spreader surface-to-ambient thermal resistance R_s decreases with thicker Cu coatings as \(R_\mathrm{s} \approx r_\mathrm{d}^{-1}(h k_\mathrm{s} t_\mathrm{s})^{-0.5}\), where r_d is the effective device radius (Supplementary Section 5). When the heat-spreader lateral dimensions become small compared with the spreading depth, the heat spreader reaches an almost uniform temperature close to the maximum spreader temperature. Figure 4a illustrates this behaviour with the 439-µm-thick Cu coating, where ΔT = T_s − T_amb drops by only 21% from the device to the right-side edge of the heat spreader. In this case the thermal resistance \(R_\mathrm{s} \approx 1/(h A_\mathrm{s})\) is independent of the coating thickness and thermal conductivity, which agrees with the observation that the thermal-resistance curves reach a plateau and cease to decrease with thicker Cu coatings in air (Fig. 3a). The same convergence behaviour observed in water (Fig. 3b) arises from a different mechanism: the higher heat transfer coefficient of water makes \(R_\mathrm{s} \approx r_\mathrm{d}^{-1}(h k_\mathrm{s} t_\mathrm{s})^{-0.5}\) converge faster, so the spreading depth and thermal resistance converge even though the spreading depth is smaller than the lateral distance L_s (Fig. 4b,c). Conversely, even when the spreader dimensions are small compared with δ_s, increasing the coating thickness further adds thermal mass that continues to increase the time constant; hence no plateau is seen in the curves of Fig. 3c,d.
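A minimal sketch of the characteristic spreading depth for the two coating thicknesses shown in Fig. 4a; the natural-convection heat transfer coefficients are assumed, order-of-magnitude values, not numbers from the paper.

```python
# Sketch: characteristic spreading depth delta_s* = sqrt(k_s * t_s / h).
import math

k_s = 127.0  # deposited Cu in-plane thermal conductivity (W/m/K), from the paper
h_air, h_water = 10.0, 500.0  # assumed natural-convection coefficients (W/m^2/K)

for t_s_um in (51, 439):
    t_s = t_s_um * 1e-6  # coating thickness (m)
    for fluid, h in (("air", h_air), ("water", h_water)):
        delta = math.sqrt(k_s * t_s / h)
        print(f"t_s = {t_s_um} um, {fluid}: delta_s* ~ {delta * 100:.1f} cm")
# With these assumptions: 51 um gives ~2.5 cm (air) and ~0.4 cm (water);
# 439 um gives ~7.5 cm (air), larger than the 5.4 x 2.5 cm coated area.
```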
Fig. 4: Heat-spreading analysis. (a) Infrared imaging of the planar temperature distribution for the 51-µm- and 439-µm-thick Cu-coated heat spreaders and its decay with time. At t = 0 s, the device with the 51-µm-thick Cu coating operates at 1.22 W and that with the 439-µm-thick coating at 2 W, both at steady state; we turned off the power and captured the infrared temperature distribution at 30, 60 and 90 s. (b) Calculated 10% spreading depth δ_s as a function of Cu thickness from 20 to 1,000 µm, for devices operating at 1, 2 and 5 W, in air or water. (c) Spreader thermal resistance R_s as a function of the 10% spreading depth δ_s. Three operating regimes were observed: the uniform-maximum-temperature regime when the spreader dimension L_s ≪ δ_s, the semi-infinite spreading regime when L_s > δ_s, and the finite spreading regime between the two. The calculations show that the thermal resistance reaches a plateau in the semi-infinite regime, where \(R_\mathrm{s} \approx r_\mathrm{d}^{-1}(h k_\mathrm{s} t_\mathrm{s})^{-0.5}\) declines quickly owing to the high heat transfer coefficient h; R_s also reaches a resistance limit in the uniform-maximum-temperature regime, where the spreader is at a constant temperature and \(R_\mathrm{s} \approx 1/(h A_\mathrm{s})\).

Design of Cu-coated heat spreaders

The spreading depth is an important figure of merit for designing monolithically integrated Cu coatings as an efficient cooling mechanism. The resulting Cu-coated heat spreader operates under three different regimes distinguished by δ_s, summarized in the sketch after this paragraph. The first is the uniform-maximum-temperature regime, where the dimensions of the heat spreader are much smaller than the spreading depth (L_s ≪ δ_s). Here the thermal resistance R_s is independent of the coating thickness t_s and thermal conductivity k_s, and depends only on the heat transfer coefficient h of the fluid medium and the heat-spreader coverage area A_s. This case is observed with the 439-µm-thick spreader at t = 0 s (Fig. 4a). The second is the semi-infinite spreader regime, where the dimensions of the heat spreader are greater than the spreading depth (L_s > δ_s). Here, increasing A_s does not appreciably reduce the thermal resistance R_s, because the spreading depth is already reached inside the spreader; instead, δ_s is increased, and hence R_s reduced, by increasing either k_s or t_s. This case is observed in the horizontal direction of the 51-µm-thick spreader at t = 0 s (Fig. 4a). The third is the finite spreader regime, where the dimensions of the heat spreader are comparable to the spreading depth (L_s ≈ δ_s). Here R_s depends on all three parameters, t_s, k_s and A_s. This case is observed in the vertical direction of the 51-µm-thick spreader at t = 0 s (Fig. 4a).
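The three regimes lend themselves to a simple decision rule; a minimal sketch, in which the factor-of-ten threshold for "much smaller" is an assumption, not a value from the paper:

```python
# Classify the heat-spreading regime from the spreader dimension L_s and the
# spreading depth delta_s, following the three regimes described above.
def spreading_regime(L_s: float, delta_s: float, much_smaller: float = 0.1) -> str:
    """Return the operating regime of a coated heat spreader.

    much_smaller: threshold for L_s << delta_s (assumed, not from the paper).
    """
    if L_s < much_smaller * delta_s:
        # R_s ~ 1/(h*A_s): independent of coating thickness and conductivity
        return "uniform maximum temperature"
    if L_s > delta_s:
        # R_s ~ (h*k_s*t_s)^(-0.5)/r_d: raise k_s or t_s to reduce R_s
        return "semi-infinite spreading"
    return "finite spreading"  # R_s depends on t_s, k_s and A_s

# Example: 51-um coating in water-like conditions (L_s = 2.5 cm, delta_s ~ 0.4 cm)
print(spreading_regime(L_s=0.025, delta_s=0.004))  # -> semi-infinite spreading
```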
Knowledge of the spreading depth is also essential for predicting the effect of hotspots on neighbouring active or passive components (inductors and capacitors, for example). Some devices cannot tolerate the high temperatures at which wide-bandgap semiconductor power transistors are rated to operate; if heat spreads from hotspots to those low-temperature devices, it can compromise their operation. Furthermore, multiple thermally interacting active devices run hotter than individual standalone devices operating under the same coating and cooling-medium conditions. Thermal coupling is considerable when the devices are located within a distance of each other smaller than δ_s. Designing the heat spreader and cooling medium such that the distance separating the devices is greater than δ_s can alleviate the adverse effects of thermal interactions. The spreading depth also determines how R_s depends on the device location within the spreader. Our computational fluid dynamics (CFD) simulations show that this dependence is strongest when the dimensions of the spreader are close to δ_s (Supplementary Section 8); in this case, the thermal resistance increases as the device is moved from the centre of the spreader towards its edges. The more the spreader dimensions deviate from δ_s (larger or smaller), the weaker the dependence of R_s on device location. Quantifying this dependence when L_s ≈ δ_s is therefore an important step when designing coated heat spreaders. Another aspect, studied through finite element analysis (FEA) simulations, is the effect of our coatings on the thermomechanical reliability of electronics (Fig. 5a–c). The FEA results reveal that the coatings can potentially extend the life of solder joints by coupling the substrate/chip deformations and reducing the plastic strain energy density (Fig. 5d), thereby delaying crack formation, propagation and failure [31]. Although the high Young's modulus of Cu increases the stress on the chip (Fig. 5e), the added stress is far below the Si fracture strength of ~4 GPa [32]. Also, the relatively low stiffness of parylene helps reduce the chip stresses compared with stiffer insulators such as silicon dioxide (SiO2) (Fig. 5e). Supplementary Section 9 provides the complete FEA thermomechanical analysis. An effect not captured by our FEA simulations is the internal stress of the Cu coating, which increases with coating thickness [33]; a high enough internal stress may cause the Cu to pierce the flexible parylene, leading to potential electrical short circuits.

Fig. 5: Coating effect on thermomechanical reliability. (a) Schematic of the model used in the FEA simulations: a silicon (Si) chip (grey) connected to an FR-4 PCB (green) through 25 solder balls (brown), with all exposed surfaces conformally coated with a 10-µm-thick electrical insulation film (parylene C or SiO2) followed by a 150-µm-thick Cu coating. We applied a three-cycle thermal load alternating between −55 and 85 °C, where each cycle consists of a 10 min dwell at −55 °C, a 7 min ramp up, a 10 min dwell at 85 °C and a 7 min ramp down. (b) Contour plots of the equivalent plastic strain ε_p in the solder ball farthest from the Si centre, obtained from the FEA simulation of the model with parylene C and Cu coatings at −55 °C during the first cycle. (c) Contour plots of the von Mises stress σ_v in the Si and solder balls, obtained from the same simulation at −55 °C during the first cycle. (d) Plastic strain energy density accumulated during the second cycle in the solder-joint region of the solder ball farthest from the Si centre, for the no-coating, SiO2-and-Cu, and parylene-C-and-Cu cases; the plastic strain energy density was averaged across the volume of the solder-joint portion 10 µm below the Si.
(e) Maximum von Mises stress in the Si during the entire thermal loading, for the no-coating, SiO2-and-Cu, and parylene-C-and-Cu cases. Methods and Supplementary Section 9 provide additional details of the FEA simulation.

The performance and reliability of the coatings in boiling immersion cooling systems need investigation. The addition of nanostructured CuO can offer attractive boiling performance in immersion cooling by increasing the critical heat flux of water [34,35]. Beyond CuO, the developed Cu coating provides a platform for creating alternative Cu-based heat transfer enhancements directly on electronic devices, including the cathodic deposition of Cu [36] and inverse-opal deposition [37]. Replacement materials for Cu and parylene (such as diamond and ceramics, respectively) can be investigated in the future. The coating materials should be compatible with the electronic application: for example, the effects of integrating parylene/Cu coatings on devices involving radio-frequency signals and electromagnetic-interference shielding need more investigation. Implementation of these coatings also requires consideration of the serviceability of electronics. Replacing a malfunctioning device should not require re-coating the entire PCB; electrothermal co-design is needed to fabricate electrical modules that can mechanically separate from each other and can be easily detached from the system, fixed and re-coated without interfering with other PCB parts.

Conclusions

The low junction-to-ambient thermal resistances measured in quiescent air and quiescent water in our work indicate that coated heat spreaders can achieve efficient and inexpensive passive cooling in electrical systems, saving cooling energy and enhancing product performance. Dielectric fluids, such as mineral oils and hydrofluoroethers, have been extensively used as immersion coolants owing to their dielectric properties. Water offers even higher heat transfer coefficients and heat fluxes [11], but its cooling performance is compromised by its electrical conductivity. Our copper-coating approach enables immersion cooling in water thanks to the insulating parylene layer. Our approach can also be designed to operate in the single-phase cooling regime, cooling electronics while eliminating the boiling mechanical stresses that are common with dielectric coolants. Since the copper coatings are planar, they do not interfere with the stacking of server modules; integration can therefore lead to higher system power densities compared with standard cooling methods (Supplementary Section 10). For example, our experiments show that although a heat sink and the 223-µm-thick Cu coating have similar thermal resistances, the power per unit volume of the copper coating is 740% higher than that of the heat sink. This increase in power density is due to an 89% decrease in the volume occupied by the coating relative to that of the heat sink. We benchmarked our coatings against existing heat sinks and cold plates. The 562-µm-thick copper coatings have volumetric thermal resistances (r″) of 10.1 and 2.7 cm³ K W⁻¹ in air and water, respectively, require zero cooling power, and should outperform existing forced-air convection and indirect single-phase liquid-forced-convection cooling.
The 562-µm-thick copper coating in water achieved the lowest r″ value among all the cooling methods compared in the benchmarking analysis (Supplementary Section 11). Our monolithic integration method can also achieve better specific thermal resistance (cm² K W⁻¹) than vapour chambers attached with TIMs, while removing the need to apply pressure to the electronic device (Supplementary Section 11). Recent developments in additive manufacturing have led to the removal of TIMs by directly depositing metal on semiconductor devices [38,39]. These additive manufacturing techniques can produce thermally optimized solutions, but they cannot coat several devices at once because of electrical short-circuit risks. This challenge prevents additive manufacturing from achieving the volume compactness that coated heat spreaders are capable of, unless an electrically insulating layer is applied.

Methods

Cu coating deposition recipe

The following steps were used to achieve monolithic integration of the Cu coating on the devices and PCB; Kapton tape masks were used wherever coatings were not desired.

Adhesion promoter and parylene C deposition

A-174 silane (also known as γ-MPS, 3-(trimethoxysilyl)propyl methacrylate) was used as an adhesion promoter for parylene C on the PCB. It was deposited through the following steps, recommended by the Marvell Nanofabrication Laboratory. We started by preparing a promotion solution containing isopropyl alcohol, deionized (DI) water and A-174 in a 100:100:1 volume ratio. We stirred the solution for 30 s and allowed it to stand for 2 h. Then we immersed the PCBs in the promotion solution for 30 min, removed them from the solution and allowed them to dry for 30 min. Finally, the PCBs were rinsed in isopropyl alcohol for 30 s and dried with nitrogen. The parts were coated with parylene C within 30 h of depositing the adhesion promoter. The parylene C layer was then deposited on the PCBs by CVD using the SCS Labcoter 2 parylene deposition system (Specialty Coating Systems).

Chromium/copper thermal evaporation

A 20-nm-thick layer of chromium (Cr) was deposited onto the PCBs, followed by a 50-nm-thick layer of Cu, both by PVD thermal evaporation. Chromium acts as an adhesion promoter for Cu. In this step, it is essential to cover with Kapton tape all parts not coated with parylene C, because Cu can penetrate the solder-mask layer of the PCB and short-circuit the underlying Cu traces. A Denton DV-502A vacuum evaporator (Denton Vacuum) was used for this step. Coatings were performed at below 4 × 10⁻⁶ torr, with Cr deposited at ~90 A current and 1.9–2.8 Å s⁻¹, and Cu at ~80 A and 15 Å s⁻¹. We attempted to use electroless deposition without this PVD step but failed to achieve high-quality coatings. Although a plasma surface treatment of parylene C can facilitate electroless copper deposition, the PVD step incorporated in this recipe is beneficial for pattern coating: masking the PVD copper particles yields a copper pattern on the surface and prevents copper deposition in the masked regions during the subsequent steps of the recipe.

Cu electroless deposition

An electroless Cu kit (Caswell Inc.) was used for this process. We followed the steps provided by Caswell Inc. with a shorter waiting time for sensitization and activation to minimize the formation of Cu oxides. The PCBs were not allowed to dry between steps.
First, the PCBs were immersed in the sensitizer solution (acidic stannous chloride) at room temperature for 50 s with no agitation and rinsed in DI water. Next, the PCBs were immersed in the activator solution (acidic palladium chloride) at room temperature for 50 s and rinsed in DI water. Since the sensitizer and activator solutions were applied separately and successively, the catalysis is based on ionic solutions, in contrast to mixed PdCl2/SnCl2 solutions, where colloidal particles form and contribute to the catalysis [40]. Finally, the PCBs were immersed in the electroless Cu solution for 3 min, rinsed in DI water and dried with nitrogen. The electroless Cu solution was obtained by mixing electroless Cu A and electroless Cu B in equal amounts. The solution compositions, from the manufacturer's safety data sheets, are:

| Solution | Composition (concentration) |
|---|---|
| Electroless Cu A | water (>75%), triethanolamine (5–10%), EDTA tetrasodium salt (1–5%), diethanolamine (1–5%), copper sulfate pentahydrate (1–5%), sodium hydroxide (not specified) |
| Electroless Cu B | water (71%), methanol (25%), formaldehyde (4%) |
| Sensitizer | water (90–98%), hydrochloric acid (1–5%), stannous chloride (1–5%) |
| Activator | water (90–98%), hydrochloric acid (1–5%), palladium chloride (1–5%) |

Cu electroplating

Standard electroplating was performed as the last step, transferring Cu from a Cu electrode to the PCBs. The electrolyte solution contains 0.2 M copper(II) sulfate (CuSO4) and 1 M sulfuric acid (H2SO4) (Sigma-Aldrich). A current density of 74 A m⁻² (based on the spreader area) was used, supplied by an HP 6033A d.c. power supply. After electroplating, the boards were rinsed in DI water and dried with nitrogen gas. Following this recipe, we fabricated ten Cu coatings on 8.49 ± 0.51-µm-thick parylene C layers. The only difference between these coatings is the electroplating duration: two coatings were fabricated at each of 2, 11, 17, 22 and 47 h of electroplating. We chose a large gap between 22 and 47 h because plating times in this region result in coatings with the same thermal-resistance and plating-thickness behaviour: the thermal resistance does not decrease appreciably with higher thickness, and the plating thickness is linear in plating time. We masked the PCBs so as to always obtain a 5.4 × 2.5 cm² coating area. We fabricated ten PCBs with Cu thicknesses ranging from 14.24 ± 10.15 to 562.10 ± 71.13 µm (Supplementary Table 1).

Parylene C and Cu coating characterization

A KEYENCE VK-X1000 three-dimensional laser scanning confocal microscope was used to measure the thickness of both the parylene C and Cu layers. For parylene C, we used the microscope in film mode, where the intensity of laser light reflected from samples with a transparent film is measured to determine the film thickness (Supplementary Fig. 2a). The distance between the two light peaks reflected from the surface of the parylene C and the surface of the buried Kapton tape determines the parylene C thickness. We measured the parylene C film thickness on top of the Kapton tape because the tape provides a smooth underlying surface compared with FR-4. The parylene C refractive index of 1.64 [41] was used to correct the measurements. A ×50 objective lens was used for these measurements.
The microscope was also used in manual mode with a ×5 objective lens to measure the thickness of the Cu coatings. The upper and lower limits of the lens elevation were set so that the scan encompasses the levels of both the parylene C and the Cu top surface. The microscope scans the heat spreaders along the centreline of the spreader (parallel to the 2.5 cm dimension), and the thickness is determined by subtracting the elevation of the parylene C from that of the Cu top surface. We calculated the average thickness and its standard deviation for each heat spreader (Supplementary Table 1). The data in Fig. 1g are the average thickness and standard deviation of heat spreaders with the respective electroplating duration. To investigate the electrical insulation performance of parylene C, we conducted a leakage-current test using boards coated with parylene C and Cu (7 boards, 14 devices). We shorted the drain, source and gate of the GaN devices and connected them in series with an N5752A d.c. system power supply (Keysight Technologies), a 100 kΩ resistor and a 34461A digital multimeter (Agilent), and then back to the Cu layer (Supplementary Fig. 2c). We increased the voltage from 0 to 600 V, or until the leakage current increased from the microampere to the milliampere range, indicating that current blocking by the parylene C had failed. We chose this maximum voltage because it compares well with the drain-to-source voltages V_DS that typical GaN devices are rated for (650 V for the GS66508T and 200 V for the EPC2034). The devices labelled with a solid green bar in Fig. 1f withstood 600 V for 5 min with a leakage current below 1 µA; the shaded red bars show the devices that failed to meet this criterion, with their corresponding maximum voltages. The thermal conductivity of the copper layer is derived from its electrical conductivity, determined by the four-point-probe technique (Jandel four-point probe, model RM3-AR), using the Wiedemann–Franz law, k/σ = LT, where k is the material's electronic contribution to the thermal conductivity, σ is the material's bulk electrical conductivity, T ≈ 295 K is the temperature and L = 2.44 × 10⁻⁸ W Ω K⁻² is the Lorenz number. Supplementary Section 2 provides details of the measurement of σ. The measured thermal conductivity, k_Cu = 127 ± 14 W m⁻¹ K⁻¹, is lower than that of bulk multipurpose copper (~400 W m⁻¹ K⁻¹) [42] but comparable to that of sputtered copper films (~150–200 W m⁻¹ K⁻¹) [43]. The reduction in thermal conductivity is due to impurities and defects introduced during electroplating [44].
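A minimal sketch of the Wiedemann–Franz conversion described above; the electrical conductivity value is an assumed stand-in for the four-point-probe result, chosen to reproduce the reported thermal conductivity.

```python
# Sketch: deposited-Cu thermal conductivity from the Wiedemann-Franz law,
# k = L * T * sigma. sigma is an assumed stand-in for the measured value.
L = 2.44e-8     # Lorenz number (W Ohm / K^2)
T = 295.0       # temperature (K)
sigma = 1.76e7  # assumed electrical conductivity of the plated Cu (S/m)

k = L * T * sigma
print(f"k_Cu ~ {k:.0f} W/m/K")  # -> ~127 W/m/K; bulk Cu (~6e7 S/m) gives ~400
```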
Steady-state and transient experiments

We designed and fabricated 10 × 7 cm², 1.6-mm-thick FR-4 PCBs with 70-µm-thick Cu for this study. Each board contains one top-cooled SMD GS66508T and one top-cooled BGA EPC2034 GaN transistor (Fig. 2d). Each device is connected to a five-pin terminal that provides access to the gate, source, drain, Kelvin-source and Kelvin-drain connections of the device. The two Kelvin connections were added to measure the voltage drop as close as possible to the device and to eliminate any added voltage drop from the Cu traces and electric cables during operation. All the Cu coatings were grown on these PCBs. The Cu-plane heat spreader board (Fig. 2a) follows the same PCB design with an added Cu plane that is isolated from the circuit and acts as a heat spreader. For the heat-sink experiments, we start by depositing a layer of TGF 4000 gap filler (BERGQUIST) that surrounds the devices (Fig. 2f). Then a thermocouple is placed on top of the device, and a layer of NT-H2 pro-grade thermal paste (Noctua) covers the device, thermocouple and gap filler. A 1.4 × 1.4 × 1.4 cm³, 14-plate-pin Cu heat sink (Alphacool) is attached on top of the thermal paste, and a thermally insulated vice applies pressure to ensure good thermal contact between the heat sink and the device. The following procedure was implemented to measure the surface temperature and device power in the thermal characterization experiments. A 40 American wire gauge (AWG) (80 µm diameter) K-type perfluoroalkoxy (PFA)-insulated thermocouple (Omega), made from special-limits-of-error wire (±1.1 °C), was attached to the surface to measure its temperature. The thermocouples were calibrated against the temperatures of a 0.95-emissivity black tape measured with an A655sc camera (FLIR Systems). We then covered the thermocouple head with a small bead of NT-H2 pro-grade thermal paste (Noctua) and used a 1.5 × 0.25 cm² aluminium tape on top of the paste to attach the thermocouple to the surface (Fig. 1d). We used a PXIe-1073 chassis and TB-4300 module (National Instruments) to acquire the thermocouple measurements in LabVIEW 2019 SP1, where we processed and recorded the data. The 10 kHz low-pass filter in the TB-4300 module was used to eliminate high-frequency sensor noise. We connected the GaN transistors to the HP 6033A d.c. power supply in diode mode, with the gate shorted to the drain and a positive drain-to-source voltage V_DS, to increase the power dissipation while keeping the terminal currents low. The current flowing in the circuit is measured with the 6033A power supply (±0.25% accuracy) and the voltage drop V_DS is measured across the Kelvin connections using the 34461A digital multimeter (Agilent) (±0.0055% accuracy). The Joule heating power dissipated by the device equals the product of the measured voltage drop and current. In the steady-state experiments, we fixed the device voltage and waited until the average temperature settled. We then recorded one temperature data point every 10 ms for 1 s and calculated their average and standard deviation. The average corresponds to the surface temperature at this power dissipation, and the standard deviation divided by 10 corresponds to its random error, which is small compared with the thermocouple accuracy. We repeated the same process for the other power levels. Finally, the ambient temperature was measured and subtracted from the surface temperatures, and the ΔT versus power and heat flux (q″) plots were created (Supplementary Fig. 3). The heat flux was calculated by dividing the power level by the device footprint area. A linear regression of these curves yields the thermal resistance along with its standard error. We added the junction-to-case thermal resistance (0.50 K W⁻¹ for the GS66508T and 0.45 K W⁻¹ for the EPC2034) and the calculated thermal resistance of the parylene C to the surface-to-ambient thermal resistance to obtain the total junction-to-ambient thermal resistance.
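The regression-and-sum procedure just described can be sketched in a few lines; the (power, ΔT) pairs below are hypothetical, the junction-to-case value is the one quoted for the EPC2034, and the parylene resistance is an assumed placeholder.

```python
# Sketch: thermal resistance from steady-state (power, dT) data, then the
# series sum described above. Data points and R_P are hypothetical.
import numpy as np

P = np.array([0.5, 1.0, 1.5, 2.0])      # device power (W), hypothetical
dT = np.array([7.6, 15.1, 22.4, 30.2])  # T_s - T_amb (K), hypothetical

R_sa, intercept = np.polyfit(P, dT, 1)  # slope = surface-to-ambient resistance
R_JC = 0.45  # junction-to-case resistance for the EPC2034 (K/W)
R_P = 0.9    # parylene layer resistance (K/W), assumed placeholder

R_JA = R_JC + R_P + R_sa
print(f"R_surface-to-ambient = {R_sa:.1f} K/W, R_junction-to-ambient = {R_JA:.1f} K/W")
```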
In the transient experiments, we turned the device power on and waited until its temperature reached a steady state; depending on the device and cooling medium, the maximum temperature was between 55 and 80 °C. Since we used dimensionless temperatures, this spread did not create problems in the later comparison. Then, we started recording the temperature versus time and abruptly turned the device power off. We saved one temperature data point every 100 ms for the experiments in air and every 10 ms for the experiments in water, where the time constant is smaller. The time constant and its standard error were obtained from the double-phase exponential decay fitting discussed earlier. We investigated the transient response of select Cu-coated heat spreaders under pulsed heat loading in air and water. The surface temperature was measured as previously described, but using an NI 9923 (National Instruments) terminal block instead of the PXIe-1073 chassis and TB-4300 module. This substitution allowed higher precision in the temperature measurements, so that fluctuations during transient operation are more easily detected. The temperature was recorded at 1 kHz. We used a triple-output d.c. power supply (OWON ODP6033) to apply an ON/OFF square-voltage signal at frequencies of 1.00 and 0.33 Hz in air, and 1.00 and 0.20 Hz in water, such that the total energy per cycle remained equal to 6.09 ± 1.46 J. To measure the corresponding voltage and current outputs, we used a digital oscilloscope (Keysight Technologies DSOX3054T) connected to voltage and current probes (Keysight Technologies N2843A and 1147B, respectively).
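The pulsed-load settings above fix the energy per cycle, so the implied ON-state and cycle-averaged powers follow by arithmetic alone:

```python
# Sketch: ON-state and cycle-averaged power implied by the pulsed-load settings
# above (fixed energy per cycle, 50% duty); pure arithmetic on quoted values.
E_cycle = 6.09  # energy per cycle (J)
duty = 0.5      # duty cycle

for f in (1.00, 0.33, 0.20):    # pulsing frequencies (Hz)
    P_avg = E_cycle * f         # cycle-averaged power (W)
    P_on = P_avg / duty         # power during the ON half-cycle (W)
    print(f"f = {f:.2f} Hz: P_avg = {P_avg:.2f} W, P_on = {P_on:.2f} W")
# Higher frequency at fixed energy per cycle means higher average power,
# consistent with the hotter maxima reported at 1.00 Hz.
```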
Thermomechanical analysis

We performed FEA simulations to study the effect of the parylene C/Cu coatings on the thermomechanical reliability of electronics, using Ansys (2019 R3) Static Structural for all the cases in this study. The model consists of a 4.6 × 2.6 × 0.51 mm³ silicon (Si) chip connected to a 1.5 × 1.5 × 0.16 cm³ FR-4 PCB through 25 solder balls (Fig. 5a). The solder balls are made of SAC305 and follow the truncated-sphere model [45], with a height of 170 µm and a sphere radius of 194.55 µm. We explored three coating cases in our simulations: SiO2 with Cu, parylene C with Cu, and no coating at all. Both SiO2 and parylene C act as the electrical insulation layer that isolates the electronics from the Cu coating. The electrical insulation and Cu coatings conformally cover all the exposed surfaces and have thicknesses of 10 and 150 µm, respectively. Since the model is symmetric, it was cut by two planes along the centrelines to reduce computation time (Fig. 5a). A fixed support was added at the lowest point of the line where the two symmetry planes intersect, to prevent rigid-body motion. We refined the mesh in the regions of interest (coatings, solder joints and Si); the effect of further mesh refinement on the results is negligible (≤10%). Next, we applied a three-cycle thermal load alternating between −55 and 85 °C, where each cycle consists of a 10 min dwell at −55 °C, a 7 min ramp up, a 10 min dwell at 85 °C and a 7 min ramp down. At each instant of this thermal loading, the temperatures of all bodies in the model were equal. We measured the equivalent plastic strain in the solder balls and Cu coating, the plastic strain energy density in the solder joints, and the von Mises stress in the Si, solder balls, FR-4, Cu and insulation coatings. Supplementary Section 9 provides the details of the thermomechanical analysis.

Data availability

Data supporting the findings of this study are available at . All other data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.

Code availability

MATLAB, Ansys Static Structural and Ansys Icepak input files generated for this work are available at . All other files that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.

**sc-abstract:** None

**sc-section_names:** []

**sc-sections:** []

**sc-authors:** []

**source:** SciNews

**Topic:** Computer

**Citation:** Tarek Gebrael et al, High-efficiency cooling via the monolithic integration of copper on electronic devices, Nature Electronics (2022). DOI: 10.1038/s41928-022-00748-4. Journal information: Nature Electronics.

**Paper_URL:** https://dx.doi.org/10.1038/s41928-022-00748-4

**News_URL:** https://techxplore.com/news/2022-05-solution-cool-electronic-devices-overheating.html

**pr-summary-and-article:** the pr-summary followed by the pr-article, verbatim as given above.
"We are now investigating the reliability and durability of our coatings in specific environments (boiling water, boiling dielectric fluids, thermal cycling, and high-voltage environments) for long periods of time," Gebrael added. "We want to make sure that our coatings retain their superior cooling performance. We are also implementing the coatings with full-scale power modules and GPU cards, whereas we used only simple test boards in the initial work." |
The mystery of the flexible shell

An international research team with participation of the Paul Scherrer Institute PSI has deciphered why the protective cover of the brachiopod Discinisca tenuis becomes extremely soft in water and gets hard again in the air. The study appears today in the journal Nature Communications (DOI: 10.1038/s41467-021-25613-4).

The brachiopod Discinisca tenuis lives on the west coast of Africa. It has a mineral-rich shell that protects it from harmful environmental influences. Bathing the shell in water leads to a structural change in the material: The flat, hard shell becomes so flexible that it can even be folded up without breaking. With the help of the Swiss Light Source SLS, the researchers have deciphered exactly how this transformation takes place.

The phenomenon was discovered by chance a few years ago by Fabio Nudelman, a materials chemist currently at the School of Chemistry, University of Edinburgh in Scotland. Maggie Cusack, who was recently appointed president of Munster Technological University in Ireland, had provided Nudelman with shells of the brachiopod Discinisca tenuis, which originally came from Namibia. When he wanted to wash the hard object, it suddenly became soft and flexible in contact with water. The shell had absorbed liquid and thereby changed its structure. The process was reversible: When the shell dried, it became hard and brittle again.

Together with colleagues from six countries, Nudelman set out to discover what exactly takes place during this unexpected transformation. "In its composition, the shell resembles bone," he explains. "But bone doesn't change its structure when it gets wet." The same goes for clams: If the animals need to adapt the properties of their shell to different environmental conditions, they normally have to rework the material in a lengthy and energetically costly process, by resorbing and redistributing minerals. It doesn't work simply through the absorption of water.

[Image caption: Johannes Ihli and co-author Klaus Wakonig at SLS's cSAXS beamline. Credit: Paul Scherrer Institute/Markus Fischer]

Hybrid material with a special trick

It was so-called cryo-tomography, performed at the Swiss Light Source SLS, that "opened the door to reveal the secret," says Johannes Ihli, a PSI researcher at SLS. With this technique, the researchers examined the material as if under a very high-resolution microscope, and in fact at extremely low temperatures. "At room temperature it would not have been possible, since the high-energy X-ray light would immediately alter the sensitive shell structure," Ihli explains.

The brachiopod's shell, which is no more than half a millimeter thick, consists of a hybrid material: mainly inorganic mineral in which organic polymers made from proteins and sugars are embedded. Bones, clam shells, and teeth are structured in a similar way out of a mixture of organic and inorganic material. The mineral that constitutes the main component of the shell is a type of fluorapatite, similar to the material that makes up the enamel of our teeth. Tiny nanocrystals of this material are arranged in layers. Nudelman compares it to brick walls: "In this analogy, the bricks are the nanocrystals, and the mortar between the bricks consists of organic molecules such as chitin and proteins." As the researchers observed, this "mortar" can absorb large amounts of water, causing it to swell up. Through the storage of water, it changes its structure: It becomes soft, and the bricks become movable with respect to each other.
"Then water acts like a lubricant between the individual nanocrystals," Ihli explains. "The crystals can then slip against each other." Through this movement, the shell becomes flexible. The researchers found a network of pores in the shell that was especially effective in guiding water inside and rapidly distributing it throughout the material. Evolutionary advantage Discinisca tenuis lives in large clusters in tidal zones on the coast where, depending on the tide, the animals are exposed to strong waves or calm waters. The researchers speculate that it is probably advantageous if the animals can quickly adapt the softness or hardness of their shell to the respective situation: "This could prevent damage to the shell and thus be a key to the animals' survival," they write in the study. The phenomenon may even be more widespread than suspected: "We don't know how many other animal species there might be that have this kind of property," says Nudelman. Aside from biology and evolution, the newly gained insights are also of interest for materials science: The development of a hard, brittle material whose stiffness can be controlled could hold promise for many applications. Sports clothing or helmets, for example, might be able to flexibly adapt to movements and always offer the protection required depending on the impact. Harnessing this phenomenon could also prove useful in developing bone-replacement materials. | An international research team has discovered the secret behind the unique properties of the brachiopod Discinisca tenuis, which has a mineral-rich shell that becomes extremely soft in water and hard again in air. Using the Swiss Light Source SLS, the team found that the shell's structure changes when it absorbs water, causing the organic molecules to swell and the nanocrystals to move against each other, making the shell flexible. This phenomenon is reversible, with the shell returning to its hard and brittle state when it dries. The researchers believe that this ability to adapt to different environmental conditions may be an evolutionary advantage for the brachiopod, allowing it to survive in tidal zones with varying wave conditions. The discovery also has potential applications in materials science, such as developing hard materials that can flexibly adapt to movements or impacts, and could even lead to the development of bone-replacement materials. | None | Abstract The function-optimized properties of biominerals arise from the hierarchical organization of primary building blocks. Alteration of properties in response to environmental stresses generally involves time-intensive processes of resorption and reprecipitation of mineral in the underlying organic scaffold. Here, we report that the load-bearing shells of the brachiopod Discinisca tenuis are an exception to this process. These shells can dynamically modulate their mechanical properties in response to a change in environment, switching from hard and stiff when dry to malleable when hydrated within minutes. Using ptychographic X-ray tomography, electron microscopy and spectroscopy, we describe their hierarchical structure and composition as a function of hydration to understand the structural motifs that generate this adaptability. Key is a complementary set of structural modifications, starting with the swelling of an organic matrix on the micron level via nanocrystal reorganization and ending in an intercalation process on the molecular level in response to hydration. 
Introduction

For hundreds of millions of years, nature has evolved a large assortment of organic–inorganic hybrid materials such as bone, teeth, and shells. Each of these biominerals exhibits material properties that have been optimized to aid a particular function, such as navigation, protection, or mechanical support 1,2 . These properties arise from a three-dimensional multi-scale organization of the biomineral's primary building blocks, e.g., inorganic nanocrystals, specialized proteins, and polysaccharides, from the molecular to the millimeter scale 3,4,5 . Biominerals with load-bearing functions are optimized, in particular, with respect to their mechanical properties, so as to provide sufficient stiffness to support the typical mechanical loads in the biomineral's environment and enough toughness to resist crack propagation 3 . This optimization is achieved, first, by incorporating organic biopolymers within the inorganic phase, which increases the toughness of the inherently brittle mineral 6 , and second, by organizing the basic building blocks of the tissue into higher-order structures 7 . This hierarchical organization creates a large number of internal interfaces that help to avoid crack propagation and significantly increases fracture toughness. A further advantage of a hierarchical structure is that it endows the organism with an additional level of constructional control, where the basic building blocks can be assembled into different structural motifs of different mechanical properties 8 .

Altering the material properties of the biomineral in response to environmental stresses generally requires active restructuring by remodeling by the organism: a time- and energy-consuming process that involves the resorption of the existing biomineral, followed by the precipitation of new tissue with a different structure and composition 9,10 .

In this paper, we report that the load-bearing shells of the brachiopod Discinisca tenuis 11 are able to dynamically modulate their mechanical properties in response to a change in the environment without the need for remodeling via resorption and regeneration of the tissue, i.e., they switch from hard and stiff when dry to malleable when hydrated within minutes. Importantly, when hydrated the shell can freely bend to the point that it can be folded in two without fracturing.

The effects that water and the degree of hydration of the organic matrix have on the mechanical properties of biominerals are well recognized 12 . Water, as a component of most biominerals, is known to increase the flexibility of materials such as bone, teeth, and shells. Modulation of hardness and elastic modulus/flexibility 13 by passive control of the water content of the tissue has been suggested to occur in non-mineralized insect cuticle 14 and in mineralized crustacean cuticles, both of which contain organic matrices composed of chitin and proteins 15,16 . In these cases, the changes in mechanical properties are due to the plasticizing role of water and do not involve major changes in the structure of the tissue 12,16 . However, none of these aforementioned mineralized tissues exhibits flexibility that is comparable to that of the mineralized D. tenuis shell in its natural, hydrated state. We hypothesized that the extreme flexibility of the hydrated D. tenuis shell cannot be accounted for solely by the plasticizing effect of water, as in these other examples.
Rather, such reversibility between stiff and flexible as a function of hydration must have its origins in the structure of the D. tenuis shell, with water promoting structural changes at different hierarchical levels. The mechanisms that underpin these changes in mechanical properties as a function of hydration are unknown. Chemically controllable material properties and the causal structural motifs are of significant interest in the design of stimuli-responsive synthetic materials 17 . As such, there is an imperative to determine how hydration alters the structure of the D. tenuis shell, and how these changes facilitate the modulation in mechanical properties.

Using a combination of ptychographic X-ray tomography, electron microscopy, small- and wide-angle X-ray scattering, solid-state nuclear magnetic resonance spectroscopy, and mechanical testing, we characterized the shell's hierarchical structure and composition as a function of hydration, covering the micro- and nanoscales, and gained insight into molecular changes. We demonstrate that water absorption by the shell induces a complementary set of structural modifications, starting with the swelling of an organic matrix on the micron level, via nanocrystal reorganization and restructuring, and ending in the intercalation of water between the organic framework and the mineral on the molecular level. In combination, we propose that these changes endow the shell with its mechanical adaptability. We envisage that these observations will aid and inspire the design of novel synthetic materials with properties that can be modulated in real time.

Results

Global compositional analysis

The shells of D. tenuis (Fig. 1a) are an organic–inorganic composite material, where the mineral phase constitutes about 68 wt% of the dry shell 11 . The mineral phase is composed predominantly of carbonate-substituted fluorapatite crystals in the form of francolite 18 (Supplementary Fig. 1), with minor contributions of amorphous calcium phosphate, octacalcium phosphate, and tricalcium phosphate 19 . The remaining ~32 wt% of the shell consists of various organic fractions, of which chitin, glycosaminoglycans, and proteins make up the dominant portion 11,19,20,21 . While these shells do not exhibit discernible structural motifs at the micron scale (Fig. 1b), high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) has shown that they are hierarchically structured, consisting of a laminated brick-work-like structure. This structure is arranged normal to the shell height, where francolite crystals (Fig. 1c, bright objects) are enwrapped by a network of chitin and proteins (Fig. 1c, dark regions).

Fig. 1: Hierarchical structure of a brachiopod Discinisca tenuis shell. a Top-down optical micrograph of a dry brachiopod Discinisca tenuis shell. Scale bar is 2 mm. b Cross-sectional scanning electron microscopy images of the dry shell at increasing magnification. Left: low magnification image showing the cross-section across the z-axis. Right: high magnification of the area marked by the dotted square. Scale bars are 20 µm (left), 2 µm (right). c High-angle annular dark-field scanning-transmission electron microscopy (HAADF-STEM) images of a thin section from a dry shell. Scale bars are 200 and 20 nm. White arrows point to the organic matrix component (dark areas) surrounding the mineral (bright areas). d Cross-sectional electron micrograph acquired with backscattered electrons (BSE) of a fully hydrated shell thin section folded in two.
Scale bar 50 µm. Source data for this figure are available at the University of Edinburgh DataShare, data identifier 67 .

Importantly, these shells, which are hard and brittle when dry, significantly increase in flexibility upon hydration, to the point where they can be folded in half without fracturing, as seen in Fig. 1d and Supplementary Movie 1. This process is reversible, so that shells can be cycled multiple times between hard/brittle and soft/flexible by dehydrating/rehydrating. Thermogravimetric analysis (TGA) was used to determine the water content of an atmospherically dry shell stored in air as compared to a fully hydrated shell after immersion in H2O for 24 h (Supplementary Fig. 2). The dry shell displayed a gradual water weight loss of 5% from ambient temperature to 200 °C. The hydrated shell exhibits a water weight loss of 24% across the same temperature range. Notably, in the hydrated shell, these losses occur in two distinct steps: the first at 60 °C, corresponding to the loss of physisorbed water and accounting for ~18 wt% H2O, and a more gradual secondary loss between ~100 and 200 °C (~6 wt% H2O). Further weight loss steps, observed for both the dry and the hydrated shell, occurring above 287 and 400 °C, are related to the pyrolysis of the organic components 22,23 , while decarbonation occurs above 700 °C and results in the formation of fluorapatite (Supplementary Fig. 3).

Mechanical behavior characterization

Depth-sensing nanoindentation was used to determine the mechanical properties of the shell as a function of the degree of hydration. Due to practical and geometrical constraints, i.e., the brachiopod's shell is curved, has an irregular surface and has a thickness of 50–500 µm that is both shell- and location-dependent, depth-dependent dynamic nanoindentation measurements were performed on shell cross-sections. These cross-sections expose the laminated structure (Fig. 1b), i.e., the indentation direction is in the plane of the laminae. Measurements were performed on atmospherically dry and fully hydrated shells on the same sample, at the center of the cross-section or shell diameter. Depth-sensing nanoindentation measurements show that both Young's modulus (E_IT) and hardness (H_IT) drop drastically when the shell becomes hydrated (Fig. 2). At the maximum tested load of 30 mN, E_IT is about 26% of the dry value and H_IT shows a similar reduction, down to 22% (Supplementary Table 1). The greater deformability of the hydrated sections under the same applied load is demonstrated also by the increase of the maximum indentation depth from ~1.5 to ~3.5 µm at 30 mN (Supplementary Fig. 4), as well as by the larger residual indentation imprint (Supplementary Fig. 5).

Fig. 2: Depth-dependent dynamic nanoindentation measurements of D. tenuis brachiopod shell samples at different hydration levels. Plotted is the dependence of shell hardness H_IT (red, right) and Young's modulus E_IT (blue, left) as a function of indentation depth for an atmospherically dry shell sample (top, filled symbols) and a fully hydrated shell sample (bottom, open symbols). Measurements were conducted in continuous stiffness mode up to a maximum force of 30 mN. Shown is the average over eight indentation measurements.

Micron-scale characterization

Ptychographic X-ray computed tomography (PXCT) was used to characterize the shell structure at increasing levels of hydration on the micron and sub-micron levels.
PXCT provided quantitative electron density tomograms (in units of n_e Å⁻³) 24,25,26 , with a half-period spatial resolution of roughly 85 nm (Supplementary Fig. 6). These tomograms allow a structural evaluation of the shell and provide information such as local variations in swelling behavior and the hydration degree of specific components in the sample, e.g., organic- and mineral-rich regions. Sample cylinders, greater than 15 µm in diameter, were extracted along the shell width (z-axis in Fig. 1b) and prepared using a nitrogen-cooled micro-lath (Supplementary Movie 2) 27 . These cylinders were then either vacuum-dried or incubated at 70% or 100% relative humidity (RH) for 36 h 28,29 , resulting in samples of increasing hydration level. Lastly, samples were flash-frozen in liquid nitrogen and analyzed using cryo-PXCT at −180 °C.

PXCT-derived electron density tomograms are shown in Fig. 3. As the vacuum-dried sample can be described as a two-component system of mineral and organic fractions, each with approximately known electron densities of 0.78 n_e Å⁻³ (francolite) and 0.46 n_e Å⁻³ (chitin), the measured electron densities can be used to determine the shell's composition globally and locally in consideration of partial volume effects 24,30 . Partial volume effects refer to the occupation of a volume element, e.g., a voxel, by multiple components, leading to a fractional occupancy-related electron density. For example, the average electron density of the vacuum-dried sample of 0.59 n_e Å⁻³ suggests that the vacuum-dried shell consists of ~58 vol% organics, or roughly 33 wt%, as previously reported 11,31 . Moreover, using the vacuum-dried shell as a compositional reference point, the average water content in the hydrated samples can be estimated, i.e., observable changes in electron density are attributed to the incorporation of water into the shell structure. In detail, whereas the shell sample stored at 70% RH contains roughly 17 vol% water or ~4 wt%, the sample incubated at 100% RH possesses up to 50 vol% water or ~12 wt%. Although the hydration level of the sample stored at 70% RH is comparable with that of the atmospherically dry sample shown in Fig. 1c, a lower hydration level was measured by PXCT for the 100% RH sample when compared to the shell sample fully immersed in water shown in Supplementary Fig. 2. This discrepancy is a result of the sample cylinder hydration process, which was used to avoid structural alteration of the shell during the freezing process.

Fig. 3: Electron density tomograms of D. tenuis brachiopod shell samples at increasing hydration level. a Example volume rendering of the imaged, cylindrical, sample pillars. Sagittal cut slices through the center of b a vacuum-dried sample, c a sample incubated at 70% RH, and d a sample incubated at 100% RH. The cutting plane is represented by the yellow line shown in (a). Scale bars are 2 µm. Common to all cuts is a single color scale ranging from white to yellow, representative of electron density values. Shown in e are sample plane-averaged electron density line profiles normal to the laminae structure (pink arrow), alongside secondary derivatives highlighting major fluctuations in electron density. Sample corresponding frequency-normalized (N) electron density histograms are shown in (f).
Further provided in f are the theoretical electron density values of the shell's main components: francolite, 0.78 n_e Å⁻³; high molecular weight polysaccharides, approximated using chitin, ~0.46 n_e Å⁻³; and low-density amorphous ice, 0.31 n_e Å⁻³. PXCT measurements were conducted under cryogenic conditions. The voxel size of all tomograms is (38.8 nm)³. Source data for this figure are available at the University of Edinburgh DataShare, data identifier 67 .

Figure 3a provides an example volume rendering of one of the imaged sample cylinders. Sagittal slices through the acquired electron density tomograms are presented in Fig. 3b–d and Supplementary Movies 3–5. These slices reveal a progressively more defined laminar structure of alternating high electron density, mineral-rich layers and low electron density, organic-rich layers normal to the shell height from the dry to the fully hydrated sample. To quantify the separation of these layers and emphasize local variations in their thickness and composition, layer-averaged electron density profiles normal to the laminae structure were calculated (Fig. 3e). Visible in these profiles is a continued expansion of the organic-rich layer thickness, from ~160 nm in the dry sample to ~180 nm in the partially hydrated sample (70% RH) to ~340 nm in the fully hydrated sample (100% RH). The mineral-rich layers also appear to expand, although to a much lesser extent: from ~90 nm in the dry sample to ~110 nm in the partially and fully hydrated samples. These data suggest the existence of two local hydration environments, associated with either the mineral-rich or the organic-rich layers, each possessing a distinct hydration capacity. The corresponding electron density histograms of the tomograms are supportive of this interpretation (Fig. 3f). With increasing hydration, an evolution of the Gaussian distribution centered at 0.59 n_e Å⁻³ is visible. Partial hydration results in a shift of the Gaussian distribution towards an electron density center at 0.54 n_e Å⁻³. This shift, while retaining the Gaussian distribution, suggests a near-uniform water uptake throughout the shell, i.e., including the organics enwrapping the francolite nanoparticles. Further hydration leads to the development of a broad and asymmetric electron density distribution with three main peaks, centered at 0.42, 0.51, and 0.59 n_e Å⁻³. While the retained peak at 0.59 n_e Å⁻³ and the newly emergent peak at 0.42 n_e Å⁻³ can reasonably be assigned to mineral-rich domains and fully hydrated organic-rich domains, respectively, the persistence and dominance of the intermediate peak are intriguing. It is indicative of either a not yet fully completed hydration process or the presence of not one but two organic-rich layer structures in the shell, each with a different hydration capacity.

To investigate the existence of such a variety in structure, and to establish a correlation between the degree of hydration and the volume expansion of the organic-rich layers, we remapped the electron density tomogram of the fully hydrated sample to percent water weight. Subsequently, we characterized the organic-rich layers in this tomogram, which revealed that within a single layer, hydration is rather uniform. However, hydration can and does vary across layers. Visible in the tomogram is a zonation in hydration along the shell height. Organic-rich layers in close proximity to the shell exterior and extending microns in height compartmentalize up to ~70 vol% of water, or ~24 wt%.
Organic-rich layers toward the shell interior adopt a hydration level of around 8 wt% H2O. This zonation is in agreement with the variety in organic-rich layer structure detected in the histogram (see Methods and Supplementary Fig. 7).

Nanometer-scale characterization

As PXCT measurements are limited in spatial resolution, e.g., a single voxel is occupied by multiple organic matrix-coated francolite crystals (Fig. 1c), we used backscattered electron-scanning electron microscopy (BSE-SEM), offering a higher resolving power, to confirm and expand on these observations. Furthermore, BSE-SEM allowed us to resolve the shell's fine structure and probe the hydration behavior on the nanoscale. Cross-sections of a fully hydrated shell (fixed in 4% formaldehyde) and a dry shell stored in air were prepared through a series of dehydration, critical-point drying, resin embedding, and mechanical polishing steps (details in Methods). Overview micrographs confirm both the presence of organic-rich layers of varying thickness and their volume expansion upon hydration (Fig. 4a, b). The volume expansion is suggested to occur through the uptake of physisorbed water. In addition, given the available cross-sectional view of the entire shell height in these micrographs, it is evident that major organic-rich layers are predominantly located toward both the outer and inner surfaces. Equally visible in these micrographs is a sparse network of transport channels, <400 nm in diameter 11 , throughout the entire shell, exhibiting a tortuous structure running dominantly normal to the laminae structure. See also Supplementary Movies 3 and 4 and the complementary small-angle X-ray scattering measurements shown in Supplementary Fig. 8. In addition, PXCT data show that these pores change in electron density from ~0.05–0.2 n_e Å⁻³ in the vacuum-dried sample to ~0.3–0.35 n_e Å⁻³ in the hydrated sample (Supplementary Fig. 9). This change is in agreement with the pores carrying water in the hydrated state of the shell; for comparison, the electron density of amorphous ice is 0.31 n_e Å⁻³.

Fig. 4: Scanning electron microscopy of polished shell samples. Shown in the left panels are SEM-BSE cross-section micrographs of a a dry shell stored in air and b a fully hydrated shell fixed with 4% formaldehyde. Blue arrows highlight fracture lines incurred during sample preparation. Red arrows are used to point out example areas of high organic content. Blue circles are used to indicate pores in the shell. Scale bars are 20 µm. Provided in the central panels are images at a higher magnification, acquired either with secondary electrons (SE) to stress variations in morphology and surface topography or with backscattered electrons (BSE) to highlight compositional/elemental contrast. Scale bars are 200 nm. The inset highlights the francolite bundle dimension. The scale bar is 50 nm. In the far right panels are average line profiles of the grayscale within the colored boxes in (a) and (b), normal to the laminae direction. Source data for this figure are available at the University of Edinburgh DataShare, data identifier 67 .

Selected-area high magnification SEM images (Fig. 4b), probing the nanoscopic level, not only confirm the swelling of organic components at this level but additionally disclose another feature of the shell's organization. Throughout the shell, francolite crystals appear to be organized in individual, rod-shaped bundles, roughly 25 nm in diameter and 100 nm in length (also seen in Fig.
1c and Supplementary Fig. 8). These bundles are found within both mineral- and organic-rich layers, and appear to expand in volume upon hydration, suggesting that they are composed of francolite nanocrystals and organic matrix elements. PXCT measurements confirm this assessment, as the highest voxel-level recorded electron density is still well below that of francolite, confirming that each bundle possesses a minimum organic content of 15 vol%. The nanocrystals within a bundle are asymmetric in shape, with a short axis of 3–6 nm and a long axis of 14–19 nm, according to Rietveld refinement of powder diffraction data (Supplementary Fig. 1). The spatial arrangement of the francolite crystals, evident from Fig. 4b, is such that the bundles and the nanocrystals preserve some preferential orientation, with their long axes parallel to the laminar direction, upon hydration.

Molecular-scale characterization

Solid-state nuclear magnetic resonance (ssNMR) spectroscopy and differential scanning calorimetry (DSC) were used to investigate the hydration process of the mineral and of the organics surrounding individual francolite crystals on the molecular level. To examine the effect that hydration has on the molecular structure of the organic matrix, we collected 13C solid-state NMR spectra. Presented in Fig. 5a are 13C cross-polarization magic angle spinning NMR (13C CP MAS NMR) spectra of a dry and a fully hydrated shell, revealing a hydration-induced sharpening of amide signals (170–180 ppm) to the extent that two amide signals are resolved for the hydrated shell (170.5 and 174.4 ppm), compared to a single broad signal in this region for the dry shell. The chitin N-acetyl methyl group 13C signal occurs as expected between 20 and 25 ppm and the N-acetyl C=O 13C around 175 ppm. We assign the signals at 22.8 and 174.4 ppm, which sharpen with increasing hydration, to chitin (or other glycan) N-acetyl groups that become more mobile with hydration of the shell.

Fig. 5: Solid-state nuclear magnetic resonance spectroscopy and differential scanning calorimetry of D. tenuis brachiopod shell samples. a 13C cross-polarization magic angle spinning (CP MAS) NMR spectra of an atmospherically dry and a hydrated shell. b Rotational-echo double resonance (REDOR) NMR on an atmospherically dry shell; cyan: reference spectrum; orange: REDOR dephasing spectrum. c Differential scanning calorimetry measurements of an atmospherically dry and a hydrated specimen and of β-chitin extracted from a brachiopod shell.

To examine the interface between the organic matrix and the mineral, 13C{31P} rotational-echo double resonance (REDOR) NMR spectra were collected to determine which organic functional groups are in closest contact with the phosphate anions of the francolite crystals (Fig. 5b). Signals that have a reduced intensity in the REDOR spectrum compared to the reference spectrum of the dry shell (orange spectrum in Fig. 5b) are indicative of 13C sites that are within 0.8–1 nm of 31P. Insufficient signal intensity, even in the reference spectrum of the hydrated shell sample, due to short T2 relaxation times, indicates that hydration results in a significant increase in molecular mobility of all organic components of the shell material. Further, signals from methyl groups at 16.9 and 22.8 ppm show a reduction in intensity in the dry REDOR spectrum (orange spectrum in Fig.
5b), along with both amide carbonyl signals, a broad signal at ~185 ppm from carboxylate groups, and a set of signals corresponding in chemical shift to primary or secondary amine 13Cs (50–60 ppm, as labeled in Fig. 5b). Interestingly, there is no reduction in intensity in the REDOR spectrum of signals from chitin/glycan ring carbons, suggesting that these carbons are more than 0.8–1 nm from phosphate. The amide carbonyl signals are from glycan N-acetyl and/or protein–peptide bond carbonyls, and the 22.8 ppm signal is from the N-acetyl methyl 13C in chitin or other glycans, suggesting that the N-acetyl moieties of chitin/glycan molecules are associated with the mineral. In summary, the ssNMR data indicate that chitin is organized in layers with its N-acetyl groups facing the mineral where possible, creating inter-layer channels that allow the intercalation of water molecules during hydration. This picture is consistent with the chitin network surrounding the crystals absorbing water. The result is increased mobility of the macromolecular chains and thus flexibility of this particular part of the shell when hydrated.

DSC measurements of entire shells and of chitin extracted from D. tenuis shells, presented in Fig. 5c, are in general agreement with the suggested hydration process. Not only does the DSC signal of an atmospherically dry shell (5 wt% H2O) display two endothermic peaks, one at 156 °C and a second at 200 °C, it also matches the expected stepwise transition of β-chitin dihydrate to its anhydrous form 32 . Importantly, these transitions are recorded at significantly reduced temperatures, 137 and 175 °C (Fig. 5c), in the case of a fully hydrated shell (24 wt% H2O). These observations imply that water molecules intercalate between the polysaccharide chains of dihydrate chitin and decrease the inter-chain interactions, making the molecules more mobile. Chemically, this can be explained by the preference of the hydroxyl groups in the pyranose ring to make hydrogen bonds with the more mobile solvent molecules rather than with a neighboring sugar residue 33 .

Lastly, to characterize the effect that hydration has on the mineral, 2D 1H–31P heteronuclear correlation solid-state NMR spectra were collected on shell fragments (Fig. 6). Spectra from atmospherically dry shells show multiple distinct 1H environments correlated with phosphate 31P signals, evidence of a well-structured mineral on the molecular length scale. The 1H environment near 4.7 ppm is similar to that previously observed for water in the amorphous hydrated surface layer of nanocrystalline hydroxyapatite 34 . The 1H signals between ~10 and 15 ppm are from mineral hydrogen phosphate groups with 1H chemical shifts that are similar to those found in the amorphous hydrated surface layer of synthetic nanocrystalline hydroxyapatite 35 , in the hydrated layers of octacalcium phosphate, and in the hydrated calcium hydrogen phosphate phases monocalcium phosphate monohydrate and brushite 36 . There is an additional intriguing 1H signal around 7.5 ppm; a similar 1H chemical shift was observed in hydroxyapatite samples and has previously been tentatively assigned as hydroxyapatite-associated hydrogen phosphate 36 , and 2D 1H–31P correlation spectra of synthetic hydroxyapatites contain intensity in this spectral region as well 34 (Fig. 6a). Upon hydration, the 1H spectrum is dominated by a single water signal which has shifted to being centered at 5.9 ppm (Fig.
6b), indicating a significant change in the mineral water environment; the shift to a higher frequency for the water 1H resonance suggests the water is in a more strongly hydrogen-bonded environment, such as that in the hydrated layers of OCP or the crystalline water in brushite 36 . These observations suggest that, in the hydrated shell, water associated with the mineral is in narrower channels than in the dry shell. These spectral changes are consistent with a relaxation of strain in the mineral structure upon hydration, as suggested by wide-angle X-ray scattering measurements of a shell in its dry and hydrated form (Supplementary Fig. 10), and are possibly the result of cracking or partial hydrolysis of mineral crystals and admission of water into the resulting cracks/hydrolyzed regions.

Fig. 6: 1H–31P heteronuclear correlation solid-state NMR spectra of D. tenuis brachiopod shell samples. Shown are changes in the chemical structure of the atmospherically dry shell (a) upon hydration (b). The correlation degree follows a normalized color map ranging from red to white (0–1).

Discussion

We show that water absorption causes structural changes in the shell at three levels: (1) at the microscopic level, where organic-rich laminae swell due to the uptake of physisorbed water; (2) at the nanoscopic level, where the organic matrix surrounding mineral bundles swells; and (3) at the molecular level, where the chitin network that surrounds the mineral crystals within each bundle becomes hydrated. This results in a mobility increase of the macromolecular chains of the polysaccharide and in the intercalation of water molecules between chitin and the mineral. These insights allowed us to develop a hypothesis as to how such structural changes translate into the observed mechanical adaptability. Our proposed model and the wider significance of the material properties of the shell are discussed below.

To explain the structure–property relationship of the shells of D. tenuis, we propose the following. At the microscopic level, the shell has organic-rich regions which swell when hydrated (Fig. 7i). This results in thicker laminae of low stiffness intercalated with high-stiffness mineral-rich regions, which provides higher flexibility and fracture toughness 37 . At the nanoscale, the mineral-rich regions are composed of francolite crystals assembled into rod-like bundles ca. 25 nm in diameter and 100 nm in length, surrounded by a network of chitin. This intercalation of two different materials with different elastic moduli, low-stiffness chitin surrounding high-stiffness mineral crystals, is responsible for the inherent high toughness of the shell 38 . We propose that the swelling of the organic components at this level, increasing the disorder in the arrangement of these bundles, facilitates the movement of these structural building blocks when a load is applied (Fig. 7ii). At the molecular level, hydration reduces the stiffness of chitin 39 by breaking stabilizing hydrogen bonds between the sugar residues 33 , perhaps analogous to how water breaks the inter-peptide bonds in collagen in dentine, decreasing the hardness and stiffness of the latter. It is therefore conceivable that this reduction in the stiffness of chitin, together with the increase in mobility of the polysaccharide side chains, helps to dissipate mechanical stress 40 .
In addition, we propose that the intercalation of water molecules between the polysaccharide and the mineral decreases the interaction between these two components, so that they can slide/move with respect to each other when under a load (Fig. 7iii). In combination, these structural changes could explain the mechanical adaptability of the shell as a function of the surrounding environment.

Fig. 7: Hydration scheme of a D. tenuis brachiopod shell. Schematic of the proposed hydration mechanism across length scales, from the micron (i), sub-micron (ii), and nano (iii) to the molecular level (iv) 33,68,69 . In (iii) and (iv), we propose that the intercalation of water between the mineral and chitin enables the mineral units to move more freely when a load is applied.

In view of the passive, rapid, and repeated adaptability of the shell, a key factor facilitating the hydration process is the efficient transport of water into and out of the shell. As the mechanical properties of the shell change within minutes after immersion in or removal from water, diffusion through the mineral layers is unlikely to be the dominant transport mechanism. As shown in Fig. 4a, b and Supplementary Fig. 9, the shell is permeated by pores that run predominantly normal to the laminae structure. As these pores become filled with water in the hydrated sample (Supplementary Fig. 9) and are interconnected with the protein and chitin networks, as discussed by Williams et al. 11 , it is conceivable that they serve as hydration channels in the mineralized tissue.

In terms of how hydration affects the mechanical properties of the shell, absorption of water causes a reduction in elastic modulus (E) and hardness by factors of around four. Considering that hardness is generally proportional to yield stress (σ_y), and that the shell diameter or thickness (h) is nearly unchanged following hydration, this suggests that both the dry and hydrated shell possess roughly the same flexibility (f) as defined by Peng and Snyder (2019), f = (2/h)(σ_y/E) 13 . Therefore, the increased flexibility observed upon hydration does not originate from a larger decrease in E compared to σ_y. A decrease in elastic modulus by a factor of four implies that a four times higher elastic strain can be imposed on the hydrated shell compared to the dry one under the same load, possibly explaining why it becomes so easy to bend. Yet the observed change in hardness, positively correlated with yield stress, suggests that plastic deformation develops under a four times smaller stress. In other words, when the bending force is increased, the hydrated shell material enters the plastic deformation regime at about four times smaller stress compared to the dry one. The fact that much larger deformations can be achieved in the hydrated state compared to the dry state can only be explained by considering the presence of some plastic deformation together with a much higher ductility upon hydration (in general, plasticity at smaller stress brings about an enhanced ductility, i.e., the ability to withstand larger plastic deformation without fracture). If the situation were otherwise, it would be possible to achieve the same deformation in the dry shell simply by applying a four times higher force. This is not the case, as the dry shell fractures at small strains.
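To make the flexibility argument above concrete, here is a small numerical sketch (our illustration, not an analysis from the paper) of the Peng–Snyder flexibility f = (2/h)(σ_y/E) before and after hydration. The absolute values of E, H, and h are placeholders; only the ~26% and ~22% retention ratios come from the measurements reported above, and hardness is used as a stand-in for yield stress, as the text assumes.

```python
# Hedged sketch: flexibility f = (2/h) * (sigma_y / E), with hardness H used as
# a proxy for yield stress sigma_y (the proportionality assumed in the text).
E_dry = 30e9      # Pa, placeholder dry Young's modulus
H_dry = 1.0e9     # Pa, placeholder dry hardness
h = 300e-6        # m, shell thickness, roughly unchanged by hydration

E_wet = 0.26 * E_dry   # hydrated modulus ~26% of the dry value
H_wet = 0.22 * H_dry   # hydrated hardness ~22% of the dry value

f_dry = (2 / h) * (H_dry / E_dry)
f_wet = (2 / h) * (H_wet / E_wet)
print(f"f_dry = {f_dry:.1f} 1/m, f_wet = {f_wet:.1f} 1/m, "
      f"ratio = {f_wet / f_dry:.2f}")   # ratio ~0.85
```

Because E and σ_y drop by nearly the same factor, f is almost unchanged (ratio ~0.85), which is why the large bendability of the wet shell is attributed to added plasticity and ductility rather than to this flexibility metric itself.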
In summary, we suggest: (1) the large deformations in the hydrated shell are never purely elastic but include a certain degree of plasticity; (2) the hydrated shell has a much higher ductility and does not fracture immediately when entering the plastic deformation regime, in contrast to the brittle dry shell; (3) the combined changes of elastic modulus, hardness and ductility with hydration/dehydration determine the macroscopic mechanical behavior of the shell.

It is interesting to compare this behavior with that of other biominerals. Dentine, for example, possesses a similar ratio of organic to mineral content, yet displays a decrease in elastic modulus by a factor of ~1.5, and in hardness by a factor of ~3, upon hydration 41 . The crustacean endocuticle, which has a thickness that is more comparable to that of the D. tenuis shell (ca. 200–300 µm 16 , whereas the shell is 50–500 µm thick), and a more comparable passive change in flexibility with changes in hydration, displays a decrease in stiffness by a factor of ~1.4 upon hydration, while the yield stress changes by a factor of ~4 16 . In the latter, these changes are driven mainly by the interaction of water with the protein that is associated with the chitin fibers 39 , and ingress of water breaking hydrogen bonds between macromolecular chains 33 or inter-peptide bonds 41 is a common mechanism by which water increases the flexibility of biominerals. In these cases, water chiefly acts as a plasticizer, increasing the viscoelasticity and plasticity of the organic matrix components 12 through changes in intermolecular hydrogen bonding in the tissue, as discussed above.

What sets the D. tenuis shells apart from other biominerals is the extent of the flexibility caused by hydration, which is not seen in other mineralized tissues, and the speed of the structural reorganization underlying the change in flexibility. To put it simply, mollusc shells, bone and dentine cannot bend in half without breaking, as the hydrated D. tenuis shell can, no matter what their water content. Considering that bone and dentine have similar amounts of organic and inorganic content as the D. tenuis shell (60–70% inorganic and 30–40% organic), it is clear that the ability of the shell to freely bend is due not only to its high organic content and the plasticizing effect of water. The differences in mechanical behavior must also be due to how the building blocks of each material are organized. As described above, the arrangement of the francolite nanocrystals into discrete bundles enwrapped by a layer of the organic matrix provides the shell with separate blocks that can move with respect to each other once a load is applied, and the stiffness of chitin and the chitin–mineral interactions are weakened by hydration. Moreover, the mineral itself appears, from solid-state NMR measurements, to restructure reversibly upon hydration/dehydration. We speculate that the activation energy for such restructuring comes from the relaxation of crystal strain upon water ingress and the resulting formation of hydrated channels or layers with dimensions on the scale of the water channels and layers in OCP or brushite, for instance. Bone, on the other hand, is made of collagen fibrils with intra- and extra-fibrillar mineral 42,43,44,45 . The mineralized collagen fibrils are further arranged into higher-order structures, such as unidirectional ordered fibrils or plywood structures, further arranged into super-structures 3,44 .
This hierarchical organization results in a material with high stiffness that resists deformation when under a load. Similarly, other biominerals such as the nacreous layer in molluscs have their mineral building blocks arranged such that they cannot move much with respect to each other when under a bending load. The nacre of bivalves, for example, is made of aragonite tablets that are significantly larger than the crystalline units of the D. tenuis shells, 500 nm in thickness and 5–15 μm in diameter 46 , and are staggered with respect to each other, which readily prevents any deformation on the scale reported here for the brachiopod shell. In addition, while these tissues are naturally hydrated, they have not been reported to take up further significant amounts of water as the D. tenuis shell does. As demonstrated, an increase in water content is a prerequisite for increasing their flexibility. As for the non-mineralized insect cuticle and the mineralized crustacean cuticles, these tissues are composed of chitin–protein fibers aligned in parallel arrays forming horizontal planes that are stacked vertically with a gradual rotation of the long axis of the fibers around the normal axis of the cuticle, leading to a twisted plywood structure 16,39 . Their hardness and stiffness depend on the stacking density of the chitin–protein layers and on the degree of mineralization 47 . These two structural factors do not change upon hydration and dehydration.

In summary, we conclude that the responsiveness of the mechanical properties of the D. tenuis shell to hydration, when compared to other biominerals, is a combination of several factors: (1) the high amount of organic content; (2) the plasticizing role of water on the organic matrix and mineral; (3) the weakening of the interaction between the mineral and chitin upon hydration, allowing the former to move more freely under a load; and (4) the unique hierarchical structure of the shell, with crystals surrounded by a chitin matrix at the nanoscale, and organic-rich layers at the micron scale. While factors (1) and (2) are common among other biominerals, factors (3) and (4) are unique to the D. tenuis shell.

To address the lingering question as to why D. tenuis brachiopods evolved and currently possess shells of such mechanical adaptability, further investigations are needed. Nonetheless, one can draw a parallel with the brachiopod Lingula anatina, which also has a phosphatic shell. A certain degree of mechanical adaptability and flexibility in the shells of L. anatina was proposed to be needed for the burrowing of the animal into sediment 37 , for its infaunal habitat. It is likely that the mechanical properties of D. tenuis shells are similarly suited to their environment. Large clusters of D. tenuis specimens, attached only to each other, inhabit the inter-tidal zone. In such a high-energy environment, with extreme ranges of hydration throughout diurnal tidal cycles, as mimicked in the presented experiments, environment-adapting flexibility could be advantageous to prevent shell damage and therefore could be key to the survival of the animals. Thus, differences between the ecological niches of the two species of phosphatic brachiopods, D. tenuis and L. anatina, or even when compared to calcitic-shelled species, could mean different mechanical requirements for their shells and hence explain why D. tenuis shells have higher flexibility when hydrated than those of other brachiopods.
In conclusion, we report on the mechanical behavior of the shells of D. tenuis, which display a passive adaptability of their mechanical properties, unusual among biominerals, as a function of hydration and hence of the environment they find themselves in. Mechanical testing and characterization of the structure of the shell as a function of hydration level, at several length scales from the micron to the molecular level, revealed that these shells conform to a hierarchical, non-uniform construction, wherein water absorption within distinct environments facilitates structural adaptation to a changing environment. The discovered design motifs, and their modification upon water absorption, which underpin the properties of this natural composite material, will help materials scientists to design and synthesize novel stimuli-responsive materials that are as tough and adaptable as these brachiopod shells.

Methods

Materials

Brachiopod D. tenuis shells were collected in Swakopmund, Namibia by Sir Alwyn Williams. The soft tissue was removed, and shells were stored in air.

Electron microscopy (EM)

SEM was performed either on a Quanta 650 FEG SEM or on a Zeiss Crossbeam 550 cryoFIB-SEM. HAADF-STEM was performed on a Thermo Fisher Scientific Scios Dual Beam FIB-SEM.

Sample preparation for electron microscopy

Shell fragments were incubated in MilliQ water overnight at room temperature, followed by fixation in 4% formaldehyde for 4 h. The specimens were then gradually dehydrated in ethanol following a dilution series (50%, 70%, 96%, and 100% ethanol), followed by critical point drying using a Polaron critical point dryer. Subsequently, the shells were embedded in resin, polished, and coated with carbon. To check that the sample preparation procedure does not introduce artifacts such as local sample shrinkage, control experiments on non-treated samples were performed. A comparison between dry shells in their native state and after sample preparation showed no discernible differences. Further, electron microscopy observations, where possible, were cross-validated and confirmed by cryo-PXCT. Moreover, both dry and hydrated samples underwent the same sample preparation, i.e., samples would be largely equally affected by potential preparation artifacts.

Thermogravimetric analysis (TGA)

TGA was performed using a Thermal Analysis SDT Q600 instrument. 5–10 mg of ground shell was subjected to a heating rate of 10 °C/min under nitrogen.

Fourier transform infrared spectroscopy (FT-IR)

FT-IR spectroscopy measurements were performed using an FTIR Nicolet iS10. Approximately 2 wt% of ground shell fragments was mixed with KBr and pressed into a transparent disk 48 . Data were acquired between 4000 and 400 cm⁻¹ with a spectral resolution of 4 cm⁻¹.

Differential scanning calorimetry

DSC measurements were performed using a Thermal Analysis SDT Q600 instrument. 2–5 mg of ground shell was analyzed. Samples were heated up to 240 °C at a rate of 10 °C/min. β-chitin extracted from D. tenuis shells was used as a standard. Chitin extraction was done by decalcifying 10–15 mg of shells in 0.55 M HCl twice for 30 min, then once for 1 h, at room temperature. The remaining organic material was then incubated in 0.3 M NaOH at 80 °C under reflux for 1 h. The extracted chitin was dried at 80 °C for 1 h 49 .

Powder X-ray diffraction (PXRD)

Diffraction patterns were collected using an X′Celerator detector fitted on a PANalytical X′Pert Pro diffractometer, using Cu-Kα radiation generated at 40 kV and 40 mA.
Data were collected within the 2θ range from 5° to 70° with a step size of 0.02° and a counting time of 1200 s. Fixed anti-scatter and divergence slits of 1/16° were used with a 10 mm beam mask.

Small-angle X-ray scattering (SAXS)

Monochromatic radiation with a wavelength of λ = 1.54 Å was produced by a rotating Cu anode (MicroMax 007HF). Scattering patterns were acquired using a Dectris PILATUS 300K detector, with a pixel size of 172 μm, placed at different sample-to-detector distances between 0.5 and 1.6 m for each dataset. The obtained 2D SAXS patterns were azimuthally integrated, normalized with respect to the incident beam intensity and acquisition time, and then merged to construct a single 1D intensity profile I(q) vs. q covering an effective scattering vector range of 0.0035 to 1 Å⁻¹. For each sample, at least three SAXS datasets were collected across the shell width. Two representative intensity profiles, i.e., one per sample, are shown in Supplementary Fig. 8.

Wide-angle X-ray scattering (WAXS)

WAXS data were collected at the 11-BM Complex Materials Scattering (CMS) beamline at the National Synchrotron Light Source II (NSLS-II), Brookhaven National Laboratory. Data were collected at 13.5 keV with a beam footprint on the sample of 0.2 × 0.2 mm. The point acquisition time was 10 s. WAXS patterns were recorded with a Pilatus 800k detector placed 0.26 m downstream of the sample. The obtained 2D WAXS patterns were azimuthally integrated and normalized with respect to the incident beam intensity and acquisition time. The resulting 1D intensity profiles are shown in Supplementary Fig. 10.

Solid-state nuclear magnetic resonance spectroscopy

All experiments were carried out on shells that had been stored at −80 °C, packed into inserts for 4 mm zirconia rotors in a Bruker double-resonance MAS probe on a Bruker AVANCE II 400 MHz wide-bore spectrometer. CP MAS: MAS frequency 10 kHz, 1H 90° pulse 2.5 μs, contact time of 2.5 ms with a ramped pulse on 1H and a square pulse on 13C at 70 kHz spin-lock field strength, 100 kHz field strength SPINAL64 decoupling during acquisition with 4.4 μs pulses, recycle delay 2 s. Heteronuclear correlation (Hetcor) spectra were recorded at 400 MHz, 10 kHz MAS and 290 K. NMR parameters were: 1H 90° pulse length and decoupling 86 kHz, 1H contact pulse 54 kHz, 31P contact pulse 44 kHz, 200 µs contact time. The Lee–Goldburg (LG) RF field was set to 50 kHz, with an offset for proton evolution under LG of −2000 Hz.

Depth-dependent dynamic nanoindentation

To prepare the physical cross-sections, the two large shell surfaces were first covered with a thin layer of plasticine (about 1 mm in thickness) before the sample was embedded in epoxy resin and cut to size using a rotating diamond blade. The exposed cross-sections were polished with silicon carbide sandpaper of two decreasing grit sizes (P600 and P1220) and finally with alumina colloidal suspensions with grain sizes of 3, 1, and 0.05 µm. Finally, the plasticine was removed with the help of a thin curved dissecting needle, generating two cavities (Supplementary Fig. 11). These cavities were filled with double-distilled water for the measurements in hydrated conditions. The same sample was then used for the measurements in dry conditions upon water removal and overnight air drying. Depth-dependent mechanical properties of dry and fully hydrated shell sections were measured with a nanoindentation tester (model NHT-TTX by CSM Instruments) equipped with a Berkovich diamond tip.
The instrument was operated in continuous stiffness mode up to a maximum applied load of 30 mN. During the 60 s loading phase, the oscillatory force modulation had an amplitude equal to 10% of the current force value and a frequency of 5 Hz, while the unloading phase was carried out linearly in 15 s. The instrumented values of the elastic Young's modulus E_IT and hardness H_IT were determined as a function of the indentation depth by Oliver–Pharr dynamic analysis 50 of the loading phase. The mechanical properties of the tested shell samples were completely reversible in response to the applied hydration–drying cycle.

Ptychographic X-ray computed tomography (PXCT)
Sample preparation for PXCT. Brachiopod shells were first mechanically fractured and cut into mm-sized pieces. Pieces from adjacent areas, taken from the environment-facing side along the shell width, were then glued onto individual custom-built tomography pins 51. The epoxy was pre-cured and only applied to the top of the tomography pin to avoid sample contamination. The sample-loaded pins were mounted on a custom-built micro-lathe and milled under cryogenic conditions 27. The resulting cylindrical pillars had a diameter of ~20–40 μm and a sample height of ~50–80 μm. The prepared pillars were then either vacuum dried or incubated in desiccators containing salt solutions to create an atmosphere of 70% or 100% relative humidity for 36 h 28, 29. The pillars were subsequently frozen in liquid nitrogen to lock the set hydration level in place for the duration of the PXCT measurement. No signs of crystalline ice on the surface of the prepared pillars were recorded.

PXCT setup and data acquisition. PXCT experiments were carried out at the cSAXS beamline of the SLS. The photon energy was 6.2 keV. The horizontal aperture of slits located 22 m upstream of the sample was set to 20 μm in width to coherently illuminate a Fresnel zone plate, the latter being 220 μm in diameter with an outermost zone width of 60 nm 52. Coherent diffraction patterns were acquired using a 500k Eiger detector 53 with a 75 μm pixel size, 7.284 m downstream of the sample. A flight tube was positioned between sample and detector to reduce air scattering and absorption. Measurements were carried out using the positioning instrumentation described in Holler et al. 54, 55. The samples were imaged in an in-vacuum version of this setup at a temperature of −180 °C. Sampling positions were set using a Fermat spiral scanning grid 56 with an average step size of 2 μm. Tomography projections were acquired using a binary acquisition strategy, as described by Kaestner et al. 57, with two nests of projections. Around 600–1200 projections were acquired, depending on the sample diameter. Each projection was obtained from a ptychographic scan of ~400–800 diffraction patterns, each with an exposure time of 0.1 s.

Ptychographic image and tomogram reconstruction. From each diffraction pattern, a region of 512 × 512 pixels was used in the ptychographic reconstruction of the acquired projections. The resulting pixel size is (38.8 nm)². Reconstructions were obtained with 300 iterations of the difference-map algorithm 58 followed by 300 iterations of maximum-likelihood refinement using two probe modes 59, 60. Reconstructions were performed using the PtychoShelves package 61. Prior to tomographic reconstruction, the complex-valued projections were aligned and processed as described in Guizar-Sicairos et al. 62.
Horizontal alignment was ensured based on tomographic consistency 63. Tomographic reconstruction of the phase projections was performed using a modified filtered back-projection (FBP) algorithm 62. To mitigate noise in the reconstruction, a Hanning filter was used. The tomograms provide the 3D distribution of the refractive index decrement, δ(r), and of the electron density away from sample-relevant absorption edges, as in the present case 24, 25.

PXCT dose estimation. The X-ray dose imparted to a shell sample during tomogram acquisition was estimated to be on the order of 10⁶ to 10⁷ Gy. The estimated dose is based on the average area flux density of each scan and the assumed mass density of the specimen 64. Here, the specimen was assumed to consist of hydroxyapatite and chitin.

Estimation of spatial resolution. The half-period spatial resolution of the ptychographic tomograms was estimated by Fourier shell correlation (FSC) 65. The full dataset of angular projections used for the tomographic reconstructions was divided in half, and two independent tomograms with double angular spacing were reconstructed. Then, the correlation between these two tomograms in the Fourier domain was calculated, and the resolution was estimated from the intersection with a set threshold. The threshold criterion for the FSC was the ½-bit criterion 65. FSC line plots are shown in Supplementary Fig. 6.

Tomogram analysis. Owing to the superior spatial resolution, the analysis focused on the retrieved phase, i.e., electron density, tomograms 30. To exclude any potential sample preparation artifacts, we extracted sub-volumes (Fig. 3a) from the center of the imaged volume. For Fig. 3e, electron density line profiles normal to the laminae structure were obtained by calculating the radially averaged electron density of the identifiable layers. The average layer thickness was calculated using a parallel-plate model. Overall sample composition and hydration were determined by linear-combination fitting using the theoretical electron density values of known shell components as well as the measured electron densities of the fully dry shell as reference points. Equally, component matching was achieved by comparing calculated electron densities of known shell components, i.e., francolite, organic matrix (approximated using the molecular weight and density of chitin), and water/ice, with the measured electron densities of manually isolated, i.e., visually pure, components in the tomogram where possible. Supplementary Fig. 7 shows local variations in hydration level for the fully hydrated sample, calculated from the respective electron density tomogram using the average electron density of the dry sample and the electron density of amorphous ice as reference values. The volume percentages obtained were converted to water weight percent using tabulated density values of the shell components. The resulting hydration tomogram was threshold-segmented for visualization purposes and to determine the swelling degree of structurally coherent layers as a function of hydration level using thickness analysis 26, 66.

Data availability
The electron microscopy and X-ray computed ptychography data generated in this study can be retrieved from the University of Edinburgh DataShare. The remaining data that support the findings reported in this study are available within the paper and its supplementary information files.
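As an illustration of the Fourier shell correlation procedure described above, the following is a minimal NumPy sketch; it is not the authors' code. The synthetic test volumes and helper names are hypothetical, the ½-bit threshold constants follow the van Heel and Schatz criterion cited in the Methods, and the half-period resolution is read off at the first crossing of the FSC curve with the threshold, converted to nanometres using the 38.8 nm voxel size quoted above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fourier_shell_correlation(vol1, vol2, n_shells=64):
    """FSC between two equally shaped 3D volumes.

    Returns shell-centre spatial frequencies (cycles/voxel, up to the
    Nyquist frequency of 0.5), the FSC curve and the voxel count per shell.
    """
    assert vol1.shape == vol2.shape
    f1 = np.fft.fftshift(np.fft.fftn(vol1))
    f2 = np.fft.fftshift(np.fft.fftn(vol2))

    # Radial spatial-frequency magnitude of every Fourier-space voxel.
    axes = [np.fft.fftshift(np.fft.fftfreq(s)) for s in vol1.shape]
    radius = np.sqrt(sum(g**2 for g in np.meshgrid(*axes, indexing="ij")))

    edges = np.linspace(0.0, 0.5, n_shells + 1)
    shell = np.digitize(radius, edges) - 1          # shell index per voxel
    freqs = 0.5 * (edges[:-1] + edges[1:])
    fsc = np.zeros(n_shells)
    n_vox = np.zeros(n_shells, dtype=int)
    for i in range(n_shells):
        mask = shell == i
        n_vox[i] = mask.sum()
        num = np.sum(f1[mask] * np.conj(f2[mask]))
        den = np.sqrt(np.sum(np.abs(f1[mask])**2) * np.sum(np.abs(f2[mask])**2))
        fsc[i] = num.real / den if den > 0 else 0.0
    valid = n_vox > 0                               # drop empty shells
    return freqs[valid], fsc[valid], n_vox[valid]

def half_bit_threshold(n_vox):
    """1/2-bit information threshold curve (van Heel & Schatz, 2005)."""
    root_n = np.sqrt(np.maximum(n_vox, 1))
    return (0.2071 + 1.9102 / root_n) / (1.2071 + 0.9102 / root_n)

# Stand-ins for the two half-dataset tomograms: a smooth phantom plus
# independent noise per half, so the FSC decays and crosses the threshold.
base = gaussian_filter(np.random.rand(64, 64, 64), sigma=2.0)
half_1 = base + 0.05 * np.random.rand(64, 64, 64)
half_2 = base + 0.05 * np.random.rand(64, 64, 64)

freqs, fsc, n_vox = fourier_shell_correlation(half_1, half_2)
below = np.where(fsc < half_bit_threshold(n_vox))[0]
if below.size:
    voxel_nm = 38.8  # voxel size from the Methods above
    print(f"half-period resolution ≈ {voxel_nm / (2 * freqs[below[0]]):.1f} nm")
```

In the workflow described above, the two input volumes would be the two tomograms reconstructed independently from the two halves of the angular projections rather than the synthetic stand-ins used here.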
Johannes Ihli et al, Mechanical adaptation of brachiopod shells via hydration-induced structural changes, Nature Communications (2021). DOI: 10.1038/s41467-021-25613-4. http://dx.doi.org/10.1038/s41467-021-25613-4 https://phys.org/news/2021-09-mystery-flexible-shell.html

An international research team has discovered the secret behind the unique properties of the brachiopod Discinisca tenuis, which has a mineral-rich shell that becomes extremely soft in water and hard again in air. Using the Swiss Light Source SLS, the team found that the shell's structure changes when it absorbs water, causing the organic molecules to swell and the nanocrystals to move against each other, making the shell flexible. This phenomenon is reversible, with the shell returning to its hard and brittle state when it dries. The researchers believe that this ability to adapt to different environmental conditions may be an evolutionary advantage for the brachiopod, allowing it to survive in tidal zones with varying wave conditions. The discovery also has potential applications in materials science, such as developing hard materials that can flexibly adapt to movements or impacts, and could even lead to the development of bone-replacement materials.
An international research team with participation of the Paul Scherrer Institute PSI has deciphered why the protective cover of the brachiopod Discinisca tenuis becomes extremely soft in water and gets hard again in the air. The study appears today in the journal Nature Communications.

The brachiopod Discinisca tenuis lives on the west coast of Africa. It has a mineral-rich shell that protects it from harmful environmental influences. Bathing the shell in water leads to a structural change in the material: the flat, hard shell becomes so flexible that it can even be folded up without breaking. With the help of the Swiss Light Source SLS, the researchers have deciphered exactly how this transformation takes place.

The phenomenon was discovered by chance a few years ago by Fabio Nudelman, a materials chemist currently at the School of Chemistry, University of Edinburgh in Scotland. Maggie Cusack, who was recently appointed president of Munster Technological University in Ireland, had provided Nudelman with shells of the brachiopod Discinisca tenuis, which originally came from Namibia. When he wanted to wash the hard object, it suddenly became soft and flexible on contact with water. The shell had absorbed liquid and thereby changed its structure. The process was reversible: when the shell dried, it became hard and brittle again.

Together with colleagues from six countries, Nudelman set out to discover what exactly takes place during this unexpected transformation. "In its composition, the shell resembles bone," he explains. "But bone doesn't change its structure when it gets wet." The same goes for clams: if the animals need to adapt the properties of their shell to different environmental conditions, they normally have to rework the material in a lengthy and energetically costly process, by resorbing and redistributing minerals. It doesn't work simply through the absorption of water.

[Image caption: Johannes Ihli and co-author Klaus Wakonig at the SLS's cSAXS beamline. Credit: Paul Scherrer Institute/Markus Fischer]

Hybrid material with a special trick

It was so-called cryo-tomography, performed at the Swiss Light Source SLS, that "opened the door to reveal the secret," says Johannes Ihli, a PSI researcher at SLS. With this technique, the researchers examined the material as if under a very high-resolution microscope, at extremely low temperatures. "At room temperature it would not have been possible, since the high-energy X-ray light would immediately alter the sensitive shell structure," Ihli explains.

The brachiopod's shell, which is no more than half a millimeter thick, consists of a hybrid material: mainly inorganic mineral in which organic polymers made from proteins and sugars are embedded. Bones, clam shells, and teeth are structured in a similar way, out of a mixture of organic and inorganic material. The mineral that constitutes the main component of the shell is a type of fluorapatite, similar to the material that makes up the enamel of our teeth. Tiny nanocrystals of this material are arranged in layers. Nudelman compares it to brick walls: "In this analogy, the bricks are the nanocrystals, and the mortar between the bricks consists of organic molecules such as chitin and proteins." As the researchers observed, this "mortar" can absorb large amounts of water, causing it to swell up. Through the storage of water, it changes its structure: it becomes soft, and the bricks become movable with respect to each other.

"Then water acts like a lubricant between the individual nanocrystals," Ihli explains. "The crystals can then slip against each other." Through this movement, the shell becomes flexible. The researchers found a network of pores in the shell that was especially effective in guiding water inside and rapidly distributing it throughout the material.

Evolutionary advantage

Discinisca tenuis lives in large clusters in tidal zones on the coast where, depending on the tide, the animals are exposed to strong waves or calm waters. The researchers speculate that it is probably advantageous if the animals can quickly adapt the softness or hardness of their shell to the respective situation: "This could prevent damage to the shell and thus be a key to the animals' survival," they write in the study. The phenomenon may even be more widespread than suspected: "We don't know how many other animal species there might be that have this kind of property," says Nudelman.

Aside from biology and evolution, the newly gained insights are also of interest for materials science: the development of a hard, brittle material whose stiffness can be controlled could hold promise for many applications. Sports clothing or helmets, for example, might be able to flexibly adapt to movements and always offer the protection required depending on the impact. Harnessing this phenomenon could also prove useful in developing bone-replacement materials.
Research team uses excitons to take electronics into the future

Excitons could revolutionize the way engineers approach electronics. A team of EPFL researchers has created a new type of transistor—one of the components of circuits—using excitons instead of electrons. Notably, their exciton-based transistor functions effectively at room temperature, a hitherto insurmountable obstacle. They achieved this by using two 2-D materials as semiconductors. Their study, which was published today in Nature, has numerous implications in the field of excitonics, a promising new area of study alongside photonics and spintronics.

"Our research showed that by manipulating excitons, we had come upon a whole new approach to electronics," says Andras Kis, who heads EPFL's Laboratory of Nanoscale Electronics and Structures (LANES). "We are witnessing the emergence of a totally new field of study, the full scope of which we don't yet know."

This breakthrough sets the stage for optoelectronic devices that consume less energy and are both smaller and faster than current devices. In addition, it will be possible to integrate optical transmission and electronic data-processing systems into the same device, which will reduce the number of operations needed and make the systems more efficient.

Higher energy level

Excitons are actually quasiparticles, a term used to describe the interaction between the particles that make up a given substance rather than the substance itself. Excitons consist of an electron and an electron hole. The two are bound together when the electron absorbs a photon and achieves a higher level of energy; the "excited" electron leaves behind a hole in the previous level of energy, which, in band theory, is called a valence band. This hole, also a quasiparticle, is an indication of the missing electron in this band.

Since the electron is negatively charged and the hole is positively charged, the two particles remain bound by an electrostatic force. This bond between the electron and the hole is called Coulomb attraction. And it is in this state of tension and balance that they form an exciton. When the electron finally falls back into the hole, it emits a photon. And with that, the exciton ceases to exist. Put more simply, a photon goes in at one end of the circuit and comes out the other; while inside, it gives rise to an exciton that acts like a particle.

Double success

It is only recently that researchers have begun looking at the properties of excitons in the context of electronic circuits. The energy in excitons had always been considered too fragile and the exciton lifespan too short to be of any real interest in this domain. In addition, excitons could only be produced and controlled in circuits at extremely low temperatures (around -173 degrees C).

The breakthrough came when the EPFL researchers discovered how to control the lifespan of the excitons and how to move them around. They did this by using two 2-D materials: tungsten diselenide (WSe2) and molybdenum disulfide (MoS2). "The excitons in these materials exhibit a particularly strong electrostatic bond and, even more importantly, they are not quickly destroyed at room temperature," explains Kis. The researchers were also able to significantly lengthen the excitons' lifespan by exploiting the fact that the electrons always found their way to the MoS2 while the holes always ended up in the WSe2.
The researchers kept the excitons going even longer by protecting the semiconductor layers with boron nitride (BN). "We created a special type of exciton, where the two sides are farther apart than in the conventional particle," says Kis. "This delays the process in which the electron returns to the hole and light is produced. It's at this point, when the excitons remain in dipole form for slightly longer, that they can be controlled and moved around using an electric field."

Researchers at EPFL have made a breakthrough in electronics by creating a new type of transistor that uses excitons, quasiparticles composed of an electron and an electron hole, instead of electrons. This achievement is significant because it allows the transistor to function effectively at room temperature, a previously insurmountable obstacle. The team used two 2-D materials, tungsten diselenide and molybdenum disulfide, as semiconductors to control the lifespan and movement of the excitons. By exploiting the electrostatic bond between the electron and hole, the researchers were able to lengthen the excitons' lifespan and control them using an electric field. This breakthrough has numerous implications for the field of excitonics, a promising new area of study, and sets the stage for the development of optoelectronic devices that consume less energy and are smaller, faster, and more efficient than current devices.

Abstract

Devices that rely on the manipulation of excitons—bound pairs of electrons and holes—hold great promise for realizing efficient interconnects between optical data transmission and electrical processing systems. Although exciton-based transistor actions have been demonstrated successfully in bulk semiconductor-based coupled quantum wells 1, 2, 3, the low temperature required for their operation limits their practical application. The recent emergence of two-dimensional semiconductors with large exciton binding energies 4, 5 may lead to excitonic devices and circuits that operate at room temperature. Whereas individual two-dimensional materials have short exciton diffusion lengths, the spatial separation of electrons and holes in different layers in heterostructures could help to overcome this limitation and enable room-temperature operation of mesoscale devices 6, 7, 8. Here we report excitonic devices made of MoS2–WSe2 van der Waals heterostructures encapsulated in hexagonal boron nitride that demonstrate electrically controlled transistor actions at room temperature. The long-lived nature of the interlayer excitons in our device results in them diffusing over a distance of five micrometres. Within our device, we further demonstrate the ability to manipulate exciton dynamics by creating electrically reconfigurable confining and repulsive potentials for the exciton flux. Our results make a strong case for integrating two-dimensional materials in future excitonic devices to enable operation at room temperature.

Main

Solid-state devices use particles and their quantum numbers for their operation, with electronics being the ubiquitous example. The need to improve the power efficiency of charge-based devices and circuits is motivating research into new devices that would rely on other principles. Candidates so far include spintronics and photonics 9, 10. Excitons—electrically neutral quasi-particles formed by bound electrons and holes—can also be manipulated in solid-state systems.
The development of such excitonic devices has so far been hindered by the absence of a suitable system that would enable room-temperature manipulation of excitons, limiting the expansion of the field. Here, we demonstrate room-temperature excitonic devices based on atomically thin semiconductors. These devices could open the way for wider studies and applications of excitonic devices in the academic and industrial sectors 11. Many applications can be envisaged, because excitons could be used to efficiently couple optical data transmission and electronic processing systems. Although fast optical switches have already been demonstrated 12, 13, the comparably large size (about 10 μm) 14, 15 of such devices limits packing density. This can be overcome in excitonic devices, the characteristic size of which is determined by that of electronic field-effect transistors (FETs).

Owing to their finite binding energy E_b, excitons can exist up to temperatures of around T ∝ E_b/k_B, where k_B is the Boltzmann constant. In a conventional III–V-semiconductor coupled quantum well with a size of a few nanometres, the relatively small binding energy of around 10 meV permits the observation of excitons only at cryogenic temperatures (less than 100 K) 3. To reach higher temperatures, different materials are required. To this end, systems with higher E_b (in the range of tens of millielectronvolts) have been explored more recently, such as (Al,Ga)N/GaN (ref. 16) or ZnO (ref. 17). Two-dimensional semiconductors such as transition-metal dichalcogenides have even larger exciton binding energies, which can exceed 500 meV in some cases owing to strong quantum confinement 4, 5. This could enable the realization of excitonic devices that operate at room temperature 18.

Although intralayer excitons have relatively short lifetimes (about 10 ps) 7, 19, the spatial separation of holes and electrons in interlayer excitons results in lifetimes more than two orders of magnitude longer, well in the nanosecond range 6. For the device presented here, we take advantage of interlayer excitons in an atomically thin MoS2–WSe2 heterostructure. Type-II band alignment 20, 21 (Fig. 1a) results in charge separation between the constituent materials, with electrons and holes residing in MoS2 and WSe2, respectively. The formation of indirect excitons is marked by the appearance of a new photoluminescence emission peak 22, redshifted by about 75 meV with respect to the intralayer exciton of the WSe2 monolayer. In Extended Data Fig. 1b we present a typical photoluminescence spectrum obtained from such a heterostructure on SiO2, in which the spectral signature of the interlayer exciton is clearly visible (dark blue line), together with those of the individual WSe2 and MoS2 monolayers (blue and red lines, respectively). Recent reports 23 suggest that excitons in the MoS2–WSe2 system are not only spatially indirect, but also momentum-indirect owing to lattice mismatch. The phonon-assisted nature of the emission process further reduces the exciton recombination rate, yielding a longer lifetime 8, 24. Such an extended lifetime can be used to obtain interlayer exciton diffusion over a scale of micrometres, even at room temperature.

Fig. 1: Interlayer excitons in the WSe2–MoS2 van der Waals heterostructure. a, Type-II band alignment in the WSe2–MoS2 heterostructure with intralayer (X_0) and interlayer (X_i) excitons. The red and blue areas represent the bands in the two materials and the heterobilayer.
Positive and negative symbols indicate holes and electrons, respectively. b, Schematic depiction of the WSe2–MoS2 heterostructure, showing the heterobilayer encapsulated in hexagonal boron nitride (h-BN) and the top and bottom gates. The interlayer exciton has a permanent out-of-plane dipole moment p that allows manipulation via the electric field E. c, False-colour optical image of the device, highlighting the different materials. d, e, Spatial maps of photoluminescence at 670 nm (d) and 750 nm (e), corresponding to MoS2 and WSe2 intralayer excitonic resonances, respectively. Photoluminescence is quenched in the heterostructure area owing to efficient charge transfer. Scale bars, 5 μm. a.u., arbitrary units.

To obtain a pristine surface, the heterostructure is encapsulated in hexagonal boron nitride and annealed in high vacuum. Multiple transparent top gates are fabricated out of few-layer graphene. This double-gate configuration allows us to apply a vertical electric field without changing the carrier concentration in the MoS2–WSe2 heterostructure. In Fig. 1c we show a false-colour optical micrograph of the resulting stack. We characterize the structure by using photoluminescence mapping at room temperature, under 647-nm excitation. In Fig. 1d, e and Extended Data Fig. 1 we show the intralayer emission distribution at the wavelengths characteristic of MoS2 (670 nm), WSe2 (760 nm) and the interlayer exciton (785 nm). Whereas individual monolayers appear to be homogeneously bright, emission from the heterostructure region is uniformly quenched by more than three orders of magnitude, owing to the efficient charge transfer between layers 24. Even with this strong quenching, we are able to detect the interlayer peak in the photoluminescence spectra (Extended Data Fig. 2), confirming the generation of interlayer excitons. Because this effect has a central role in our work, we fabricated three more heterostructures encapsulated in hexagonal boron nitride, confirming the reproducibility of this result (Extended Data Fig. 3).

Given that excitons do not carry a net electric charge, we do not expect their flow to be influenced by the direct application of an in-plane electric field. However, the confinement of oppositely charged carriers in different layers results in a well-defined interlayer-exciton dipole moment p with an out-of-plane (z) direction (Fig. 1b). An electric field E_z(x, y) perpendicular to the crystal plane can then be used to shift the exciton energy by δE = −p_z E_z, while its lateral modulation drives the exciton motion towards regions of lower energy. Exciton dynamics in the longitudinal direction can be modelled by a diffusion equation with an external potential (see Methods):

$$D\frac{\partial^2 n}{\partial x^2}+\frac{D}{k_{\rm B}T}\frac{\partial}{\partial x}\left(n\frac{\partial \varphi}{\partial x}\right)+G-\frac{n}{\tau}=\frac{\partial n}{\partial t} \qquad (1)$$

where n, D and τ are the interlayer-exciton concentration, diffusion coefficient and lifetime, respectively, φ is the exciton potential (including the electrostatic contribution φ_el = −p_z E_z) and G is the optical generation rate. This simple model qualitatively shows how the application of an electric field E_z can affect interlayer exciton diffusion, as we discuss later.

We first demonstrate an electrically controlled excitonic switch, represented schematically in Fig. 2a.
Laser light focused inside the heterostructure area (input) generates interlayer excitons, which diffuse along the channel of the heterostructure. However, the low brightness of interlayer emission makes monitoring the operation of the device challenging. For this reason, we use the exposed WSe2 that extends out of the heterostructure as a bright emitter. Here, interlayer excitons diffuse towards the edge of the heterostructure. During this diffusion process, interlayer excitons are expected to dissociate into single carriers, which are allowed to diffuse inside monolayer MoS2 and WSe2, where they recombine with native charges, resulting in bright emission. The emitted radiation is recorded simultaneously using a charge-coupled device (CCD) camera and a spectrometer (see Methods), to obtain spatial and spectral emission profiles. This allows us to further confirm the presence and diffusion of interlayer excitons inside the heterobilayer (Extended Data Fig. 2).

In the absence of applied fields (Fig. 2b), excitons diffuse away from the pumping area (red circle in Fig. 2d), owing to temperature and concentration gradients 25, 26, 27, and reach the recombination site, approximately 3 μm away. Comparison of pumping and emission profiles (Extended Data Fig. 4) lets us exclude the possibility of a direct excitation of monolayer WSe2 by the low-intensity tail of the laser spot. This situation (bright output) is shown in the emission image in Fig. 2d and corresponds to the ON state of the excitonic transistor. On the contrary, by introducing a potential barrier higher than k_BT on the path of the diffusing excitons (Fig. 2c), we impede their motion, resulting in the suppression of light emission (Fig. 2e). In this way, we can achieve efficient electrical modulation of the output emission, as shown in Fig. 2f, in which the emission intensity (normalized by the value in the OFF state, corresponding to V_g1 = +16 V) is plotted as a function of applied voltage. For reference, we also plot the intensity modulation observed when the laser beam is located on the emission centre (input–output distance d_i–o = 0 μm). The switching threshold is around 8 V, which corresponds well with the calculated exciton energy modulation of δE ≈ k_BT ≈ 25 meV (blue dashed line in Fig. 2f). This result is consistent with our model: once the height of the energy barrier becomes comparable to the thermal energy, it is possible to block the diffusion of the exciton flux. We extract an intensity ON/OFF ratio larger than 100, limited by the noise level of the set-up in the OFF state (see also Extended Data Figs. 4, 5). Such a high ratio results from the realization of an excitonic transistor with complete suppression of emission in the OFF state. This effect is also clearly visible in the spectrum of the emitted light, in which the WSe2 peak is selectively suppressed when the device is in the OFF state (Extended Data Fig. 6). We also note that strong emission from MoS2 is detected in both states, because excitons can diffuse freely in other directions.

Fig. 2: Excitonic transistor operation at room temperature. a, The application of gate voltages (V_g1, V_g2, V_g3) to transparent graphene electrodes (gates 1–3) can engineer a potential landscape for the diffusion of excitons, controlling their flux through the device. b, c, Calculated energy variation δE for the excitons in the ON (free diffusion; b) and OFF (potential barrier; c) states.
Red arrows represent laser excitation; the bound charges and black dashed arrows denote the excitons and their diffusion, respectively. d, e, Corresponding images of exciton emission. Dashed lines indicate the positions of the different layers that form the heterostructure and the top graphene gate (gate 1). The laser spot is represented by the red circle. Colour scale indicates the normalized photoluminescence intensity. Scale bars, 5 μm. f, Gate dependence of the ON/OFF ratio for optical excitation 3 μm away from the emission centre (left axis). The right axis shows the reference data, which were acquired with the incident laser beam located directly on the emission centre (input–output distance, d_i–o = 0 μm). The measured emission intensity is normalized by the OFF-state value at V_g1 = 15 V. The background shading indicates the ON (red) and OFF (grey) states. The blue dashed line represents the gate voltage at which the barrier height is equal to the thermal energy.

An alternative mechanism that could in principle explain the recombination far away from the excitation spot is based on the diffusion of single carriers rather than interlayer excitons. It has been shown that such carriers (holes in particular) can have long lifetimes 6, 28, 29. However, experimental observations indicate that this is not the dominant mechanism in our heterostructure. First, we observe the production of interlayer excitons directly in the excitation area, even if the intensity is low. Second, for a flux of single carriers, the voltage modulation necessary to counteract thermal excitation and block the single-particle flux would be about 50 mV, more than two orders of magnitude lower than the gate voltage of approximately 8 V required in our experimental result shown in Fig. 2. Finally, this mechanism would also result in different emission profiles for different regimes of device operation (see Extended Data Fig. 7).

To exclude the possibility that the observed effect arises from an unwanted modulation of the charge carrier density in WSe2, we perform a calibration experiment in which the excitation light is focused on the output area (d_i–o = 0) and the device is biased as before. This reference experiment is discussed in detail in Methods and the result is presented in Fig. 2f (grey curve); it shows that only a comparably small modulation of the WSe2 emission intensity is observed. This confirms that the energy barrier is the origin of the switching behaviour. We study the dependence of the ON/OFF ratio on d_i–o further (Extended Data Fig. 8) by keeping the voltage profile constant and optically injecting excitons at different distances from the output point. Consistent with our model, we observe efficient modulation when the laser is focused beyond the energy barrier, with emission intensity decreasing with increasing d_i–o owing to long-distance diffusion. The diffusion length can be doubled at lower temperature (4.7 K), resulting in operation over a longer distance (Extended Data Fig. 9).

Having demonstrated that we can block or allow spontaneous exciton diffusion, we go further by creating a drift field in the desired direction, in analogy with the source–drain bias of a conventional FET. We show this type of operation in Fig. 3, with all three electrodes used to create a potential ladder going upwards or downwards with respect to the excitation point (Fig. 3a, b).
When excitons encounter a gradually decreasing energy profile (forward bias), their diffusion is enhanced by a drift term, allowing us to operate the device with a larger distance between optical input and output. As shown in Fig. 3c, this regime of electrically assisted diffusion can result in exciton transport over a distance of 5 μm. To obtain a more quantitative estimate of the induced modulation, we measure the dependence of the emission intensity on the distance from the laser spot as it is displaced away from the output area at fixed gate voltages. The results (Fig. 3d) show that the length over which excitons diffuse can be effectively modulated from 5.5 μm to 3 μm, compared to about 4.5 μm in the unbiased case. The modulation of the effective diffusion length with the potential φ_el qualitatively follows the model introduced in equation (1).

Fig. 3: Biasing of the excitonic device. a, b, Calculated energy profile δE of the indirect exciton as a function of lateral coordinate X for the forward (a) and backward (b) bias cases. The black solid line indicates the direction of exciton drift. c, Image showing exciton emission from the device when injecting at a distance d_i–o = 5 μm from the emission area. Colour scale, dashed lines and red circles as in Fig. 2d, e. Scale bar, 5 μm. d, Normalized output intensity as a function of the distance d_i–o between optical injection and the emission point, for the forward (red) and backward (blue) bias configurations, compared to the unbiased case (grey). The grey shading indicates the noise floor. Exciton diffusion over a distance of 5.5 μm is achieved.

We further use the multi-gate configuration to demonstrate more complex and electrically reconfigurable types of potential landscape and related device operation. In Fig. 4a–c we present the energy profiles calculated for free diffusion (Fig. 4b) compared with a potential well (Fig. 4a) and a repulsive barrier (Fig. 4c) produced by the central gate (gate 2), while the side gates (1 and 3) are kept grounded. In this case, the position of the optical pump is centred on the middle electrode, which corresponds to the centre of the well or barrier. In Fig. 4d, g we show the CCD camera image and the related emitted intensity profile along the device channel for the case of the potential well. We observe photoluminescence emission only from the narrow area below the central contact, which is indicative of electrical confinement of the excitonic cloud. Conversely, when applying a positive voltage to create a ‘potential hill’ (Fig. 4f, i), we see an expulsion of excitons from the pumping area with the appearance of bright emission spots outside the middle section of the device, owing to excitons drifting along the energy profile and recombining on the edges of the heterostructure. This is evident from a comparison with the free-diffusion case in Fig. 4e, h. Interestingly, we also observe higher-energy emission from the neighbouring MoS2 monolayer parts inside the well in the case of exciton confinement. A similar effect is also observed during exciton expulsion, with bright spots appearing at the edges of the heterostructure around the repulsive potential. Further inspection of the emission spectra from Fig. 4d, f confirms this, with the intensity of the monolayer peaks decreasing (increasing) when confining (anti-confining) the excitons (Extended Data Fig. 6).
As also discussed in Methods, the observed MoS2 emission is affected by the local inhomogeneity of the substrate and by the optical filters used. As discussed earlier, the diffusion of single particles and their recombination with native charges that are available in the monolayers could have a role in light emission that extends from the edges of the heterobilayer into the monolayers.

Fig. 4: Electrically reconfigurable energy landscape. a–c, Calculated energy profile δE of the indirect exciton for the cases of a potential well (a), free diffusion (b) and a potential barrier (c). d–f, Imaging of exciton emission for the configurations shown in a–c. Incident laser light (red circle) is focused on top of gate 2. Dashed lines indicate the positions of the different layers that form the heterostructure and the graphene top gate 2; colour scale as in Fig. 2d, e. Scale bars, 5 μm. g–i, Cross-section of the intensity profile along the device channel, integrated over its width, for the three configurations. The red-shaded underlay represents the profile of the excitation laser.

Methods

Device fabrication
The heterostructure was fabricated using polymer-assisted transfer (see Extended Data Fig. 10) of flakes of hexagonal boron nitride (h-BN), WSe2 (HQ Graphene) and MoS2 (SPI). Flakes were first exfoliated on a polymer double layer, as described previously 30. Once monolayers were optically identified, the bottom layer was dissolved with a solvent and free-floating films with flakes were obtained. These were transferred using a custom-built set-up with micromanipulators to carefully align the flakes on top of each other. During the transfer process, the sharp edges of the flakes were aligned to obtain a twist angle between the two crystal axes close to 0° (or 60°). However, in the case of MoS2–WSe2 heterobilayers, the alignment has been shown to be not critical for the observation of interlayer excitons 23, 31. This is due to the indirect (in reciprocal space) nature of the transition and to the considerable lattice mismatch between the two layers (about 4%). Polymer residue was removed with a hot acetone bath. Once completed, the stack was thermally annealed in high vacuum at 10⁻⁶ mbar for 6 h. Few-layer graphene flakes were obtained by exfoliation from graphite (NGS) on Si/SiO2 substrates and patterned into the desired shape by electron-beam lithography and oxygen plasma etching. After thermal annealing, the patterned flakes were transferred on top of the van der Waals stack using a polymer-assisted transfer, and the entire structure was annealed again in high vacuum. Finally, electrical contacts were fabricated by electron-beam lithography and metallization (60 nm/2 nm Au/Ti).

Optical measurements
All measurements presented here were performed in vacuum at room temperature unless specified otherwise. Excitons were optically pumped by a continuous-wave 647-nm laser diode focused to the diffraction limit with a beam size of about 1 μm. The incident power was 250 μW. The spectral and spatial characteristics of the device emission were analysed simultaneously. The emitted light was acquired using a spectrometer (Andor) and the laser line was removed with a long-pass 650-nm edge filter. For spatial imaging, we used a long-pass 700-nm edge filter so that the laser light and most of the MoS2 emission were blocked. Filtered light was acquired by a CCD camera (Andor Ixon). The room-temperature photoluminescence spectrum of MoS2 shown in Extended Data Fig.
1b was obtained under 150-μW excitation at 647 nm, whereas monolayer WSe2 and the heterostructure fabricated on a SiO2 substrate were characterized under 488-nm excitation. Owing to the small separation between the interlayer and the intralayer WSe2 exciton peaks, it is not possible to completely distinguish them in the images acquired on the CCD. The tail of the WSe2 monolayer peak normally overlaps considerably with the spectral line of the interlayer exciton, meaning that weak luminescence around 785 nm can be observed even on monolayer WSe2 (Extended Data Fig. 3e), which is not due to interlayer excitons. Because of the use of the 700-nm filter, the emission from monolayer MoS2 is in principle not observable on the CCD. However, some light can be transmitted when the broadening of the photoluminescence peak results in a low-energy tail (see Extended Data Fig. 11) extending beyond 700 nm. Local inhomogeneity in the substrate can affect this broadening, which could explain why the observed MoS2 luminescence in Fig. 4f comes mostly from the left part of the device. Low-temperature measurements (Extended Data Fig. 9) were performed in a liquid-helium, continuous-flow cryostat (Oxford Instruments).

Reference experiment
We performed a reference experiment to exclude spurious effects that could compromise the interpretation of the data. First, we observed how the photoluminescence emission from monolayer WSe2 changes when gating the device using the back gate. For this purpose, we excited the exposed WSe2 with the laser beam directly and recorded the photoluminescence spectra. When applying a voltage to the back gate, a modulation in the emission intensity is clearly observable (Extended Data Fig. 12a). We repeated the same measurement but, instead of applying a voltage between the flake and the back gate, we biased the top and back gates, thus generating a vertical electric field inside the device. In this case, we cannot observe any substantial change in the emission intensity (Extended Data Fig. 12b). This allows us to rule out the possibility that the switching action that we observe could be due to a suppression of photoluminescence from a changing doping level in the material.

Image processing
To aid the interpretation of images from the CCD camera, we performed several image-processing steps using ImageJ 32. We first subtracted from the original image a background image obtained without laser illumination, to account for ambient light noise. In some cases, a simple background was not sufficient to compensate for the presence of spurious signals from unwanted reflections or a changing ambient background. In these cases, a background image was generated by applying the rolling-ball algorithm in ImageJ. Contrast was adjusted to cover the range of values in the image. We provide an example of the procedure in Extended Data Fig. 13.

Modelling exciton diffusion
The dynamics of the exciton in the channel of our device can be modelled by one-dimensional diffusion in the presence of an external potential φ(x) (temperature, electrostatic potential or dipole–dipole interaction).
The gradient of the exciton concentration n(x) drives the diffusion current j_diff, while the potential gradient causes the drift current j_drift:

$$j_{\rm diff}=-D\frac{\partial n}{\partial x},\qquad j_{\rm drift}=-\mu n\frac{\partial \varphi}{\partial x}$$

where μ is the exciton mobility, which is related to the diffusion coefficient D and the thermal energy k_BT by the Einstein relation D = μk_BT. We also include an exciton generation rate G by means of optical pumping and an exciton recombination rate R, which is related to the exciton lifetime as R = −n/τ. From the exciton continuity equation we then obtain equation (1). In our system, in which excitons have a built-in vertical dipole moment p, the electrostatic potential induced by the vertical electric field is φ_el = −E_z p_z. Because we use continuous-wave excitation, we assume a steady-state case (∂n/∂t = 0). Considering φ_el as the main contribution to exciton drift, we obtain

$$D\frac{\partial^2 n}{\partial x^2}-\frac{Dp_z}{k_{\rm B}T}\frac{\partial}{\partial x}\left(n\frac{\partial E_z}{\partial x}\right)+G-\frac{n}{\tau}=0$$

We simplify the model further by assuming two fundamentally different regions, shown in Extended Data Fig. 14. The first region is under constant homogeneous excitation, so that the concentration reaches an equilibrium value with equal recombination and generation rates (R + G = 0). The equilibrium concentration is then n_0 = Gτ. Outside of the pumping region, excitons diffuse away, driven by the concentration and potential gradients:

$$D\frac{\partial^2 n}{\partial x^2}-\frac{Dp_z}{k_{\rm B}T}\frac{\partial}{\partial x}\left(n\frac{\partial E_z}{\partial x}\right)-\frac{n}{\tau}=0$$

The case of diffusion in the absence of an external field can be solved analytically, revealing an exponential decay of the exciton density from the pumping region with a characteristic distance that corresponds to the diffusion length \(l_{\rm diff}=\sqrt{D\tau}\): \(n_{\rm free}(x)=n_0 e^{-x/l_{\rm diff}}\). An applied non-homogeneous vertical electric field can alter the diffusion length (as demonstrated experimentally), which can be modelled as a change in the effective diffusion length.

Numerical simulation of the exciton-energy profile
We first calculate the electric-field distribution in our system using the COMSOL Multiphysics simulation software. All calculations were performed considering the dimensions of the device as follows: the top graphene gates are 1.1 μm wide and spaced 0.8 μm apart. The heterostructure is encapsulated between two h-BN crystals (10 nm thick on the top and 20 nm on the bottom), and the substrate is heavily doped Si with 270 nm of SiO2 on top (see Extended Data Fig. 15a). Extended Data Fig. 15b shows an example of the electric field in the system in the confinement configuration, with −10 V applied to the central gate and the side gates grounded. Interlayer excitons have a built-in out-of-plane dipole moment directed upwards, with an absolute value of p_z = ed = e × 7.5 × 10⁻¹⁰ m, where e is the elementary charge and d = 7.5 Å is the layer separation in our heterostructure. They thus experience an energy shift of δE = −p_z E_z in the presence of a vertical electric field E_z.
The resulting force applied on the exciton in the longitudinal direction is proportional to the first derivative of the vertical electric field E_z with respect to the channel x axis:

$$F_x=-\frac{\partial(\delta E)}{\partial x}=ed\frac{\partial E_z}{\partial x}$$

Example profiles of the confinement-well configuration are shown in Extended Data Fig. 15c.

Data availability
The data that support the findings of this study are available from the corresponding author on reasonable request.

Dmitrii Unuchek et al, Room-temperature electrical control of exciton flux in a van der Waals heterostructure, Nature (2018). DOI: 10.1038/s41586-018-0357-y. http://dx.doi.org/10.1038/s41586-018-0357-y https://phys.org/news/2018-07-team-excitons-electronics-future.html
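To make the steady-state drift-diffusion picture above concrete, below is a minimal one-dimensional finite-difference sketch of equation (1) with ∂n/∂t = 0. It is an illustrative toy model rather than the authors' COMSOL-based simulation: the diffusion coefficient, lifetime, channel length, pump profile and Gaussian barrier are all assumed values, chosen only to show qualitatively how a gate-induced barrier of roughly k_BT or more suppresses the exciton density that reaches the output.

```python
import numpy as np

# Illustrative, assumed parameters (not taken from the paper): the point is
# only to reproduce the qualitative behaviour of equation (1) in steady state.
kT = 25.0          # thermal energy at room temperature, meV
D = 1.0            # exciton diffusion coefficient, um^2/ns (assumed)
tau = 5.0          # interlayer-exciton lifetime, ns (assumed)
L, N = 12.0, 601   # channel length (um) and number of grid points
x = np.linspace(0.0, L, N)
h = x[1] - x[0]

# Optical pump G(x): a Gaussian spot at x = 3 um (arbitrary amplitude).
G = np.exp(-0.5 * ((x - 3.0) / 0.5)**2)

def solve_steady_state(phi):
    """Solve D n'' + (D/kT) d/dx(n phi') + G - n/tau = 0 with n = 0 at both ends."""
    dphi = np.gradient(phi, h)     # phi'
    d2phi = np.gradient(dphi, h)   # phi''
    A = np.zeros((N, N))
    for i in range(1, N - 1):
        # Central differences for n'' and n' after expanding the drift term:
        # D n'' + (D/kT) (phi' n' + phi'' n) + G - n/tau = 0
        A[i, i - 1] = D / h**2 - D * dphi[i] / (2 * h * kT)
        A[i, i] = -2 * D / h**2 + D * d2phi[i] / kT - 1.0 / tau
        A[i, i + 1] = D / h**2 + D * dphi[i] / (2 * h * kT)
    A[0, 0] = A[-1, -1] = 1.0      # absorbing (Dirichlet) boundaries
    rhs = -G                       # move the generation term to the right side
    rhs[0] = rhs[-1] = 0.0
    return np.linalg.solve(A, rhs)

# Gate-induced barrier phi_el = -p_z * E_z, modelled as a Gaussian bump at
# x = 6 um between the pump (x = 3 um) and the "output" region (x = 9 um).
out = np.argmin(np.abs(x - 9.0))
for barrier_meV in (0.0, 25.0, 75.0):
    phi = barrier_meV * np.exp(-0.5 * ((x - 6.0) / 0.5)**2)
    n = solve_steady_state(phi)
    print(f"barrier {barrier_meV:5.1f} meV -> n(9 um) = {n[out]:.3e}")
```

Running the sketch should show the output density dropping sharply once the barrier height exceeds the thermal energy of about 25 meV, mirroring the switching threshold reported in Fig. 2f.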
Studying species composition and community function of dinoflagellate-associated bacteria

Interactions between primary producers and bacteria impact the physiology of both partners, alter the chemistry of their environment, and shape ecosystem diversity. Several studies have documented that dinoflagellate–bacteria interactions have the potential to dramatically influence population dynamics. However, species-level information about the bacterial consortia characteristically associated with dinoflagellates remains obscure.

Recently, a research team led by Prof. Tang Yingzhong from the Institute of Oceanology of the Chinese Academy of Sciences (IOCAS) has provided new insights into the fundamental functions of bacterial consortia associated with the phycospheres of dinoflagellates and other microalgae that form harmful algal blooms (HABs). The study was published in the International Journal of Environmental Research and Public Health on April 7 (DOI: 10.3390/ijerph19084446).

The researchers characterized the bacterial assemblages associated with 144 clonal cultures of harmful algae that have been established and cultured in the laboratory, including 130 strains of dinoflagellates (covering all major taxa of dinoflagellates) and 14 strains from other classes. The long-lasting bacterial associations with laboratory-raised algal cultures hint at relationships that are bilaterally (i.e., mutualism) or at least unilaterally (i.e., commensalism) beneficial to the two partners. Bacterial communities of dinoflagellates displayed strong conservation across strains, with an enrichment of Methylophaga from the class γ-proteobacteria, implying a potentially functional group of methylotrophs.

"While bacterial associations with thecate and athecate dinoflagellates displayed compositional and functional similarities, athecate dinoflagellates showed a more preferred niche for aerobic cellulolytic members of the phylum Actinobacteria. This implies a plausible proneness to utilize cellulose as an energy source," said Dr. Deng Yunyan, first author of the study.

"Our results provide an insightful understanding of the species composition and community functional profiles of dinoflagellate-associated bacterial assemblages," said Prof. Tang.

A recent study published in the International Journal of Environmental Research and Public Health has shed light on the bacterial consortia associated with dinoflagellates and other harmful algal bloom (HAB)-forming microalgae. The research team, led by Prof. Tang Yingzhong, characterized the bacterial assemblages associated with 144 clonal cultures of harmful algae, including 130 strains of dinoflagellates and 14 strains from other classes. The study found that the bacterial communities displayed strong conservation across strains, with an enrichment of Methylophaga from the class γ-proteobacteria and a potentially functional group of methylotrophs. Additionally, the study revealed that athecate dinoflagellates showed a preference for aerobic cellulolytic members of the phylum Actinobacteria, suggesting a propensity to utilize cellulose as an energy source. Overall, the results provide valuable insights into the species composition and community functional profiles of dinoflagellate-associated bacterial assemblages.
| None | None | [] | [] | [] | SciNews | Biology | Yunyan Deng et al., Abundant Species Diversity and Essential Functions of Bacterial Communities Associated with Dinoflagellates as Revealed from Metabarcoding Sequencing for Laboratory-Raised Clonal Cultures, International Journal of Environmental Research and Public Health (2022). DOI: 10.3390/ijerph19084446 | https://dx.doi.org/10.3390/ijerph19084446 | https://phys.org/news/2022-04-species-composition-function-dinoflagellate-associated-bacteria.html
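To make the kind of analysis in the study above concrete: genus-level enrichment and cross-strain conservation are usually computed from a table of metabarcoding read counts. The sketch below is not the pipeline used in the study; the table layout, the column names (strain, genus, reads) and the counts are invented for illustration, assuming only that reads have already been assigned to bacterial genera per host strain.

```python
# Minimal sketch: prevalence and mean relative abundance of bacterial
# genera across algal host strains, from a long-format count table.
# Schema and numbers are illustrative assumptions, not study data.
import pandas as pd

df = pd.DataFrame({
    "strain": ["A", "A", "A", "B", "B", "C", "C"],
    "genus":  ["Methylophaga", "Marinobacter", "Roseobacter",
               "Methylophaga", "Roseobacter",
               "Methylophaga", "Marinobacter"],
    "reads":  [900, 50, 50, 700, 300, 950, 50],
})

# Read counts -> relative abundance within each host strain.
df["rel_abund"] = df["reads"] / df.groupby("strain")["reads"].transform("sum")

# Prevalence: fraction of strains in which each genus is detected.
n_strains = df["strain"].nunique()
prevalence = df.groupby("genus")["strain"].nunique() / n_strains

# Mean relative abundance across strains (zero where a genus is absent).
wide = df.pivot_table(index="strain", columns="genus",
                      values="rel_abund", fill_value=0.0)

summary = pd.DataFrame({"prevalence": prevalence,
                        "mean_rel_abund": wide.mean()})
print(summary.sort_values("mean_rel_abund", ascending=False))
```

A genus that, like Methylophaga in this toy table, is detected in every strain at high relative abundance is what "strong conservation across strains with an enrichment" refers to.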
10.1038/nature14857 | Cracking open diamonds for messages from the deep earth | Geochemist Yaakov Weiss deals in diamonds. Not the brilliant jewelry-store kind, but the flawed, dirty-looking ones used more for industry than decoration. Gem-grade diamonds are generally pure crystallized carbon, but many lower-grade stones contain so-called inclusions–chemical intruders bottled up inside the crystal. Inclusions lower the stone's value, but they contain worlds of information about the deep, inaccessible regions where diamonds come from. Their compositions speak to not only how diamonds form (and maybe how to find them), but also other basic processes below. "They are the most pristine samples we can get from underlying depths," says Weiss, who works at Columbia University's Lamont-Doherty Earth Observatory. "After a diamond captures something, from that moment until millions of years later in my lab, that material stays the same. We can look at diamonds as time capsules, as messengers from a place we have no other way of seeing." Some of his recent studies are providing new insights into these regions. For most of history, almost everything about diamonds was a mystery; no one even knew where they came from. In the late 19th century, geologists figured out that they erupt in small, oddball volcanic spouts, called kimberlites. These eruptions usually punch through the centers of ancient continents, made of rocks that date back billions of years. The diamonds themselves may or may not be that old. Scientists now believe they crystallize in earth's mantle, 140 to 250 kilometers (about 90 to 150 miles) below. A few may come from as deep as 700 kilometers (430 miles)–the deepest direct samples we have. At the surface, kimberlites are tiny–usually just a few acres–and hard to find. They are also mostly barren of diamonds; of around 1,500 or 2,000 known, only 50 or 60 have ever been found that are worth mining. Because diamonds are so valuable, many scientists are working to better understand them. But many questions remain. Exactly what raw materials and processes go into making diamonds? What causes kimberlites to erupt? Why are kimberlites, and diamonds, found in some areas, and not in others? Weiss's latest study, on the cover of the leading journal Nature, gets at some of these questions. In it, he and colleagues studied diamonds from the tundra of Canada's Northwest Territories. Prospectors have hunted diamonds across the United States and Canada for centuries, but it was not until the 1990s that the continent's first viable mines were discovered here. Some of the surface rocks are billions of years old, but the kimberlites that penetrated them are the youngest known–as young as 45 million years (others elsewhere can be hundreds of millions). Working with colleagues from the University of Alberta and Durham University, Weiss investigated so-called fibrous diamonds–inferior stones that consist of multiple layers instead of a single gem-grade crystal–from the rich Ekati Mine. Inside, they found tiny droplets of liquid–apparent remainders of raw material from which the diamonds crystallized. Most researchers believe that diamonds solidify out of some kind of fluid or fluids, but exactly what those fluids are, and what processes are involved, are controversial. Analyses of these inclusions, and separate research on stones from a neighboring mine, showed them to be rich in carbon and highly saline–plenty of chlorine, potassium and sodium, much like seawater.
Weiss thinks this is not a coincidence. In recent years, other researchers have shown that the complex evolution of the far north has included repeated opening and closing of ocean basins. A few have wondered if these events could be related to the formation of diamond-bearing kimberlites. Weiss and his colleagues connected the dots. Their research suggests that a slab of watery oceanic crust subducted at a shallow angle under the far more ancient continental rocks 150 million to 200 million years ago. The slab, they say, could have slid more or less intact underneath what is now the Canadian tundra, where the mines are located. There, they say, fluids from the long-traveled ocean crust reacted with solid continental rocks just above them, in exactly the zone where pressure and temperature conditions are right for forming diamonds. To bolster the case, in addition to the salts in the inclusions, there are trace element and isotope fingerprints that match the composition of seawater from this time, they say. Whether the reactions had something to do with driving the kimberlite eruptions to the surface is an open question. [Image caption: The diamonds most useful to geochemists are the least commercially valuable, containing chemical impurities. This one, from northern Canada, contains inclusions of coesite (a form of quartz) and tiny bubbles of fluid. The rough outer coating probably also contains items of interest. Credit: Yaakov Weiss] Among other things, the study may help open the way to reconsidering the source of carbon for diamonds. As far as anyone can tell so far, most of the carbon seems to come from the depths of the mantle. But in recent years evidence has been building that at least some of it was once on the surface, and was shoved down by subducting tectonic plates like the ones Weiss proposes. A recent study by Lamont geochemist Peter Kelemen argues that the carbon can come from either the surface or the deep earth, though very little from either source gets turned into diamond. Weiss's current study does not examine this question. Are there more deposits to be found? Since the 1800s, scattered single diamonds have been found in many U.S. states and Canadian provinces, but almost none can be traced back to kimberlite sources. Some kimberlites have been uncovered, but most don't contain diamonds. One small mine operated in rural Arkansas in the early 1900s was quickly worked out; it is now a state park, where amateur diggers occasionally still find diamonds. Diamondiferous kimberlite was found in Colorado in the 1970s, but it was too poor for mining. The processes described in the Northwest Territories might have taken place elsewhere, but that remains to be seen. "Now it's time to look at fluid inclusions from other places," says Weiss. "Maybe the same things are happening in other areas. Maybe not." Weiss continues to work on related questions. At any one time, he has about 100 diamonds used for research. They are generally fingernail-clipping-size chips from larger stones. He keeps them wrapped up in elaborately folded small papers, labeled with origin and other information. In addition to Canadian diamonds, he has stones from Zimbabwe, Guinea, South Africa, Siberia and Brazil. Most have been loans or gifts from friends or colleagues, though a few years back he paid about $700 to a dealer in his native Israel for a half-dozen 1.5-carat African stones.
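As a rough check on the depth figures quoted earlier (diamonds crystallizing roughly 140 to 250 kilometers down, in the zone where pressure and temperature are right), lithostatic pressure can be approximated as P = ρgh. The sketch below assumes a uniform average overburden density of about 3,300 kg/m³, a deliberate simplification (crust is lighter, deeper mantle is denser), and is meant only to show why those depths correspond to the multi-gigapascal pressures of the diamond stability field.

```python
# Back-of-the-envelope lithostatic pressure at diamond-forming depths.
# A uniform overburden density is an assumption; a real estimate would
# integrate a depth-dependent density profile.
RHO = 3300.0  # average rock density, kg/m^3 (assumed)
G = 9.81      # gravitational acceleration, m/s^2

def lithostatic_pressure_gpa(depth_km: float) -> float:
    """P = rho * g * h, converted from pascals to gigapascals."""
    return RHO * G * depth_km * 1e3 / 1e9

for depth_km in (140, 250, 700):  # depths quoted in the article
    print(f"{depth_km:>4} km  ->  ~{lithostatic_pressure_gpa(depth_km):.1f} GPa")
```

This gives roughly 4.5 GPa at 140 km and 8 GPa at 250 km, bracketing the pressures usually associated with lithospheric diamond formation, and over 20 GPa for the rare stones from around 700 km.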
Most of his investigations do not harm the diamonds; inclusions often can be analyzed by passing microscopic light beams or X-rays through them. However, in one new project aimed at diamonds from unusually deep regions, Weiss plans some destruction. To analyze isotopes of helium gas trapped within, he has to pulverize the diamonds to release the gas. (Diamond is the world's hardest substance, almost impossible to wear down–but a direct whack with a hammer will shatter one. Repeated beating turns it to something resembling fine granulated sugar.) "It seems crazy to crush diamonds, right?" he admits. "But it's the only way to get at that particular question." [Image caption: The Ekati diamond mine, on the tundra of Canada's Northwest Territories, source of some of Weiss's samples. The geochemist is interested in the origins of North American diamonds.] Last year, Weiss published another paper about tiny droplets of fluid found encased within African gem-grade diamonds. Such droplets are fairly common in boart diamonds–inferior specimens like the fibrous type–but not in gems. Many scientists contend that boart and gem-grade stones crystallize out of two different kinds of fluids. To test the idea, Weiss obtained two very rare single-crystal stones containing fluids–one from South Africa's Finsch diamond mine, and one from a river deposit in Kankan, Guinea. The gems' fluids turned out to be similar to those in boart–a challenge to conventional theory. Such research could have practical applications. For one, greater knowledge of trace elements in diamond inclusions could lead to chemical "fingerprints" that would tell where commercial gems originated. This would allow better enforcement of the Kimberley Process, the 2003 UN agreement to blacklist so-called "blood diamonds" from nations where mining is controlled by warlords or corrupt governments. The process currently depends on paperwork that can be easily faked. [Image caption: In the lab, with a mass spectrometer, used to analyze minute bubbles of gases trapped within diamonds.] Beyond this, "understanding diamond formation can tell us about the deep earth's carbon cycle, which we have very little knowledge about," says Weiss. This is the long-term movement of vast amounts of carbon from the atmosphere and surface down into earth's interior, via biological processes, chemical weathering, subduction of tectonic plates, and then back up again via large, more conventional volcanic eruptions. The cycle is thought to play key roles in controlling climate and biological evolution, and in unseen processes far below the surface. For all his expertise, Weiss has to admit: he has yet to visit a diamond mine. As a student in Israel, he considered collecting samples in conflict-ridden areas of West Africa, but his adviser discouraged him. "He wanted me to stay in the lab and stay alive–not get killed in the field," he says. The mines in northern Canada are safe, but hard to get to–far out on roadless tundra, accessible only by charter aircraft. "I'm still hoping, some day," he says. "Diamonds–they're a very nice stone. It would be fun some day to see where people are finding them." [Image caption: Most of the techniques used to analyze diamonds are harmless–but to get at gas compositions, the diamond has to be crushed. This is all that remains of one previously studied stone.] | Geochemist Yaakov Weiss studies diamonds, not the gem-grade kind, but the flawed and dirty-looking ones used for industry.
He believes that inclusions in these diamonds, which contain chemical impurities, hold the key to understanding the deep, inaccessible regions where diamonds come from. Weiss's research suggests that diamonds crystallize in the earth's mantle, 140–250 kilometers below the surface, and that kimberlites, small volcanic spouts that erupt through ancient continents, carry diamonds to the surface. His latest study, published in Nature, found that diamonds from the Ekati Mine in Canada's Northwest Territories contain tiny droplets of liquid that are rich in carbon and highly saline, similar to seawater. This suggests that the diamonds formed from fluids that reacted with solid continental rocks, and that the reactions may have driven the kimberlite eruptions to the surface. Weiss's research has implications for understanding the carbon cycle and the formation of diamonds, and could potentially lead to the development of chemical "fingerprints" to identify the origin of commercial diamonds.

Abstract

The infiltration of fluids into continental lithospheric mantle is a key mechanism for controlling abrupt changes in the chemical and physical properties of the lithospheric root 1, 2, as well as diamond formation 3, yet the origin and composition of the fluids involved are still poorly constrained. Such fluids are trapped within diamonds when they form 4, 5, 6, 7, and so diamonds provide a unique means of directly characterizing the fluids that percolate through the deep continental lithospheric mantle. Here we show a clear chemical evolutionary trend, identifying saline fluids as parental to silicic and carbonatitic deep mantle melts, in diamonds from the Northwest Territories, Canada. Fluid–rock interaction along with in situ melting cause compositional transitions as the saline fluids traverse mixed peridotite–eclogite lithosphere. Moreover, the chemistry of the parental saline fluids (especially their strontium isotopic compositions) and the timing of host diamond formation suggest that a subducting Mesozoic plate under western North America is the source of the fluids. Our results imply a strong association between subduction, mantle metasomatism and fluid-rich diamond formation, emphasizing the importance of subduction-derived fluids in affecting the composition of the deep lithospheric mantle.

Main

Ancient sections of continental lithospheric mantle (CLM) are characterized by multi-stage evolution, involving strong depletion and melt removal followed by variable degrees of ephemeral refertilization 1, 2. Refertilization, or enrichment, occurs by mantle metasomatism, whereby invading fluids or melts transport mobile components between different mantle reservoirs. This process plays a major part in shaping the mineralogical and geochemical variation in the CLM, as well as in determining its long-term stability, rheology and oxidation state 1, 8. While many mantle samples reflect the action of metasomatism, including mantle xenoliths and mineral inclusions in diamonds, the nature of the fluids involved can normally only be constrained indirectly from geochemical proxies or calculated using mineral/melt partition coefficients. Carbonatitic fluids or silicic melts have been proposed as the key metasomatic agents 9, 10. Direct samples of mantle metasomatic fluids are encased as microinclusions in fast-growing diamonds, known as 'fibrous diamonds' (Fig. 1a).
These high-density fluid (HDF) inclusions are shielded from late-stage alteration, encapsulating a unique chemical and physical record that can trace the sources of deep mantle fluids and constrain the processes that shape their nature. Their study has revealed that, along with carbonatitic and silicic melts, saline compositions with very high Cl, K, Na and H2O contents 5, 6 are involved in the metasomatic alteration affecting the deepest parts of the CLM.

Figure 1: Microinclusion compositions in fibrous diamonds from the Fox kimberlite, Ekati mine. (a) Photomicrograph of diamond E191 with the location of the microinclusions analysed by electron probe micro-analyser (EPMA). Filled symbols indicate HDFs; open symbols indicate olivine and orthopyroxene (OPX). (b) Composition of HDFs and micro-mineral inclusions associated with specific Fox diamonds, coded by colour. The global compositional range of HDFs (delineated by average compositions for individual diamonds) and the wide range of compositions shown by individual diamonds PAN4 (ref. 5) and ON-DVK-294 (ref. 6) from neighbouring central Slave kimberlites are also shown. Shaded arrows define the compositional evolution trajectories of HDFs due to fluid–rock interaction and melting in carbonated peridotite (taupe arrow) and hydrous eclogite (pink arrow) lithologies (see also Fig. 2). (c) Primitive-mantle (PM) normalized trace element and chondrite-normalized (CN) REE patterns of saline and silicic HDFs in fibrous diamonds from the Fox kimberlite. Full analyses and additional figures are in Supplementary Tables 1, 2, 3, 4 and Supplementary Fig. 1.

Diamond HDFs vary between four major compositional types: saline, silicic, and high-Mg and low-Mg carbonatitic 7 (Fig. 1b). A strong connection was established between high-Mg carbonatitic HDFs and a carbonated peridotite source, either lithospheric or asthenospheric in origin 7, 11, 12, while silicic and low-Mg carbonatitic HDFs have been related to hydrous eclogite (plus or minus carbonate) 7. The saline fluid endmember sampled by diamonds is more enigmatic, and its source in the deep lithosphere has remained ambiguous 5, 6, 7, 12, 13. Here we analysed 11 microinclusion-bearing fibrous diamonds from the Fox kimberlite, Ekati mine, Northwest Territories, Canada. They have either coated or cubic-like morphologies, contain nitrogen in A centres (pairs of nitrogen atoms replacing two adjacent carbon atoms) and encapsulate a variety of fluid compositions plus inclusions of their host rocks (Fig. 1a, b and Extended Data Figs 1, 2), revealing a strong association between fluid composition and mantle host lithology. The majority of diamonds (9 of 11) contain saline HDFs solely associated with peridotite, on the basis of their microinclusions of olivine, orthopyroxene, Cr-diopside and chromite. Silicic fluid compositions are related exclusively to eclogitic inclusions of omphacitic clinopyroxene. Both saline and silicic HDFs are enriched in incompatible elements (Fig. 1c); they have fractionated rare-earth element (REE) patterns with elevated Ba, U, Th and light REEs but depleted Nb, Ta and alkalis (K, Rb and Cs). However, the fractionated nature of these patterns, and the light-REE/medium-REE and Th/U ratios in particular, is more pronounced in saline HDFs than in silicic fluids, indicating different sources.
The most striking differences between the two HDF compositions are the positive Eu and Sr anomalies within saline fluids versus no Eu anomaly and negative Sr anomalies in the silicic fluids. Initial 87Sr/86Sr ((87Sr/86Sr)i) values in the saline HDFs are 0.7039–0.7090, compared with 0.7064 and 0.7111 in the two diamonds with silicic fluids. The physical and chemical characteristics of the fibrous diamonds and HDFs sampled by the Fox kimberlite are identical to those of previously studied fibrous diamonds from both the neighbouring (8 km northeast) Panda kimberlite at Ekati 5, 14 and kimberlites at the Diavik mine 6, 12 (30 km southeast). Combining these localities reveals that the vast majority of these fibrous diamonds (84%) trapped saline HDFs, which are strongly associated with peridotite hosts. In a single Diavik diamond, the HDFs change continuously from saline to high-Mg carbonatitic compositions from centre to edge (Figs 1b, 2 and Extended Data Fig. 3); olivine, chromite and Cr-diopside microinclusions in this diamond demonstrate its peridotitic association 6. A Panda diamond containing HDFs falling between saline and silicic compositions has omphacite microinclusions 5, providing strong evidence that silicic HDFs may evolve from saline fluids due to wall rock reaction with eclogite (Figs 1b and 2). An absence of included minerals prevented paragenetic typing in only one diamond, from Diavik, containing silicic to low-Mg carbonatitic HDFs (Fig. 2). However, the continuous global compositional array between silicic and low-Mg carbonatitic HDFs, and the similarity of these fluids to the products of low-degree partial melting experiments of carbonated eclogite, suggest a strong genetic link to eclogite 7. The relative abundance of HDF endmembers in fibrous diamonds, the compositional relationships between the HDFs and their co-existing mineral microinclusions, plus the observed evolutionary trends from saline HDFs to other compositional types (Fig. 2), provide a means of tying the various metasomatic fluids to a common parental saline fluid endmember. A key issue is then the ultimate origin of the saline HDFs.

Figure 2: MgO and SiO2 versus Cl content of HDF microinclusions in fibrous diamonds from the central Slave craton. The complete data set shows clear evolution trajectories (shaded arrows) from a parental saline fluid to high-Mg carbonatitic and silicic compositions, formed due to wall rock reaction and local melting induced in peridotite and eclogite, respectively. If carbonate is present in the eclogite, increasing reaction could lead to the formation of low-Mg carbonatitic HDFs (dashed black arrow); however, this trend has not yet been constrained by mineral microinclusion paragenesis. Data points for saline HDFs are from this study and refs 5, 6; silicic HDFs from this study; saline to silicic from ref. 5; saline to high-Mg carbonatitic and silicic to low-Mg carbonatitic compositions from ref. 6. Calculated compositions are assumed to be free of H2O and CO2 and are in weight per cent (wt%).

Positive Eu anomalies in low-pressure Cl−-rich hydrothermal fluids are typically interpreted to result from plagioclase control during fluid–rock interaction at high temperatures 15. Such signatures can also originate from the strong aqueous complexes formed between dissolved Cl− and Eu2+, compared to other REE ions 16.
The lack of clear correlation between Cl content and the size of the Eu anomaly in global saline HDFs precludes a simple fluid–rock interaction process being the sole driver for generating the positive Eu anomalies. In Ekati and Diavik diamonds, the pronounced Eu anomalies of saline HDFs are associated with positive Sr anomalies (Figs 1c and 3a), mimicking the plagioclase accumulation signature of both oceanic and ophiolitic gabbros 17, 18. This correlation suggests a low-pressure crustal origin for the saline HDF elemental signature, acquired through the prograde metamorphic reaction of plagioclase to garnet during eclogite formation. Exactly how the saline chemistry of these fluids develops within subducting oceanic crust is not yet clear. One possibility is that they were originally pore fluids trapped in the crust during low-pressure hydrothermal alteration by sea water. Their initial salinity and K/Na ratios evolve as H2O and Na are consumed during spilitization, producing hydrated basalt. The highly saline nature of these new solutions potentially prevents dehydration during shallow subduction, allowing the formation of stable Cl−-rich phengite at high pressure and temperature 19, 20. If dehydration should occur, residual chlorides with high K/Na ratios can be subducted, and water originating from dehydration of underlying serpentinized peridotite at 150–200 km depth 19 can regenerate highly saline fluids at depth.

Figure 3: Trace-element ratios and Sr isotopic signature in HDFs from the central Slave craton. (a) The relationship between Eu* (expressed here as Eu/Sm) and Sr* (Sr/√(Pr × Nd)) anomalies in the saline HDFs constrains the subducted endmember, least influenced by interaction with lithosphere wall rock. (b) Eu* versus La/Pr ratios. (c) Eu* versus (87Sr/86Sr)i (corrected to the kimberlite eruption age of 55 Ma). The positive trend formed by saline fluids varies between the Sr isotopic signature of sea water 150–200 Ma (ref. 26) and 87Sr/86Sr measured in megacrystalline clinopyroxene (CPX) from the Diavik CLM 21. Higher (87Sr/86Sr)i values in silicic HDFs are probably inherited from old phlogopite in the eclogitic lithology within the Slave CLM. HDF composition symbols as in Fig. 2; large symbols are data from the present study, where each colour represents an individual diamond; small symbols are data from refs 5, 6, 12 and 14.

Having established a possible evolutionary link between the spectrum of fluid compositions observed in Slave diamonds and subduction-related crustal protoliths, we can deduce the metasomatic history of the central Slave lithospheric root leading to fibrous diamond formation (Fig. 4). The inherited positive Eu and Sr anomalies in saline diamond HDFs suggest direct ingress of fluids into the lithosphere from a subducting slab closely underlying the continental root. We explain the Eu and Sr anomalies, light-REE/medium-REE enrichment levels and variably radiogenic 87Sr/86Sr in these fluids (Fig. 3a–c) as representing interaction with peridotite in the lithospheric root. This interaction altered the elemental chemistry of the invading saline fluids, flattening both the Sr and Eu anomalies and lowering the ratios of the most incompatible elements. The 87Sr/86Sr signature of the saline HDFs that experienced the most extensive fluid–rock interaction was buffered to local peridotite compositions, which were relatively low (∼0.704) (ref. 21).
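The anomaly notation used in Fig. 3 and the surrounding discussion can be written out explicitly. A minimal LaTeX restatement follows; the subscript N, denoting normalization to a chondritic or primitive-mantle reference, is our assumption based on standard practice, since the caption gives only the elemental ratios.

```latex
% Eu and Sr anomaly measures as defined in the Fig. 3 caption
% (normalization subscript N is assumed, not stated in the caption)
\mathrm{Eu}^{*} = \left(\frac{\mathrm{Eu}}{\mathrm{Sm}}\right)_{N}, \qquad
\mathrm{Sr}^{*} = \frac{\mathrm{Sr}_{N}}{\sqrt{\mathrm{Pr}_{N}\,\mathrm{Nd}_{N}}}
```

Values above 1 indicate an excess of Eu or Sr relative to neighbouring elements, the plagioclase-derived signature discussed above.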
Fluid–rock interaction is also reflected by an increase of SiO2, MgO and CaO and a decrease of Cl and K in the saline HDFs (Figs 1b and 2). The refractory nature of cratonic peridotite dictated that partial melting during saline fluid infiltration occurred only when carbonate metasomes (that is, magnesite) were intersected, leading to the formation of high-Mg carbonatitic HDFs with low 87Sr/86Sr (ref. 12). Infiltration of saline fluids into eclogite hosts is tracked by the compositional variation in eclogite-related HDFs, from saline to highly silicic (Figs 1b and 2), leading to the formation of in situ silicic melts. Both of the daughter high-Mg carbonatitic and silicic melts could then crystallize metasomatic phases in their host rocks, such as the Cl−-rich phlogopite and apatite documented in an eclogitic xenolith from Diavik 22. Overall, the Slave CLM was enriched in K, Cl, Ba and incompatible trace elements by the invading saline HDFs, while the oxidation gradient between the evolving fluids and the local lithosphere initiated ephemeral redox processes leading to diamond formation 23.

Figure 4: Schematic illustrating the evolution of saline fluids with increasing fluid–rock interaction as they traverse cratonic mantle lithosphere. The discovery of fluids carrying strong oceanic protolith geochemical signatures (that is, positive Eu and Sr anomalies and the 87Sr/86Sr signature of Mesozoic sea water) in continental lithospheric diamonds suggests that the Slave CLM directly overlay the subducting slab at the time of Mesozoic metasomatism. Numbers refer to stages in the compositional evolution depicted in the inset MgO–Cl and SiO2–Cl trends. When the parental saline fluids (1) ingress into the CLM, they react and their oceanic signature is diluted. The melt-depleted nature of cratonic lithospheric peridotite prevents notable melting unless the saline fluids traverse either carbonated-peridotite or eclogite lenses, leading to in situ formation of high-Mg carbonatitic (2) and silicic (3) melts, respectively. The possible presence of carbonate in eclogite may lead to the formation of low-Mg carbonatitic fluids with increasing melting (see Fig. 2 and Extended Data Fig. 5 for data). Rapid diamond formation occurs due to the oxidation gradient between the evolving fluids and local lithosphere, either as new fibrous diamonds or as fibrous coats on previously formed older octahedral diamonds.

A remaining issue is the timing of the fluid metasomatism and the nature of the event that triggered the process. The short mantle residence time of fibrous diamonds in the central Slave CLM (<200 million years; Extended Data Fig. 4), indicated by their low-aggregated nitrogen impurities, translates into young formation ages for both the diamonds and their HDFs. Active subduction zones were a key feature of the complex tectonic setting of western North America and the high Arctic during the Mesozoic era 24, providing several options for the fluid source in the ideal time window to allow saline HDF generation, diamond formation and eruption of the diamonds in Eocene epoch kimberlites. The low-angle subduction that has been suggested for some of these plates, such as the Farallon slab 25, provides an opportunity for the direct transfer of slab-derived fluids into the base of the cratonic lithosphere.
The most pristine saline HDFs, those that interacted least with the lithospheric root, have Sr isotopic signatures corresponding with early Jurassic period seawater 87Sr/86Sr values 26, strengthening the temporal connection between subduction and metasomatism (Fig. 3c). The lithosphere beneath western North America was extensively hydrated by shallow subduction 27, 28, and mantle xenoliths from the Wyoming craton provide direct evidence for chlorine enrichment 29, 30. Saline HDFs trapped in fibrous diamonds from the central Slave craton are a deeper manifestation of this lithospheric hydration process, expressed as young diamond formation in the CLM root. The full spectrum of HDF compositional varieties (saline, silicic and carbonatitic) is present in fibrous diamonds from various cratonic roots 6, 7, 12, 13, including compositions intermediate between saline and silicic found in an eclogitic zoned diamond from Guinea 7 (Extended Data Fig. 5). We suggest that deep mantle saline fluids are directly related to subduction events: they are key metasomatic agents from which the whole spectrum of diamond-forming fluids evolves, and they play a major part in shaping the composition of the deep lithospheric mantle globally.

Methods

Samples and methods. A suite of eleven diamonds from the Ekati mine, Slave Craton, Canada, was selected for EPMA, Fourier-transform infrared (FTIR) and off-line laser ablation inductively coupled plasma mass spectrometry (ICP-MS) analyses. The diamonds have a large range in size, with weights varying between 3 and 83 mg. Each diamond was laser-cut and polished on both sides to create a thin slab that permits the transmittance of light. It was then cleaned ultrasonically in 60% HF and 69% HNO3 for 2 h and washed with ethanol and distilled water before analysis.

FTIR. Analyses were performed using a Bruker IRscope II microscope coupled to a Nicolet 740 FTIR spectrometer (Globar source, KBr beamsplitter, MCT detector, He–Ne laser). Spectra were taken in the range of 550–4,000 cm−1 with a resolution of 4 cm−1. Nitrogen concentrations and aggregation states were determined using a computer program supplied by D. Fisher and the absorption coefficients of A centres (double substitution of carbon by two nitrogen atoms, type IaA spectrum), B centres (clusters of four nitrogen atoms substituting five carbon atoms, type IaB spectrum) and C centres (single nitrogen replacing a carbon atom, type Ib spectrum) 31, 32, 33, 34. After baseline correction and subtraction of the diamond bands, the concentrations of water and carbonate were determined using the maximum absorbance of water and carbonate and their absorption coefficients 35. These concentrations were used to calculate the carbonate mole fraction (CMF = carbonate/(water + carbonate) molar ratio) of the trapped fluids (Supplementary Table 1).

EPMA. The major element compositions of the microinclusions were determined using a JEOL JXA 8600 EPMA equipped with a Pioneer-Norvar EDS (133 eV) detector. Backscattered electron imaging was used to detect shallow, subsurface microinclusions (<2 µm depth). Each inclusion was analysed for 100 s using an acceleration voltage of 15 kV and a beam current of 10 nA. The spectral data were reduced using the ZAF/PROZA correction procedure software supplied by Noran 36.
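As a concrete illustration of the CMF calculation described under FTIR above, the sketch below converts baseline-corrected peak absorbances into relative molar amounts and then into a carbonate mole fraction. The absorption coefficients, band assignments and absorbance values are illustrative placeholders only, not the published calibration (ref. 35).

```python
# Sketch: carbonate mole fraction (CMF) from FTIR peak absorbances.
# Illustrative only; the epsilon values below are placeholders, not the
# calibration used in the study (ref. 35 in the text).

EPS_WATER = 1.0       # assumed effective coefficient, ~3,420 cm^-1 water band
EPS_CARBONATE = 1.5   # assumed effective coefficient, ~1,430 cm^-1 carbonate band

def carbonate_mole_fraction(a_water: float, a_carbonate: float) -> float:
    """CMF = carbonate / (water + carbonate), by moles.

    Beer-Lambert: concentration is proportional to absorbance / epsilon;
    the common path length through the diamond slab cancels in the ratio.
    """
    n_water = a_water / EPS_WATER
    n_carbonate = a_carbonate / EPS_CARBONATE
    return n_carbonate / (n_water + n_carbonate)

# Example with made-up baseline-corrected absorbances:
print(f"CMF = {carbonate_mole_fraction(0.42, 0.17):.2f}")
```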
The total amount of oxides and Cl in each analysis varied between 1 and 12.4 wt%, with an average of 3.3 wt%, for all 327 analysed HDF microinclusions, and between 1.8 and 78 wt%, with an average of 11 wt%, for the 68 analysed mineral microinclusions. Precision (2σ (%) = 2 × 1/oxide in wt%) is <20% for oxide concentrations of 0.05 wt%, <10% for 0.25 wt%, <6% for 0.5 wt% and <2% for 1 wt% (M. Jablon and O. Navon, unpublished data). The low and variable sums reflect the small size of the inclusions, their depth and their high content of undetected water and carbonates. The ZAF/PROZA processing assumed that the difference from 100 wt% is composed of pure carbon. All oxide and chlorine concentrations were then normalized to 100 wt% on a carbon-free and volatiles-free basis (where Cl is present, excess calculated oxygen leads to a normalized total of more than 100%), and the average composition of the HDFs in each diamond was calculated.

Offline laser ablation. Diamonds were ablated in a custom-designed, sealed PTFE ablation cell capped with a laser window that had been previously cleaned with UpA 6 N HCl and 2 N HNO3. Ablations were performed with a UV-213 New-Wave laser ablation system, with the custom cell replacing that provided by the manufacturer. A pre-weighed diamond was brought into focus and an ablation was performed using a raster pattern. Ablation conditions were: scan speed 50 μm s−1; raster spacing 80 μm; energy output 5–6 J cm−2; repetition rate 20 Hz; spot size 160 μm; and pass depth 2 μm. Ablation time varied from 3–5 h. After ablation, the laser cell was opened in an ultraclean environment and all ablated material was collected in UpA 6 N HCl before being dried down for further chemistry. The diamond was rinsed in MQ water and dried. Diamonds were re-weighed and the weight loss (0.32–0.71 mg) resulting from the ablation was calculated. Weighing uncertainty is ±0.0007 mg, estimated from 100 repeat weighs of both a gem-quality and a fibrous diamond. The dried ablation product was taken up in 2 N HNO3. A 20% aliquot was taken by volume for trace element analysis. The remaining sample was processed for Sr isotopic analysis. The Sr separation procedure is based on the method described previously 37, using Sr-spec resin but with modifications as outlined 38 for sub-ng samples.

Quantifiable data and background corrections. We use the limit of quantification (LOQ), as defined previously 39, as a measure of our ability to quantitatively measure elemental abundances, because this parameter is considerably more robust than the 'limit of detection' (LOD), which merely defines the ability to qualitatively detect an analyte. The LOQ for a procedure with a well-characterized blank is defined 39 as LOQ = 10σ, where σ is the standard deviation of the blank for the process (here defined as the total procedural blank (TPB)). This approach places clear limits on our ability to quantitatively report concentration data in the diamonds studied. We used a data set of 20 TPBs, performed using the same ablation cells and reagents as used for samples, to determine the LOQ for trace element abundances. Within each batch of samples, between five and ten additional TPBs were also run to monitor whether our LOQ estimate was applicable from one batch of samples to another. Any analyte below the LOQ is flagged in the data and not used on a concentration plot.
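The LOQ screening step lends itself to a small worked example. The sketch below computes LOQ = 10σ from a set of total procedural blanks and flags analytes that fall below it; the blank values are invented for illustration and are not the study's measured TPBs.

```python
# Sketch: LOQ = 10 * sigma of the total procedural blank (TPB),
# following the definition cited in the text (ref. 39).
# Blank values below are invented for illustration.
import statistics

tpb_sr_pg = [4.1, 5.6, 4.8, 6.2, 5.0, 4.4, 5.9, 5.3]  # hypothetical Sr blanks (pg)

sigma = statistics.stdev(tpb_sr_pg)
loq = 10 * sigma

def quantifiable(measured_pg: float) -> bool:
    """Keep an analyte only if it meets or exceeds the LOQ."""
    return measured_pg >= loq

print(f"sigma = {sigma:.2f} pg, LOQ = {loq:.1f} pg")
print(quantifiable(25.0), quantifiable(3.0))  # True, False for these blanks
```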
By this definition 39, data can be considered quantitative only if the analyte exceeds 10σ of the blank; hence the analyte/blank ratio is a critical parameter to measure. The total amount of analyte, and hence the analyte/background ratio, is simply a function of the length of the ablation, with the ratio increasing with time.

Multi-element ICP-MS. TPBs and aliquot sample solutions were analysed for trace element concentrations on the Thermo-Electron Element II ICP-MS at Durham University. Each sample aliquot was made up to 500 μl with 3% HNO3. Instrumental conditions were similar to those described previously 40. Solution concentrations were measured against 9-point calibration lines constructed from appropriately dilute solutions of the international standards AGV-1, BHVO-1 and W-2. All concentrations were corrected for instrument drift using a 115In internal spike. Oxide correction coefficients were determined by running standard solutions of Ba, La, Ce, Pr, Nd, Sm, Gd and Tb at the beginning of each analytical session to correct for daily changes in the oxide production rate. All trace element concentrations were normalized to the diamond weight loss during ablation.

Thermal ionization mass spectrometry. With each batch of samples processed for isotopic analysis, between five and ten TPBs were carried out to determine the average size of the blank contribution and its effect on the isotopic composition of the sample. During the course of this study, Sr blanks averaged 5 pg (n = 12). A Sr isotope blank correction was performed using a measured blank isotopic composition based on combining the equivalent of over 60 TPBs to yield sufficient Sr (∼500 pg) for a precise and accurate thermal ionization mass spectrometry (TIMS) analysis. The average 87Sr/86Sr composition of the laboratory blank during the course of this work was 0.710853 ± 0.000194, and all Sr samples were blank-corrected based on this value with the average blank set at 5 pg. Sr samples were loaded using procedures described in detail previously 37, 40, employing a purified TaF5 activator. Sr isotope ratios were measured on a ThermoFisher Triton TIMS at Durham University. Sr isotope measurements were carried out using a static multi-collection routine. Each sample measurement achieved between 50 and 300 ratios with an integration time of 4 s per ratio; total analysis time was approximately 3–20 min. Mass fractionation was corrected using an exponential law and an 86Sr/88Sr ratio of 0.1194. Multiple loads (n = 43) of NBS987 of between 0.5 and 3 ng gave an average value of 0.710260 ± 0.00002 (2 standard deviations; n = 43), which compares well to the long-term values reported from the Durham University laboratory for similar-sized standards 37, 38, 40, 41. As the Durham laboratory reports Sr data relative to an 87Sr/86Sr ratio of 0.710240, no additional normalization was performed. Average 88Sr signal sizes for the 0.5 ng and 3 ng standards were 0.8 ± 0.4 V and 5 ± 1.3 V, respectively. Signal sizes for samples were on average 0.2 ± 1 V. We have previously documented in detail the levels of accuracy and repeatability for samples and standards at these low signal intensities 38. There is no systematic relationship between analyte size and Sr isotope composition after blank correction. Hence we conclude that our blank correction procedures adequately correct for our systematic TPB.
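To make the two corrections described above concrete, here is a minimal sketch of (i) the exponential-law mass-bias correction anchored to 86Sr/88Sr = 0.1194 and (ii) a simple mass-weighted blank subtraction using the stated 5 pg blank with 87Sr/86Sr = 0.710853. The function names and example ratios are illustrative; the study's exact error propagation is not reproduced here.

```python
# Sketch: exponential-law mass-bias correction and a simple blank
# subtraction for 87Sr/86Sr. Input raw ratios below are made up.
import math

# Isotope masses (u) and the canonical 86Sr/88Sr used for normalization.
M86, M87, M88 = 85.9092607, 86.9088775, 87.9056125
R86_88_TRUE = 0.1194

def exp_law_correct(r87_86_meas: float, r86_88_meas: float) -> float:
    """Exponential-law mass-bias correction for 87Sr/86Sr."""
    beta = math.log(R86_88_TRUE / r86_88_meas) / math.log(M86 / M88)
    return r87_86_meas * (M87 / M86) ** beta

def blank_correct(r_meas: float, sr_total_pg: float,
                  blank_pg: float = 5.0, r_blank: float = 0.710853) -> float:
    """Approximate blank subtraction, weighting by total Sr mass.

    Treats the measured ratio as a mass-weighted mixture of sample and blank,
    an approximation to weighting by 86Sr abundance.
    """
    sr_sample = sr_total_pg - blank_pg
    return (r_meas * sr_total_pg - r_blank * blank_pg) / sr_sample

r_corr = exp_law_correct(0.712100, 0.1190)   # made-up raw ratios
print(blank_correct(r_corr, sr_total_pg=120.0))
```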
Uncertainties in the magnitude and isotopic composition of the blank are incorporated into the reported errors on isotopic compositions at the 2σ level. Previous experiments 38 indicate that, for blanks of ∼5 pg, it is possible to make accurate blank corrections to samples containing as little as 20 pg; that level was therefore used as a cut-off for accepting data in this study, because similar levels of blank reproducibility were achieved.

Source: SciNews (Earth). Citations: "Highly saline fluids from a subducting slab as the source for fluid-rich diamonds." Nature 524, 339–342 (20 August 2015), DOI: 10.1038/nature14857; "High-density fluids and the growth of monocrystalline diamonds," Geochimica et Cosmochimica Acta 141, 145–159 (15 September 2014), ISSN 0016-7037, dx.doi.org/10.1016/j.gca.2014.05.050. Journal information: Nature, Geochimica et Cosmochimica Acta. Paper URL: http://dx.doi.org/10.1038/nature14857. News URL: https://phys.org/news/2015-08-diamonds-messages-deep-earth.html
10.1038/nm.4063
Giving antibodies to infant macaques exposed to an HIV-like virus could clear infection

Scientists at the Oregon National Primate Research Center today revealed that infant rhesus macaques treated with antibodies within 24 hours of being exposed to SHIV, a chimeric simian virus that bears the HIV envelope protein, were completely cleared of the virus. The study, published today in Nature Medicine, shows that antibodies given after a baby macaque has already been exposed to SHIV can clear the virus, a significant development in the HIV scientific community. SHIV-infected nonhuman primates can transmit SHIV to their offspring through milk feeding, just as humans can transmit HIV from mother to child through breastfeeding and during childbirth (and only rarely during pregnancy). In humans, a combination of measures for mothers and infants, including antiretroviral therapy (ART), Cesarean section delivery and formula feeding (rather than breastfeeding), has decreased the rate of mother-to-child HIV transmission from 25 percent to less than 2 percent since 1994. Despite this decrease, approximately 200,000 children are infected with HIV each year worldwide, primarily in developing countries where ART is not readily available. "We knew going into this study that HIV infection spreads very quickly in human infants during mother-to-child transmission," said Nancy L. Haigwood, Ph.D., senior author of the paper, and director and senior scientist, Oregon National Primate Research Center at Oregon Health & Science University. "So we knew that we had to treat the infant rhesus macaques quickly, but we were not convinced an antibody treatment could completely clear the virus after exposure. We were delighted to see this result." Haigwood and colleagues administered the anti-HIV-1 human neutralizing monoclonal antibodies (NmAb) subcutaneously on days 1, 4, 7 and 10 after the macaques were exposed to SHIV orally. The SHIV virus was found in multiple body tissues on day 1 in macaques without antibody treatment. Conversely, they observed an immediate impact of a single dose of antibodies at the start of the infection, with a significant difference between treated and non-treated macaques. Early short-term administration of powerful antibodies effectively cleared the virus by day 14, with no virus detected at this time. Using highly sensitive methods, they did not detect the virus in any part of the body in 100 percent of the antibody-treated infant macaques for at least six months. Typically, HIV infection rapidly expands and spreads in humans to local draining lymph nodes before disseminating throughout the entire body within one week of infection. This study showed that, at least in this model system of oral SHIV exposure in newborn macaques, virus replication is detected in lymphatic tissues 24 hours after exposure and is not locally restricted, as had been suggested previously for humans on the basis of the 5- to 7-day delay before detection in the blood. The study showed that: 1) antibodies delivered subcutaneously are swiftly distributed to blood and tissues and maintain neutralizing activity at various sites; and 2) antibodies are effective at clearing the virus, a different mechanism than that of ART, which is a combination of several antiretroviral medicines used to slow the rate at which HIV makes copies of itself in the body.
"Other nonhuman primate studies with antiretroviral therapy suggest that treatment as early as three days after infection is too late to prevent establishment of the HIV reservoir," said Jonah B. Sacha, Ph.D., study co-author and assistant scientist, Oregon National Primate Research Center at OHSU. "So using antibodies to clear the virus after infants have already been exposed could save thousands of lives" if the approach works in human infants. The researchers noted that treating human babies with ART during the last month of gestation, the few days after delivery, and during breastfeeding timeframes, is recommended. However, risks remain, including toxicities associated with long-term ART use, the development of drug-resistant viral variants, and lack of access to prenatal care prior to delivery. This discovery indicates that using new methods, such as antibodies, to limit infection after exposure in newborns could be advantageous. The study authors acknowledge that several relevant questions remain unanswered for treatment of HIV-infected newborns and children born to HIV-positive mothers. These include practical and cultural issues of treating breastfeeding mothers and babies, if the antibodies will work in human infants exposed to HIV, as well as what the optimal antibody formulations will be. Clinical trials in which HIV-exposed newborns are treated with antibodies have begun in the U.S. and South Africa, following a phase I clinical trial in HIV-negative adults that showed the antibodies to be safe and well-tolerated in these individuals. The authors' findings help define the window of opportunity for effective treatment after exposure to HIV during birth. If these primate model results can be applied to human beings in a clinical setting, researchers are hopeful that treating infants who have already been exposed to HIV within 24 hours may provide protection from viral infection, even in the absence of ART. | Scientists at the Oregon National Primate Research Center have made a significant breakthrough in the fight against HIV, discovering that infant rhesus macaques treated with antibodies within 24 hours of being exposed to SHIV, a chimeric simian virus, were completely cleared of the virus. The study, published in Nature Medicine, shows that antibodies given after a baby macaque has already been exposed to SHIV can clear the virus, a development that could potentially save thousands of lives. The researchers administered anti-HIV-1 human neutralizing monoclonal antibodies to the macaques and found that a single dose of antibodies at the start of the infection effectively cleared the virus by day 14, with no virus detected in any part of the body for at least six months. The study's findings suggest that treating human babies with antibodies within 24 hours of exposure to HIV could provide protection from viral infection, even in the absence of antiretroviral therapy (ART). | None | Abstract Prevention of mother-to-child transmission (MTCT) of HIV remains a major objective where antenatal care is not readily accessible. We tested HIV-1–specific human neutralizing monoclonal antibodies (NmAbs) as a post-exposure therapy in an infant macaque model for intrapartum MTCT. One-month-old rhesus macaques were inoculated orally with the simian-human immunodeficiency virus SHIV SF162P3 . On days 1, 4, 7 and 10 after virus exposure, we injected animals subcutaneously with NmAbs and quantified systemic distribution of NmAbs in multiple tissues within 24 h after antibody administration. 
Replicating virus was found in multiple tissues by day 1 in animals that were not treated. All NmAb-treated macaques were free of virus in blood and tissues at 6 months after exposure. We detected no anti-SHIV T cell responses in blood or tissues at necropsy, and no virus emerged after CD8+ T cell depletion. These results suggest that early passive immunotherapy can eliminate early viral foci and thereby prevent the establishment of viral reservoirs.

Main

Recent advances in the discovery of human HIV NmAbs that have high potency and breadth of coverage have rekindled an interest in their use as pre-exposure prophylaxis, as well as therapeutic agents, including in the setting of MTCT, in which the time of exposure is known 1, 2. A combination of measures, including antiretroviral treatment (ART) of the mother and the infant, cesarean section and formula feeding, has diminished the rate of MTCT from 35% to less than 3% (ref. 3). Despite this reduction, HIV infects approximately 200,000 children yearly, primarily in places where ART is not available 4. Treatment of babies with ART during both the early peripartum and the breast-feeding timeframes is recommended 5, but risks remain, including the toxicities associated with long-term use and the development of drug-resistant viral variants 6. Therefore, discovering less toxic methods to limit transmission to newborns would be advantageous 2. In mucosal HIV and SIV transmission, the virus establishes a small founder population of infected cells after it has traversed the vaginal mucosal barrier 7. This localized infection rapidly expands and spreads to local draining lymph nodes (LN) before disseminating systemically by 1 week after exposure 8, 9. Similarly, in nonhuman primate (NHP) models of oral SIV exposure, the oral and esophageal mucosa and the tonsils are sites of early viral infection within 1 d post-exposure (d.p.i.), with rapid systemic dissemination, via the regional lymphatics, occurring within 1 week after exposure 10, 11. Because IgG from the circulation contributes substantially to the immunoglobulin pool in tissue and genital tract secretions, passively transferred neutralizing antibodies (NAbs) may have a protective effect by interacting with the virus at the mucosal level 12, thus preventing systemic spread. In adult NHP models of mucosal SHIV transmission, there is abundant evidence for protective prophylactic efficacy with passively transferred human NmAbs 13, 14, 15, 16, 17, 18. In vitro, NmAbs have been shown to block HIV infection of dendritic cells and subsequent transmission to T cells 19. Direct vaginal application of NAbs before challenge is protective in macaques 20, and in HIV-exposed but uninfected humans, mucosal IgA can block transcytosis in vitro 12. Vaccine-induced protection from vaginal challenge correlates with levels of glycoprotein 41 (gp41)-specific cervicovaginal IgA and IgG that have antiviral and transcytosis-blocking activities 21. However, the tissue localization and the kinetics of passively transferred antibodies are still not well defined 13, 22. There is evidence for an impact of NmAbs in lowering plasma virus levels in established infections in NHP models 23, 24, 25 and in humans 25, 26. In NHP models, post-exposure prophylaxis using cocktails of the first-generation human NmAbs b12, 2G12, 2F5 and 4E10 partially prevented oral SHIV infection in newborns 24.
A single dose combining the newer, more potent NmAbs VRC07-523 and PGT121, delivered 10 d after intravenous SHIV infection, suppressed acute viremia and limited seeding of viral reservoirs in adult macaques 27. We have shown that neutralizing polyclonal IgG purified from SIV- or SHIV-infected macaques, injected subcutaneously (s.c.), can effectively control viremia and accelerate B cell responses, resulting in reduced pathogenesis in SIV-infected adults 28 and in SHIV-infected newborn macaques 29, 30. We hypothesized that a cocktail of two potent and broadly cross-reactive NmAbs, VRC07-523 and PGT121, would slow the initial virus expansion and reduce the chance of rapid escape in infant macaques exposed to pathogenic SHIV. We show that combined doses as low as 10 mg per kg body weight (mg/kg) administered 24 h after exposure can intercept replicating viral foci established by day 1 and prevent orally administered virus from establishing permanent viral reservoirs.

Results

Titration and biodistribution of subcutaneously administered antibodies in macaques

We initially conducted studies to define the protective dose and kinetics of the CD4-binding site–directed NmAb VRC01 in blocking newborn macaques from oral SHIV SF162P3 infection after s.c. injection, and to determine the kinetics of passively transferred IgG in naive and infected macaques. First, we administered VRC01 to a total of seven male and female one-month-old macaques at 20 mg/kg (n = 2) or 5 mg/kg (n = 5) 24 h before SHIV exposure. We measured SHIV SF162P3 envelope–specific binding and neutralizing antibody kinetics in vivo. The time to maximal concentration in the plasma was 24 h, independent of dose, and the serum (plasma) half-life of VRC01 was 3.9–4.2 d (Supplementary Fig. 1). Neither of the two macaques injected with the 20 mg/kg dose became infected, and only one of the five macaques injected with the 5 mg/kg dose did. In this macaque, the magnitude and kinetics of virus in the plasma, termed the plasma virus load (PVL), were indistinguishable from those of control animals treated with IgG purified from naive macaques 30 (Supplementary Fig. 1). These data are consistent with results from passive protection studies using VRC01 in juvenile and adult macaques 18 and guided the therapeutic range we used for infant macaques. Next, in a separate study designed to determine whether the kinetics of passively transferred IgG is altered in the presence of viral antigen, we assessed the distribution of purified polyclonal Ig from SIV-infected macaques (referred to as SIVIG) in a total of four male and female macaques, and we compared SIVIG kinetics in SIV-infected macaques to that in naive macaques. SIVIG was rapidly distributed in the plasma and tissues of infected and naive animals (Supplementary Fig. 2). We used in situ hybridization to localize SIV in tissue samples collected at 24 h and at 2 weeks after oral challenge with SIV smE660. SIV was undetectable in 24-h tissue samples but was detectable after 2 weeks in tissues both adjacent to and distant from the site of challenge (Supplementary Fig. 3). Thus, IgG delivered subcutaneously is rapidly and widely distributed, and is unimpeded by viral antigen.
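The half-life figures quoted above are consistent with a simple log-linear (one-compartment) decay model; the sketch below shows how such an estimate can be obtained from serial post-peak plasma concentrations. The time points and concentrations are invented for illustration, and the study's actual pharmacokinetic fitting may have differed.

```python
# Sketch: antibody half-life from a log-linear fit to post-peak plasma
# concentrations, t_1/2 = ln(2) / k. Data points are invented.
import math

days = [1, 4, 7, 10, 14]                 # days after injection (post-peak)
conc = [100.0, 60.0, 35.0, 21.0, 10.5]   # hypothetical plasma NmAb, ug/ml

# Least-squares slope of ln(conc) versus time gives the decay constant k.
n = len(days)
mean_t = sum(days) / n
mean_y = sum(math.log(c) for c in conc) / n
num = sum((t - mean_t) * (math.log(c) - mean_y) for t, c in zip(days, conc))
den = sum((t - mean_t) ** 2 for t in days)
k = -num / den                           # positive for decaying concentrations

print(f"t_1/2 = {math.log(2) / k:.1f} days")  # ~4 days for this example
```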
NmAb cocktail immunotherapy in the presence of SHIV

We next assessed the effectiveness of HIV-1 NmAbs as post-exposure prophylaxis in one-month-old infant macaques inoculated orally with SHIV SF162P3. For in vivo therapy, we tested a cocktail of VRC07-523 and PGT121, two potent NmAbs that target different regions of the HIV-1 envelope and that have been shown to have additive effects in vitro 31. VRC07-523 is an engineered clonal relative of VRC01 that shows increased neutralization of most HIV strains and improved in vivo protection capabilities 32; we therefore used VRC07-523 instead of VRC01 for these therapeutic studies. PGT121 interacts with the variable regions and glycans of HIV-1 gp120 (refs. 33, 34) and protects adult macaques from mucosal challenge at very low plasma titers 35. Cocktails of PGT121 and VRC07-523 were prepared at total doses of 10 mg/kg (5 mg/kg of each antibody) and 40 mg/kg (20 mg/kg of each antibody) and delivered subcutaneously. We inoculated 20 one-month-old rhesus macaques orally with SHIV SF162P3 on day 0 and followed them for up to 28 weeks to assess virological, immunological and disease outcomes, with or without NmAb treatment starting on day 1. Pairs of animals were killed at days 1, 2 or 14 after exposure to monitor the development of SHIV SF162P3 infection in the blood and tissues of treated and untreated macaques (Table 1; groups 1–4). We delivered NmAbs on days 1, 4, 7 and 10 after SHIV exposure (Fig. 1a and Table 1; groups 4–6). SHIV SF162P3 infection by the oral route in one-month-old macaques results in reproducible, sustained PVL at >10^7 copies/ml of plasma and ∼10^4 copies per μg DNA for at least 24 weeks in all of the animals 30. To conserve animals, historical controls were used as a comparison group for the 24-week follow-up (Table 1; group 7).

Table 1: Experimental design for testing a human NmAb cocktail as a therapy.

Figure 1: NmAb cocktail dosing and kinetics in plasma. (a) Experimental design of the early NmAb therapy experiment. (b) ELISA assays using the recombinant proteins RSC3 (resurfaced stabilized core gp120 protein) 48 and ST0A9 (scaffold protein displaying the V1V2 region and PGT121 epitope) 18 for the specific detection of VRC07-523 and PGT121, respectively (5 mg/kg and 20 mg/kg of each NmAb, respectively) (top). VRC07-523 and PGT121 were combined in a 1:1 ratio by mass (μg/ml) to generate a cocktail for s.c. injection at doses of 10 mg/kg and 40 mg/kg (bottom). The NmAb cocktail was assayed by an SHIV SF162 gp140–specific ELISA (bottom left, 10 mg/kg cocktail; bottom right, 40 mg/kg cocktail). Data shown are NmAb concentrations in the plasma of 12 macaques. The concentrations were determined using nonlinear regression and the half-maximal effective concentration (EC50) of the NmAb cocktail or the individual NmAb, and were graphed in GraphPad Prism. Error bars indicate s.d. The individual NmAbs and the NmAb cocktail were used as standard curves. Pre-treatment plasma (day 0) was used as a negative control for the assay.

We evaluated the kinetics of the individual NmAbs and the cocktail in plasma from all 12 treated infants that were on the study for at least 2 weeks (Table 1; groups 4, 5 and 6). Peak NmAb cocktail concentrations in plasma occurred by 24 h after the s.c. injection in all animals at both NmAb doses. In the four macaques that received the 10 mg/kg dose, the average cocktail concentration during the first 2 weeks was 44 μg/ml, and in the eight macaques that received the 40 mg/kg dose, it was 113 μg/ml (Fig. 1b).
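The Figure 1 legend notes that plasma concentrations were interpolated from standard curves by nonlinear regression against an EC50. A common way to do this is a four-parameter logistic (4PL) fit; the sketch below is one such implementation with invented standard-curve data, not the GraphPad Prism procedure used in the study.

```python
# Sketch: 4PL standard-curve fit and back-calculation of an unknown
# plasma concentration from its ELISA signal. Data are invented.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """Four-parameter logistic response as a function of concentration x."""
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

std_conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])    # ug/ml
std_od = np.array([0.05, 0.09, 0.22, 0.55, 1.10, 1.60, 1.85])  # ELISA OD450

params, _ = curve_fit(four_pl, std_conc, std_od,
                      p0=[0.05, 2.0, 0.5, 1.0], maxfev=10000)
bottom, top, ec50, hill = params

def od_to_conc(od):
    """Invert the fitted 4PL curve for signals between bottom and top."""
    return ec50 / ((top - bottom) / (od - bottom) - 1.0) ** (1.0 / hill)

print(f"EC50 = {ec50:.3f} ug/ml; sample at OD 0.8 -> {od_to_conc(0.8):.2f} ug/ml")
```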
By using reagents designed to specifically detect each NmAb independently, we found that PGT121 concentrations in the plasma were consistently higher at both doses than those of VRC07-523. Multiple dosing prevented us from calculating the in vivo half-lives of each NmAb, but PGT121 was detectable in plasma for 2 weeks longer than VRC07-523 in several macaques. PGT121, administered at 5 mg/kg, was maintained for >20 weeks in the plasma from a single macaque, 33537 ( Fig. 1b , bottom left). An unusually slow antibody decay rate in the plasma of subjects that were passively infused with PGT121 has recently been reported, in which plasma concentrations of 5–20 μg/ml were still present after 10 weeks 27 . We assessed SHIV SF162P3 -neutralization activity in the plasma of all infant macaques and found that it decayed by 6–7 weeks in all of the animals except macaque 33537, in which declining neutralization of SHIV SF162P3 was detected at titers of 10 2 –10 3 before becoming undetectable at week 20 ( Supplementary Fig. 4 ). The average 50% inhibitory concentration in plasma (IC 50 ) of the NmAb cocktail during the first 2 weeks after SHIV SF162P3 exposure was 0.0134 and 0.0120 μg/ml in the 10 mg/kg and 40 mg/kg groups, respectively, which is close to the IC 50 (0.0128 μg/ml) obtained from purified NmAbs specific for SHIV SF162P3 in the TZM-bl standardized cell line that expresses luciferase in the presence of HIV or SIV Tat and that is used to quantify NAbs in vitro ( Supplementary Fig. 4 ). To measure transudation of NmAb into tissues and to assess neutralization potency within tissues and organs during the first 2 weeks, we extracted specimens from six macaques at different necropsy time points. Analysis of the antibody extracted from two macaques (group 4) that were sacrificed at 14 d after four doses of NmAbs showed that NmAbs were systemically distributed at concentrations of 48–700 ng/ml of tissue lysate ( Supplementary Table 1 ). Two macaques (group 2a) exposed to SHIV SF162P3 , treated once with an NmAb cocktail dose of 10 mg/kg 1 d later and sacrificed on day 2, had NmAbs in tissue lysates at concentrations up to 791 ng/ml. Two macaques (group 2b) treated once with 10 mg/kg, without SHIV exposure, had NmAbs in tissues at levels similar to those in macaques 34263 and 34290, which were pre-exposed to SHIV. We assessed the neutralizing activity to SHIV SF162P3 in tissue homogenates from ∼ 100 mg of necropsy samples from the animals sacrificed at 1, 2 and 14 d after exposure (groups 2 and 4). The 50% neutralization titers (ID 50 ) for SHIV SF162P3 of tissue lysates averaged ∼ 1:50 in the tested samples at 1 d after s.c. NmAb injections, increased to an average of ∼ 1:100 by day 2 and were 1:150 at day 14, with good agreement between the titers observed for macaques that were sacrificed at the same time points ( Table 2 ). However, NmAb titers in the colon and reproductive tract from macaque 34290 were about 3–5 times higher than those from macaque 34263, which was necropsied at the same time point. Tissue-associated IC 50 concentrations of the NmAb cocktail in the samples tested ranged from 0.5 to 10.0 ng/ml, which is similar to the IC 50 of the NmAb cocktail ( Supplementary Fig. 4 ). We conclude that the presence of SHIV during the first day after oral exposure did not affect NmAb distribution or levels in vivo and that the NmAb cocktail was rapidly distributed to tissues.
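Tissue ID 50 values such as the ∼1:50 to 1:150 titers above are read off a serial-dilution neutralization curve at the 50% crossing. A minimal interpolation sketch with made-up dilution data:

```python
import numpy as np

# Percent neutralization of SHIV SF162P3 in TZM-bl cells across a serial
# dilution of tissue lysate (hypothetical values).
dilutions = np.array([20.0, 60.0, 180.0, 540.0, 1620.0])   # 1:x
neut_pct = np.array([92.0, 71.0, 38.0, 15.0, 4.0])

# Interpolate the reciprocal titer at 50% neutralization on a log scale.
# np.interp needs increasing x, so sort by neutralization first.
log_d = np.log10(dilutions)
order = np.argsort(neut_pct)
id50 = 10 ** np.interp(50.0, neut_pct[order], log_d[order])
print(f"ID50 ~ 1:{id50:.0f}")
```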
Table 2 Neutralizing activity in tissue homogenates of infant rhesus macaques colocalizes with virus Full size table SHIV SF162P3 dissemination with and without NmAb therapy To determine the in vivo transudation kinetics of SHIV SF162P3 in blood and tissues early in infection with and without NmAb cocktail therapy after exposure, we quantified the amount of virus in blood from macaques killed on days 1, 2 or 14 after oral SHIV SF162P3 exposure ( Table 1 , groups 1–4). The 2-week time point was anticipated to be nearest to the time of peak PVL. Plasma viremia was detected by day 4, increased rapidly and peaked between 1 × 10 8 to 5 × 10 8 copies/ml in macaques that were not treated with NmAbs and necropsied at day 14 ( Fig. 2a ), consistent with results from the VRC01 study ( Supplementary Fig. 1 ) and prior studies 29 , 30 . In stark contrast, no virus was detected in plasma or peripheral blood mononuclear cells (PBMC) from NmAb-treated macaques killed at day 14 ( Fig. 2a,b ). Figure 2: Viral kinetics and tissue distribution during the first 2 weeks after oral SHIV exposure. SHIV SF162P3 viremia was quantified in eight male and female macaques that were either treated or untreated with NmAbs. ( a , b ) PVLs (as assessed by measurements of SIV viral RNA in blood using a qRT-PCR) ( a ) or CAVLs in PBMC (as assessed by qPCR) ( b ). ( c ) Anatomic locations of tissues collected at necropsy following oral inoculation. ( d – g ) Viral DNA in tissues of untreated macaques ( d , f ) or in macaques treated with 10 mg/kg ( e ) or 40 mg/kg ( g ) NmAb cocktails and killed at the indicated times, as detected by an ultrasensitive nested qPCR and RT-PCR 37 assay targeting a highly conserved region in SIV gag encoded in the SHIV. Each sample was assayed in 12 replicates (5 μg each). Virus copy numbers were derived from the frequency of positive replicates using the Poisson distribution and calculated as copies per μg of DNA or copies per 10 6 cell equivalents using the input nucleic acid mass and by assuming a DNA content of 6.5 μg per million cells. Infected tissues are colored to indicate virus amounts, quantified as SIV gag copies/μg of DNA, according to the scale shown at the bottom. Source data Full size image We collected multiple tissues from all of the macaques at necropsy ( Fig. 2c ), and in samples collected within 2 d of SHIV SF162P3 exposure, we measured low levels of SHIV SF162P3 DNA in mucosa and LN that were proximal and distal to the oral exposure site ( Fig. 2d,e ) in treated and untreated animals. In comparison to treated animals, the virus was widespread at day 14 in untreated animals ( Fig. 2f ) and peaked at >3,000 copies/μg of DNA throughout the LN and gut, consistent with levels of DNA in the tissues of adult and six-month-old Macaca nemestrina with high levels of plasma viremia 36 . As seen in the blood, following NmAb treatment on days 1, 4, 7 and 10 at 40 mg/kg, virus was not detectable in any tissue at day 14 ( Fig. 2g and Supplementary Table 2 ). To determine whether the viral DNA–positive tissues contained replicating SHIV, we measured viral RNA in several tissue samples taken from these same macaques that were sacrificed on 1, 2 or 14 d after virus exposure. Viral DNA and RNA levels, tested as blinded samples from these tissues, show that productive SHIV SF162P3 infection had begun in multiple tissues by day 1, increasing exponentially by day 14 ( Table 2 ). 
However, in two macaques that were treated with 10 mg/kg on day 1 (group 2a; sacrificed on day 2), no viral RNA was detected in the samples tested. Notably, the NmAbs were colocalized in these virus-positive tissues, suggesting the potential for antibody effects as early as 1 d after treatment. Moreover, the results suggest that virus present early after exposure can be intercepted and cleared by NmAbs present in the same tissues ( Table 2 ). Prevention of productive infection, viral rebound and pathogenesis To evaluate the effect of early short-term NmAb therapy on viral control, we monitored SHIV SF162P3 in blood, LN and tissues in animals followed for 24–28 weeks. PVL in the controls routinely peaked at 2 weeks post-infection (w.p.i.) and persisted at levels that ranged from 10 6 –10 8 copies/ml. In newborns that were treated with 10 mg/kg ( Fig. 3a ) or 40 mg/kg ( Fig. 3b ) NmAbs, there was no plasma viremia detected in any of the samples collected over the course of the study. A single time point in the 40 mg/kg group was positive for only one of two replicates, and additional material to retest this sample was not available. Longitudinal cell-associated viral loads (CAVL) in PBMC DNA were negative for each of the >300 samples tested from the ten macaques in groups 5 and 6 ( Fig. 3c,d ). In short, all of the NmAb-treated infants had undetectable PVL or CAVL in blood. Figure 3: SHIV SF162P3 -associated viremia is not established in plasma or PBMC of NmAb-treated infants. ( a – d ) Quantification of virus in blood ( a , b ) and in peripheral blood cells ( c , d ) in both NmAb dosing groups of male and female infant rhesus macaques ( n = 10). Plasma viral loads were assessed by measurements of SIV viral RNA in blood using a qRT-PCR assay ( a , b ) and in PBMC by qPCR ( c , d ). CD8 + T cell–depletion study timeline is shown in red. Data shown in gray indicate mean levels of virus in plasma (±s.d.) from eight historical controls from an earlier study 18 , 30 . We also measured the levels of SHIV SF162P3 DNA in >300 homogenized tissue samples obtained at 24–28 w.p.i. and in inguinal LNs collected 12 w.p.i. from all ten macaques, using ultrasensitive qPCR 37 . Tissue samples from all SHIV-exposed infants that received the 2-week course of NmAb cocktail were tested as coded samples and were negative for virus in both dosage groups ( Supplementary Table 2 ). As discussed above, only very low levels of virus were detected in tissue specimens from the group 2a macaques 34263 and 34290, which were sacrificed at 2 d.p.i. after a single NmAb dose ( Fig. 2e ). As in blood, virus was widespread in tissues at 14 d.p.i. in group 3 control macaques that did not receive NmAbs ( Fig. 2f ). As early as 1 d.p.i., tissue-associated virus in mucosal tissue adjacent to the exposure site, the draining LN and the gut tissue was evident in macaques from group 1 (untreated controls) ( Fig. 2d and Supplementary Table 2 ). As compared to the untreated (group 1) macaques killed at 1 d.p.i., NmAb-treated (group 2a) macaques that were sacrificed at 2 d.p.i. had significantly lower amounts of tissue-associated virus ( P = 0.0061; Fig. 4 ). The discovery of traces of virus at 2 d.p.i. ( Fig. 2e ) and none at 14 d.p.i. ( Fig. 2g ) implies that NmAbs intercepted and neutralized SHIV SF162P3 replication, cleared infected cells and halted the spread of infection in these macaques and in all ten NmAb-treated infants that were followed for 6 months.
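The P = 0.0061 comparison above pairs the same anatomical sites between untreated (1 d.p.i.) and NmAb-treated (2 d.p.i.) animals; the Statistics section below notes it was run as a Wilcoxon signed-rank test in SAS 9.4. A sketch of the equivalent paired test in Python; the site-level viral loads here are synthetic, not the study data:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

# Average SIV DNA copies/ug at 17 matched anatomical sites (illustrative):
# untreated controls at 1 d.p.i. vs. NmAb-treated animals at 2 d.p.i.
untreated = rng.lognormal(mean=2.0, sigma=1.0, size=17)
treated = untreated * rng.uniform(0.05, 0.6, size=17)  # lower at most sites

stat, p = wilcoxon(untreated, treated)   # paired, non-parametric
print(f"W = {stat:.0f}, P = {p:.4f}")
```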
Consistent with these data, pathology results also showed the absence of organ or tissue pathology ( Supplementary Fig. 5 ). Figure 4: NmAb cocktail lowers tissue-associated viremia within 24 h after s.c. delivery. SHIV DNA quantified by ultrasensitive nested qPCR and RT-PCR 37 in each tissue sample shown from four control animals ( Table 1 , groups 1 and 2a) at either 1 d after SHIV exposure with no NmAb treatment or 1 d after s.c. injection with 10 mg/kg NmAb cocktail and 2 d after SHIV inoculation. P = 0.0061; by Wilcoxon-signed rank test (statistics performed in SAS 9.4 software). Source data Full size image No evidence for T cell immunity or viral rebound To evaluate T cell immunity in the SHIV-exposed macaques, we used intracellular cytokine staining (ICS) to measure PBMC, spleen and mesenteric LN responses specific for the SIV mac239 proteins Gag and Vif (present in the SHIV chimera) in the ten macaques that were studied for 6 months. No SHIV-specific T cell responses were detected in PBMC at week 20 or in tissue samples after necropsy ( Supplementary Fig. 6 ). To determine whether there were any reservoirs controlled by CD8 + T cell–mediated suppression, we depleted CD8 + cells to undetectable levels in the four animals in the 10 mg/kg group and monitored them for viremia for 4 weeks ( Supplementary Fig. 6 ). There was no evidence of virus rebound in the plasma of these animals during the CD8 + depletion phase ( Fig. 3a,c ), further supporting the concept that early passive NmAb therapy with the cocktail of VRC07-523 and PGT121 disrupted establishment of virus reservoirs, thereby preventing exposure to antigen and development of cellular immunity. Discussion Pre-exposure prophylaxis (PrEP) with ART is effective in limiting transmission in the setting of MTCT, as well as in healthy adults 38 . One of the major goals in treating HIV-1 infection is to discover methods that can clear the established viral reservoir 39 . To date, only a single case of a 'functional cure' has been documented following bone marrow transplantation 40 . Vaccine-induced, persistent T cell responses cannot prevent infection but can reduce established SIV reservoirs to undetectable levels in about half of vaccinated macaques 41 . NHP studies with ART suggest that treatment as early as 3 d.p.i. is too late to prevent the establishment of the reservoir, as the virus rebounded after cessation of drug treatment 42 . These data are consistent with the case of the 'Mississippi baby', in which ART therapy was started within 30 h of birth but did not prevent HIV infection 43 , 44 . Thus, the time for intervention is extremely limited, and ART alone may not be effective at eliminating a founder infection. Viral RNA has been detected in macaques as early as 1 d following vaginal exposure to SIV 45 , but there is a common view that HIV and SIV may have a short 'eclipse phase' of limited, localized viral replication that lasts a number of days, in which the spread of the founder infection is dependent upon target cell availability to spread to lymphatic tissues 45 . Our data show that, at least in this model system of oral SHIV exposure in infant macaques, virus replication is detected in lymphatic tissues at 24 h after infection and is not locally restricted. Here we found evidence of an immediate impact of a single dose of passively transferred NmAbs on seeding of the virus, with a significant difference in early tissue-associated viral RNA and DNA in treated versus nontreated infants. 
Early post-exposure, short-term administration of powerful NmAbs effectively cleared the virus in vivo by 14 d and prevented viral rebound after decay of the passive NmAbs. We present three lines of evidence that the ten macaques studied for >24 weeks were free of virus. First, all of the macaques failed to develop adaptive immune responses. Second, we showed that >300 coded tissue samples from these macaques were virus negative, using an ultrasensitive PCR methodology based on detection of the SIV gag gene. Third, we depleted the CD8 + T cells in the four lower-dose macaques (group 6) and observed no viral rebound. These experiments show that NmAbs delivered subcutaneously are swiftly distributed to blood and tissues and that they maintain neutralizing activity at distal sites. They further indicate that NmAbs are effective at clearing viral foci in blood and tissues during the earliest stages of HIV penetration of the tissues, a different mechanism from that of ART. We hypothesize that antibody-mediated effector functions, including cytotoxicity and phagocytosis, are required for killing infected cells that express HIV envelope gp160 on their surface 46 . If so, then NmAbs present for an extended period of time after exposure would have the capability to destroy infected cells and neutralize virus particles emanating from cells in which infection was established within the first few hours following an exposure event. In this setting, post-exposure therapy, using an NmAb cocktail administered 24 h after SHIV exposure and continuing for 2 weeks, resulted in the maintenance of high NmAb concentrations in vivo for at least 2 weeks. The importance of repeated dosing is not known, but in the case of breast-feeding mothers, there is continued opportunity for transmission of HIV-1. It will be important to understand whether repeated NmAb dosing in babies could expand the protective window. Several relevant questions remain unanswered for the treatment of HIV-infected newborns and children born to HIV-positive mothers, including the practical and cultural issues of treating breast-feeding mothers and babies, as well as a determination of optimal antibody cocktail formulations. Any future use of human NmAbs in the clinic will presumably require several antibodies, or engineered antibodies with multiple specificities, to avoid the potential for the emergence of viral escape mutants. Because ART has a short half-life and requires strict adherence to the drug regimen to be effective, supplementation with passive NmAbs with relatively long half-lives may widen the therapeutic window. Identifying human NmAbs that can impact infection in a macaque model for MTCT can provide a proof of principle for the value of using antibodies to augment ART. In fact, safety trials in HIV-exposed newborns for treatment with the NmAb VRC01 have begun in the USA and South Africa, following a safety trial in adults 47 . Our findings begin to define the window of opportunity for effective treatment after intrapartum exposure. If these results can be applied to clinical settings, then there is optimism that early passive immunotherapy may provide protection from HIV infection, even in the absence of ART. Methods Animal models and humane-care guidelines. The Oregon Health and Science University West Campus Institutional Animal Care and Use Committee approved all macaque studies. Studies were performed at the Oregon National Primate Research Center in Beaverton, Oregon, USA (ONPRC).
The ONPRC is accredited by the American Association for the Accreditation of Laboratory Animal Care International and adheres to the Guide for the Care and Use of Laboratory Animals 49 and the United States Public Health Service Policy on the Humane Care and Use of Laboratory Animals. The initial study with SIVIG used four M. mulatta (male and female) of varying ages that were obtained from the breeding colony. For the studies in one-month-old macaques, twenty-seven 7-d-old M. mulatta (rhesus macaques, male and female) were obtained from the breeding colony and raised for 3 weeks in the animal biosafety level (ABSL)-2 infant nursery. Time of birth determined animal allocation, so that animals were randomly assigned to the study groups as they accrued. Protection studies with VRC01 ( n = 7 infants) were pilot studies and were not designed for statistical analyses (all or none effects of virus acquisition). Group sizes of six had been previously shown to allow statistically distinguishable measurements in plasma and cell-associated virus loads at 6 months as the primary study outcome for antibody treatment. Serial sacrifice studies included groups of two animals each, and viral quantification analyses included ∼ 30 tissue samples per animal. Infants were excluded if the sire or dam could not be confirmed to be absent of M. mulatta B*08 and B*17 major histocompatibility complex (MHC) class I alleles. At 1 month of age, after adaptation to formula feeding, animals were transferred to ABSL-2+ containment for study procedures. Infants were paired with another macaque of the same age for nursery care and containment housing. In all studies, infants were monitored for clinical signs of disease. Clinical evaluation included measurement of peripheral LN size and body weight, as well as evaluation of appetite, attitude and stool quality. All animals were euthanized under IACUC guidelines and consistent with the recommendations of the American Veterinary Medical Association (AVMA) Guidelines for Euthanasia 50 . IgG and NmAb preparations. Normal IgG was purified from 1 liter of pooled plasma from simian retrovirus (SRV)-negative and SIV-negative adult rhesus macaques as previously described 30 . VRC01, VRC07-523 and PGT121 were expressed as IgG1 antibodies by transient transfection of Expi293F cells (ThermoFisher Scientific Inc.) and purified over protein A columns 32 , 51 . The V H and V L regions of PGT121 were synthesized based on the published sequence 33 . Purified polyclonal anti-SIV smE660 IgG (SIVIG) was pooled from two SIV smE660 -infected animals from a prior experiment 28 . All antibody preparations were delivered subcutaneously at multiple sites around the dorsal cervical and thoracic regions of the animals in the doses described in the text. Virus inoculations. Infant macaques were administered a 50% animal infectious dose (AID 50 ) ( ∼ 7 × 10 8 viral RNA copies) of a macaque cell–grown stock of SHIV SF162P3 (ref. 52 ), divided into two 1-ml oral doses given ∼ 15 min apart. AID 50 was determined in a titration experiment described previously 36 . Virus detection in plasma, PBMC and tissue homogenates. Nucleic acid from plasma, cell culture supernatant or PBMC was purified using a Maxwell 16 instrument (Promega, Madison, WI) according to the manufacturer's protocol, using the LEV Viral Nucleic Acid Kit and the LEV Whole-Blood Nucleic Acid Kit, respectively. SHIV viral loads in plasma and cell culture supernatant were determined by quantitative RT-PCR using the methods developed by Piatak et al. 53 , except for a slightly modified master mix to increase sample input per reaction. SHIV viral loads in PBMC DNA were determined by quantitative PCR using Fast Advanced Mastermix on an Applied Biosystems QuantStudio 6 Flex instrument (Life Technologies, Carlsbad, CA). Reactions were performed with 2 μg nucleic acid input for 45 cycles using the FAST cycling protocol (95 °C for 1 s, 60 °C for 20 s) in a 30-μl reaction volume. Virus copy numbers were estimated by comparison to a linearized pBSII-SIV gag standard curve and calculated per cell equivalent using the input nucleic acid mass and by assuming a DNA content of 6.5 μg per million cells. Primers and probe used for plasma and PBMC assays were those described by Piatak et al. 51 : SGAG21 forward (GTCTGCGTCATPTGGTGCATTC), SGAG22 reverse (CACTAGKTGTCTCTGCACTATPTGTTTTG), and pSGAG23 (5′-(FAM)-CTTCPTCAGTKTGTTTCACTTTCTCTTCTGCG-(BHQ1)-3′). For viral RNA and DNA reservoir detection in tissues, a recently developed ultrasensitive nested quantitative PCR and RT-PCR approach 37 targeting a highly conserved region in SIV and SHIV gag was used. Primers used for DNA pre-amplification were SIVnestF01 (GATTTGGATTAGCAGAAAGCCTGTTG) and SIVnestR01 (GTTGGTCTACTTGTTTTTGGCATAGTTTC). The reverse-transcription step in the RNA assay used the SIVnestR01 primer instead of random hexamers, in order to facilitate priming of specific target sequences. Primers used for quantitative PCR were SGAG21 forward, SGAG22 reverse, and pSGAG23 as described above. PCR reaction conditions for both rounds were as described with minor modifications 54 . Briefly, samples were heated at 95 °C for 5 min and then put on ice. Each sample was assayed in 12 replicates (5 μg each), with two of the reactions including a spike of 10 or 20 copies of DNA or RNA, respectively, containing the SIV gag target sequence in order to assess PCR reaction efficiency. None of the tested RNA and DNA samples showed significant amplification inhibition, which was defined as a 5-cycle amplification delay as compared to the amplification kinetics of reactions containing solely 10 copies of standard. First-round amplification involved 12 cycles (95 °C for 30 s and 60 °C for 1 min) in 50-μl reactions. Then, 5 μl of each pre-amplified replicate was assayed by quantitative PCR using Fast Advanced Mastermix in a 30-μl reaction volume in the QuantStudio 6 Flex instrument. Reactions were performed for 45 cycles using the FAST cycling protocol. Virus copy numbers were derived from the frequency of positive replicates using the Poisson distribution and calculated as copies per μg of DNA. Staff members performing the RNA and DNA assays were blinded to the plasma and tissue samples that were being tested for virus. Antibody detection in tissues and secretions. Tissue samples were sectioned and transferred into radio-immunoprecipitation assay (RIPA) buffer (PI89900, Thermo Fisher Scientific) with protease inhibitor cocktail (P8340, Sigma-Aldrich). Tissue disruption was accomplished with zirconia-silica beads (1.0 mm, Biospec Products) in a Beadbeater (Biospec Products) device with two cycles of 2-min intervals with brief incubations on ice between each cycle. Supernatants were aspirated and centrifuged for 5 min to pellet residual debris. Mucosal secretions were collected on Weck-cel spears and extracted as previously described 55 . Secretions were stored at −80 °C until assayed. Homogenates and secretions containing transudated antibody were used in ELISA and neutralization assays as described below.
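The Poisson step in the ultrasensitive assay above can be written out directly: with 12 replicates of 5 μg each, the fraction of negative replicates estimates e^(−λ), where λ is the mean copy number per replicate. A minimal sketch (the replicate counts are illustrative):

```python
import numpy as np

def copies_per_ug(n_positive, n_replicates=12, ug_per_replicate=5.0):
    """Single-hit Poisson estimate of template copies from the fraction of
    positive PCR replicates, as in the ultrasensitive nested qPCR assay."""
    frac_negative = (n_replicates - n_positive) / n_replicates
    if frac_negative == 0:
        raise ValueError("All replicates positive: beyond the Poisson range.")
    lam = -np.log(frac_negative)      # mean copies per replicate
    return lam / ug_per_replicate     # copies per ug of input DNA

copies_ug = copies_per_ug(n_positive=4)
# Convert to copies per million cell equivalents (6.5 ug DNA per 1e6 cells).
print(copies_ug, "copies/ug =", copies_ug * 6.5, "copies per 1e6 cells")
```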
CD8 + T cell depletion and staining. Four animals were given the CD8-α–depleting antibody M-T807R1 (US NIH, Nonhuman Primate Reagent Resource). Peripheral blood was monitored for the presence of CD8 + T cells for 4 weeks. A total of 1 × 10 5 PBMC were stained with anti-CD3–AlexaFluor 700, anti-CD8–Pacific Blue (BD Biosciences), and anti-CD4–PE-Cy7 (Biolegend). Intracellular cytokine staining (ICS). CD4 + and CD8 + T cell responses were measured from blood and tissues by flow cytometric ICS, as previously described 56 . Briefly, 1 × 10 6 mononuclear cells were incubated with Gag or Vif open-reading-frame pools and the co-stimulatory molecules CD28 and CD49d (BD Biosciences) for 1 h, followed by addition of brefeldin A (Sigma-Aldrich) for an additional 8 h. Co-stimulation without antigen served as a background control, while incubation with staphylococcal enterotoxin B (Toxin Technology) served as the positive control. The cells were then labeled with anti-CD4–PE-Cy7 (Biolegend) and anti-CD8–PerCP-Cy5.5 (BD Biosciences) and fixed with 2% paraformaldehyde. After permeabilization, the cells were stained with anti-CD3–Pacific Blue, anti–IFN-γ–APC, anti–TNF-α–FITC (BD Biosciences, all), and anti-CD6–PE-Texas Red (Beckman Coulter). The cells were fixed and flow cytometric analysis was performed on an LSR-II instrument (BD Biosciences). Analysis was done using FlowJo software (Tree Star, Ashland, OR). In some cases, cells were CD25-depleted before setting up the ICS experiment to remove T reg cells (Miltenyi Biotec). In situ hybridization (ISH). In the SIVIG transudation experiments, in situ hybridization for SIV RNA was performed on tissues collected from the two animals that underwent oral SIV challenge. Formalin-fixed, paraffin-embedded tissues were assayed for SIV viral RNA expression by ISH as previously described 57 . Briefly, following deparaffinization, the sections were hybridized overnight at 45 °C with either a sense or an antisense SIVmac239 digoxigenin-UTP–labeled riboprobe. The hybridized sections were blocked with 3% normal sheep and horse serum in 0.1 M Tris, pH 7.4, and then incubated with sheep anti-digoxigenin–alkaline phosphatase (Roche Molecular Biochemicals) and nitroblue tetrazolium/5-bromo-4-chloro-3-indolyl phosphate (NBT/BCIP; Vector Labs). ISH-stained tissues from submandibular LNs, tonsils and the ileum were visualized and photographed with a Zeiss Axiophot microscope. Enzyme-linked immunosorbent assays (ELISAs). ELISA was used to detect total IgG and SIV gp130–specific antibodies in the SIVIG transudation experiments. Briefly, half-well EIA plates (Costar) were coated with either a goat anti–rhesus IgG (H+L) (unlabeled; Southern Biotech) (for total IgG ELISA) or recombinant SIV smE660 gp130, purified as described 28 (for gp130 ELISA), at 2 μg/ml in carbonate-bicarbonate buffer and incubated overnight. Plates were washed three times (0.1% Triton X-100 in 1× PBS) and blocked with 1% normal goat serum and 5% nonfat dried milk in PBS for 1 h at RT. SIVIG standards and homogenates were diluted in 1% Triton X-100, 2% bovine serum albumin, 5% FBS in PBS. After washing, a 1:4,000 dilution of goat anti–rhesus IgG (H+L)-horseradish peroxidase (HRP) (Southern Biotech) was added and incubated for 1 h at RT followed by TMB substrate (Southern Biotech). Plates were read on a SpectraMax 190 at an absorbance wavelength of 650 nm. Data were reported as the slope of absorbance over time.
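Each well of such a kinetic ELISA reduces to the slope of a linear fit of absorbance versus time; as described next, sample slopes are then compared with a SIVIG standard curve. A short sketch with synthetic plate readings:

```python
import numpy as np

# Kinetic ELISA: absorbance at 650 nm read every 30 s for 5 min (synthetic).
t_sec = np.arange(0, 301, 30, dtype=float)
a650 = 0.002 * t_sec + 0.05 \
    + np.random.default_rng(1).normal(0, 0.003, t_sec.size)

slope_per_min = np.polyfit(t_sec, a650, 1)[0] * 60.0
print(f"slope = {slope_per_min:.4f} AU/min")
# Sample slopes are interpolated against slopes of a SIVIG standard
# dilution series to obtain concentrations.
```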
Concentrations of SIVIG in tissue disruption supernatants were calculated by comparing the average slope numbers to those from the SIVIG standard curve. ELISA was used to assess the presence of gp140-specific antibodies as previously described 56 in plasma and tissue homogenates. Plasma NmAb levels were quantified using plates coated with either RSC3 (re-surfaced stabilized core gp120 protein) 48 (VRC07-523) or ST0A9 (scaffold protein displaying the V1V2-region and PGT121 epitope) (PGT121) 58 . Briefly, Nunc MaxiSorp (Thermo Fisher) plates were coated overnight with 200 ng/well of RSC3 in PBS, washed with PBST five times, and blocked with TBST with 5% milk and 2% BSA for 1 h at RT. Serial dilutions of all samples were plated in duplicate. Each NmAb (for standard curves) and positive and negative controls were included on each plate. Plasma was incubated for 1 h at RT, followed by a PBS–Tween 20 wash. Bound NmAbs were probed with a HRP-labeled goat anti–human IgG (1:5,000 dilution; Jackson Laboratories) for 30 min at RT. The plate was washed and TMB (Pierce) substrate was added. Once color was developed, stopping buffer was added and the optical density at 450 nm was read. GraphPad Prism and Microsoft Office software were used to calculate NmAb concentrations. TZM-bl neutralization assay. Plasma samples from each animal were tested at all available time points for neutralizing activity using the 96-well TZM-bl neutralization assay described previously 59 . Statistics. The data in Figure 4 show SIV DNA copies measured in 17 anatomical sites in two paired groups of four control animals. A two-stage sequential approach was used to calculate the averages of SIV DNA copies of two biological replicates for each site in each group and apply a Wilcoxon signed-rank test to account for the matched pair nature of anatomical sites.

Study: Early short-term treatment with neutralizing human monoclonal antibodies halts SHIV infection in infant macaques. Nature Medicine, DOI: 10.1038/nm.4063 ( http://dx.doi.org/10.1038/nm.4063 ). News release: https://medicalxpress.com/news/2016-03-antibodies-infant-macaques-exposed-hiv-like.html

Scientists at the Oregon National Primate Research Center have made a significant breakthrough in the fight against HIV, discovering that infant rhesus macaques treated with antibodies within 24 hours of being exposed to SHIV, a chimeric simian virus, were completely cleared of the virus. The study, published in Nature Medicine, shows that antibodies given after a baby macaque has already been exposed to SHIV can clear the virus, a development that could potentially save thousands of lives. The researchers administered anti-HIV-1 human neutralizing monoclonal antibodies to the macaques and found that a single dose of antibodies at the start of the infection effectively cleared the virus by day 14, with no virus detected in any part of the body for at least six months. The study's findings suggest that treating human babies with antibodies within 24 hours of exposure to HIV could provide protection from viral infection, even in the absence of antiretroviral therapy (ART).
Scientists at the Oregon National Primate Research Center today revealed that infant rhesus macaques treated with antibodies within 24 hours of being exposed to SHIV, a chimeric simian virus that bears the HIV envelope protein, were completely cleared of the virus. The study, published today in Nature Medicine, shows that antibodies given after a baby macaque has already been exposed to SHIV can clear the virus, a significant development in the HIV scientific community.

SHIV-infected nonhuman primates can transmit SHIV to their offspring through milk feeding, just as humans can transmit HIV from mother to child through breastfeeding and during childbirth (and only rarely during pregnancy). In humans, a combination of measures for mothers and infants, including antiretroviral therapy (ART), Cesarean section delivery and formula feeding (rather than breastfeeding), has decreased the rate of mother-to-child HIV transmission from 25 percent to less than 2 percent since 1994. Despite this decrease, approximately 200,000 children are infected with HIV each year worldwide, primarily in developing countries where ART is not readily available.

"We knew going into this study that HIV infection spreads very quickly in human infants during mother-to-child transmission," said Nancy L. Haigwood, Ph.D., senior author of the paper, and director and senior scientist, Oregon National Primate Research Center at Oregon Health & Science University. "So we knew that we had to treat the infant rhesus macaques quickly, but we were not convinced an antibody treatment could completely clear the virus after exposure. We were delighted to see this result."

Haigwood and colleagues administered the anti-HIV-1 human neutralizing monoclonal antibodies (NmAbs) subcutaneously on days 1, 4, 7 and 10 after the macaques were exposed to SHIV orally. The SHIV virus was found in multiple body tissues on day 1 in macaques without antibody treatment. Conversely, they observed an immediate impact of a single dose of antibodies at the start of the infection, with a significant difference in treated versus non-treated macaques. Early short-term administration of powerful antibodies effectively cleared the virus by day 14, with no virus detected at this time. Using highly sensitive methods, they did not detect the virus in any part of the body in 100 percent of the antibody-treated infant macaques for at least six months.

Typically, HIV infection rapidly expands and spreads in humans to local draining lymph nodes before disseminating throughout the entire body one week after a person is infected. This study showed that, at least in this model system of oral SHIV exposure in newborn macaques, virus replication is detected in lymphatic tissues 24 hours after exposure and is not locally restricted, as has been suggested previously for humans, due to delays of 5 to 7 days before detection in the blood. The study showed that: 1) antibodies delivered subcutaneously are swiftly distributed to blood and tissues and maintain neutralizing activity at various sites; and 2) antibodies are effective at clearing the virus, a different mechanism from that of ART, which is a combination of several antiretroviral medicines used to slow the rate at which HIV makes copies of itself in the body.

"Other nonhuman primate studies with antiretroviral therapy suggest that treatment as early as three days after infection is too late to prevent establishment of the HIV reservoir," said Jonah B. Sacha, Ph.D., study co-author and assistant scientist, Oregon National Primate Research Center at OHSU. "So using antibodies to clear the virus after infants have already been exposed could save thousands of lives," if the approach works in human infants.

The researchers noted that treating human babies with ART during the last month of gestation, the few days after delivery, and during breastfeeding timeframes, is recommended. However, risks remain, including toxicities associated with long-term ART use, the development of drug-resistant viral variants, and lack of access to prenatal care prior to delivery. This discovery indicates that using new methods, such as antibodies, to limit infection after exposure in newborns could be advantageous.

The study authors acknowledge that several relevant questions remain unanswered for treatment of HIV-infected newborns and children born to HIV-positive mothers. These include practical and cultural issues of treating breastfeeding mothers and babies, whether the antibodies will work in human infants exposed to HIV, and what the optimal antibody formulations will be. Clinical trials in which HIV-exposed newborns are treated with antibodies have begun in the U.S. and South Africa, following a phase I clinical trial in HIV-negative adults that showed the antibodies to be safe and well-tolerated in these individuals.

The authors' findings help define the window of opportunity for effective treatment after exposure to HIV during birth. If these primate model results can be applied to human beings in a clinical setting, researchers are hopeful that treating infants who have already been exposed to HIV within 24 hours may provide protection from viral infection, even in the absence of ART.
Illinois researchers are first to count growth factors in single cells

Whether healthy or diseased, human cells exhibit behaviors and processes that are largely dictated by growth factor molecules, which bind to receptors on the cells. For example, growth factors tell the cells to divide, move, and when to die, a process known as apoptosis. When growth factor levels are too high or too low, or when cells respond irregularly to their directions, many diseases can result, including cancer.

"It is believed that cells respond to growth factors at extreme levels of sensitivity," said University of Illinois at Urbana-Champaign Bioengineering Associate Professor Andrew Smith. "For example, a single molecule will result in a major change in cell behavior."

In a recent paper published in Nature Communications, Smith reported the invention of a new technology platform that digitally counts, for the first time ever, the amount of growth factor entering an individual cell. Prior to this, researchers inferred growth factor binding based on how the receiving cells responded when the growth factor molecules were introduced. "We showed the first direct cause-and-effect relationships of growth factors in single cells," he said. "We expect the outcomes to lead to a new understanding of cell signaling, how cells respond to drugs, and why cell populations become resistant to drugs, particularly toward improved treatments for cancer."

Smith's technology platform tags each growth factor with a single engineered (10 nanometer) infrared fluorescent quantum dot, which can then be viewed using a three-dimensional microscope. In their study, they counted how many epidermal growth factor (EGF) molecules bound to human triple-negative breast cancer cells that were pre-patterned on island-like surfaces. EGF molecules typically signal cell division and lead to tissue growth. Numerous cancers have mutations in their EGF receptors.

"We used quantum dots as the fluorescent probe because they emit a lot more light compared to other conventional fluorescent probes such as organic dyes, and we can tune their wavelengths by changing their chemical composition," said Bioengineering doctoral student Phuong Le, the lead author of the paper. "In our study, we demonstrated that quantum dots emitting light in the near-infrared wavelength allowed the most accurate counting of growth factors binding to cells."

According to Le, the team also treated the breast cancer cells with quantum dot-tagged EGF in the absence and presence of pharmaceutical drugs that inhibit EGF signaling in cells. "We found that the amount of EGF binding is inversely proportional to drug efficacy," Le said. "This finding is significant as it means that signaling molecules present in the cancer cells' tumor (a place where signaling molecules are often misregulated) can enhance the cancer cells' resistance to pharmaceutical agents."

Researchers have developed a new technology platform that can digitally count the amount of growth factor molecules entering individual cells, providing a direct cause-and-effect relationship between growth factors and cell behavior. The platform uses infrared fluorescent quantum dots to tag growth factors, allowing for accurate counting and visualization of binding using a three-dimensional microscope.
In a recent study, the team used this technology to count epidermal growth factor (EGF) molecules binding to human triple-negative breast cancer cells and found that the amount of EGF binding is inversely proportional to drug efficacy, suggesting that signaling molecules in the tumor can enhance cancer cells' resistance to pharmaceutical agents. This breakthrough is expected to lead to a new understanding of cell signaling, how cells respond to drugs, and why cell populations become resistant to drugs, ultimately improving treatments for cancer.

Abstract The distribution of single-cell properties across a population of cells can be measured using diverse tools, but no technology directly quantifies the biochemical stimulation events regulating these properties. Here we report digital counting of growth factors in single cells using fluorescent quantum dots and calibrated three-dimensional deconvolution microscopy (QDC-3DM) to reveal physiologically relevant cell stimulation distributions. We calibrate the fluorescence intensities of individual compact quantum dots labeled with epidermal growth factor (EGF) and demonstrate the necessity of near-infrared emission to overcome intrinsic cellular autofluorescence at the single-molecule level. When applied to human triple-negative breast cancer cells, we observe proportionality between stimulation and both receptor internalization and inhibitor response, reflecting stimulation heterogeneity contributions to intrinsic variability. We anticipate that QDC-3DM can be applied to analyze any peptidic ligand to reveal single-cell correlations between external stimulation and phenotypic variability, cell fate, and drug response.

Introduction Single-cell analytical techniques are reshaping our understanding of biology by revealing the distribution of gene expression and phenotype across a population of cells 1 , 2 . Applied together with systems biology models and information theory, it is now becoming clear that any population of genetically identical cells naturally exhibits substantial cell-to-cell variability that is integral to the emergence of ensemble biological functions 3 . This heterogeneity has important consequences, as rare cells, rather than cells near the ensemble mean, often dominate clinically meaningful pathogenic processes and drug resistance 4 , 5 , 6 . However, a void exists in experimental techniques to measure how cellular decision-making processes underlying population variability derive from extracellular biochemical signals, such as peptide growth factors and cytokines 7 , 8 , which cannot be easily measured at the single-cell level.

Biochemical stimulation, the induction of an intracellular biochemical signal (e.g., receptor activation and translocation) by binding of an exogenous biochemical factor, is usually inferred indirectly from the resulting change in gene expression or cell phenotype 8 . Moreover, input factors are typically applied at stimulation extremes (zero and near saturation) 9 , whereas physiologically relevant tissue concentrations are in intermediate regimes ( c ~ 1–100 pM) 10 , 11 over which cells exhibit sensitive and heterogeneous dose–response relationships (EC 50 ~ 1–100 pM) 12 , 13 . At these concentrations, relevant tissue microdomain volumes (~10 pL) contain just tens to hundreds of factors 14 , 15 , such that signal stimulation is temporally and spatially stochastic 16 . Accurate quantification of initiating signals is therefore very challenging 17 and requires single-molecule sensitivity.
Here we describe a technology platform to digitally count growth factors in single cells using fluorescent quantum dots (QDs) and calibrated three-dimensional (3D) deconvolution microscopy (QDC-3DM). As a prototypical example, we focus on epidermal growth factor (EGF) and EGF receptor (EGFR)-positive cells. Fluorescent QDs are used as tags for EGF due to their extremely high fluorescence intensity that is homogeneous and stable at the single-QD level 18 . For maximum signal detection and comprehensive counting of EGF with rapid image acquisition, wide-field excitation is used to collect complete 3D images of cells, and deconvolution is used to reassign photons to their originating focal volumes. We observe that this methodology is only accurate when applying QDs with infrared emission due to interfering fluorescence from cellular components across the visible spectrum. We apply QDC-3DM to analyze EGF-induced cell signaling variability in triple-negative breast cancer cells (MDA-MB-231) grown on micropatterned islands to spatially register signaling events across separate cells. Our results show proportionality between stimulation and both receptor internalization and inhibitor response, reflecting stimulation heterogeneity contributions to intrinsic variability at the single-cell level. Results Imaging and image analysis Figure 1a shows the overarching approach to measure the distribution of stimulation events of growth factors binding to cognate receptors, yielding a response distribution that plays an important role in the variability of signals and behavior between cells. Figure 1b summarizes the imaging and analysis methodology to measure absolute counts of growth factors, using two sequentially collected image stacks. A deconvolved high-resolution 3D epifluorescence image of cells is collected in three colors to distinguish QD-EGF conjugates (in red) spatially registered to the cell location by its fibronectin matrix (in green) and nucleus (in blue). The second image stack is a high temporal resolution video in the QD-EGF color channel. As described in detail in Methods, a three-step process is applied to count EGF molecules per cell: (1) Single QD-EGF spots are identified in videos by distinctive time-course intensity traces, I ( t ), for which two discrete intensities are present in two-dimensional (2D) images, \(I_{1{\mathrm{QD}}}^{2{\mathrm{D}}}\) and \(I_{\mathrm{B}}^{2{\mathrm{D}}}\) , respectively, corresponding to the intrinsic QD intensity and its background due to on-and-off intermittency of emission (i.e., blinking) 19 , 20 . (2) Volumetric intensities of single QDs from deconvolved 3D images are averaged to yield \(\overline {I_{1{\mathrm{QD}}}^{3{\mathrm{DD}}}}\) , the average intensity of a single QD-EGF. (3) The number of contributing QDs to each spot in 3D images, N QD,spot , is calculated by dividing the volumetric spot intensity by the single-QD intensity. Finally, the total number of QDs is then calculated across each cell to determine the number of EGF per cell, N EGF,cell : $$N_{{\mathrm{EGF}},{\mathrm{cell}}} = \mathop {\sum }\limits_{{\mathrm{spots}}} N_{{\mathrm{QD}},{\mathrm{spot}}} = \mathop {\sum }\limits_{{\mathrm{spots}}} I_{{\mathrm{spot}}}^{3{\mathrm{DD}}} \cdot \overline {I_{1{\mathrm{QD}}}^{3{\mathrm{DD}}}} ^{ - 1}.$$ (1) Fig. 1 Quantum dot (QD) calibrated three-dimensional (3D) deconvolution microscopy (QDC-3DM). 
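In practice, Eq. (1) above is a division and a sum: each deconvolved spot intensity is divided by the calibrated single-QD intensity and the results are summed over the cell. A minimal sketch with hypothetical intensities; rounding each ratio to an integer count is an assumption here, as the equation itself leaves the ratio unrounded:

```python
import numpy as np

# Calibrated mean 3D-deconvolved intensity of a single QD-EGF (arb. units).
i_single_qd = 1450.0

# Integrated intensities of all diffraction-limited spots in one cell.
spot_intensities = np.array([1480.0, 2890.0, 4420.0, 1392.0, 7210.0])

# QDs per spot from Eq. (1), rounded to the nearest integer count.
qds_per_spot = np.rint(spot_intensities / i_single_qd).astype(int)
n_egf_cell = int(qds_per_spot.sum())
print(qds_per_spot, "->", n_egf_cell, "EGF molecules in this cell")
```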
a Schematic representation of the contribution of single-cell stimulation distribution (growth factor binding) to signaling response distribution (measured by receptor internalization). b Depiction of the QDC-3DM image analysis methodology to count growth factors in single cells. The process begins with acquisition of 3D fluorescence images of single cells to localize single QDs and spatially register their locations. A representative 3D image shows a cell stimulated with QD-epidermal growth factor (QD-EGF) (red) on an Alexa Fluor 488-labeled fibronectin substrate (green) with nucleus labeled with Hoechst (blue). Each 3D image is deconvolved and spatially correlated to two-dimensional (2D) videos in the QD color channel. In the first step shown at right, time traces of spot intensities are used to identify single QDs by their distinctive two-component intensity distributions. In the second step, the average intensity of these single QDs from 3D deconvolved images, \(\overline {I_{1{\mathrm{QD}}}^{3{\mathrm{DD}}}}\) , is measured. In the third step, the 3D intensity of each spot, \(I_{{\mathrm{spot}}}^{3{\mathrm{DD}}}\) , is measured and registered to the average single-QD intensity, to calculate the number of QD-EGF per spot, N QD,spot . The number of EGF per cell, N EGF,cell , is then calculated as the sum of all N QD,spot Full size image Quantum dot probe engineering Accuracy of Eq. ( 1 ) requires that each QD bound per cell corresponds to a single EGF, thus requiring that each QD is bound to a single EGF. Monovalency between QDs and growth factors is further important to prevent artificial cross-linking between receptors that would not reflect the intrinsic monomeric nature of EGF. We optimized QD-EGF conjugates to ensure functional monovalency using an EGF engineered with a single N-terminal biotin, which self-assembles with covalent QD conjugates of streptavidin (SAv) with near covalent bond strength 21 . We adopted a previous strategy to generate monovalent conjugates by tuning the ratio between QDs and EGF (Fig. 2a ) 22 and used a functional assay to count the number of QD-EGF conjugates per cell. The discrete number of QDs bound to cells followed a linear trend with increasing EGF conjugation up to a 1:1 ratio, at which point multiple EGFs per QD no longer proportionally increased the number of QDs bound (Fig. 2b ). We thus used an EGF:QD ratio of 0.3 to ensure that we were within the linear regime of functional monovalency, and that binding led to endocytosis (Supplementary Figure 1 ). This conjugation scheme left a substantial fraction of the QD population unbound to EGF, which is non-consequential for these studies, as QD binding events were EGF-specific based on a competition assay and absence of bare QD binding to cell (Supplementary Figure 2 ). Fig. 2 Functional and optical characterization of epidermal growth factor (EGF) conjugates of dyes and quantum dots (QDs). a Schematics show assembly of QD-EGF conjugates with controlled valency through biotin-streptavidin conjugation. b Experimental relationship between QDs bound per cell versus EGF:QD conjugation ratio. The red dashed line indicates the average number of QDs per cell for QDs alone, with red shade indicating the background standard deviation. MDA-MB-231 cells were treated with QD-EGF conjugates at 0.5 nM for 10 min on ice. N ≥ 10 cells. c Specific binding isotherm for dye-EGF and QD744-EGF to MDA-MB-231 cells measured by flow cytometry. Raw data are shown in Supplementary Figure 3 . N = 2. 
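The specific binding isotherm in c is conventionally fit to a one-site saturation model, B = Bmax·c/(K D + c), from which K D falls out by nonlinear least squares. A sketch with illustrative flow-cytometry values, not the study's raw data:

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site(c, bmax, kd):
    """Specific binding at equilibrium for a single-site isotherm."""
    return bmax * c / (kd + c)

conc_nm = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])    # nM
signal = np.array([3.1, 8.7, 24.5, 49.0, 76.0, 90.5])   # mean fluorescence

(bmax, kd), _ = curve_fit(one_site, conc_nm, signal, p0=[100.0, 3.0])
print(f"KD = {kd:.1f} nM, Bmax = {bmax:.0f}")
```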
d Fluorescence spectra of dye and QDs used in this work and mean autofluorescence, in arbitrary units. For autofluorescence, N ≥ 13 cells. e Representative images of autofluorescence, dye-EGF, QD565-EGF, QD605-EGF, and QD744-EGF bound to a cell, measured in their respective spectral bands. All QD-EGF were bound to the same cell. The black square in each brightfield (BF)/nucleus micrograph indicates the zoom-in area shown in the fluorescence images. Yellow arrows indicate autofluorescence; red arrows indicate dyes or QDs. f Two-dimensional intensities of autofluorescence, single dye and single QDs. The box indicates 25/75th percentile; red lines are the mean value; whiskers are s.d.; N ≥ 30 for dye and QDs; N > 20,000 spots for autofluorescence at each wavelength. g Receiver operating characteristic (ROC) curve showing higher detection accuracy of single QD744 (dark red) compared with dye (blue), and QD565 (green) or QD605 (orange) in the presence of autofluorescence shown in f . Numbers indicate areas under the ROC curves (AUROC). h Representative BF/nucleus micrograph with orthogonal 3D fluorescence images of dye-EGF and QD744-EGF bound to the same cell, imaged from the bottom of the cell to the top. A z-projection of summed fluorescence intensity of dye-EGF and QD-EGF is shown at right. i Same as h , but fluorescence images were acquired from the top of the cell to the bottom. In b - d , points indicate mean ± s.d. In e , h , and i , scale bars, 5 μm. We use compact alloyed QDs that we recently developed with hydrodynamic dimensions near 10 nm (Supplementary Figures 4 – 5 ) 23 , compared with 15–35 nm sizes for commercial variants, to be near the 8-nm spacing between adjacent EGF molecules in EGFR oligomers so as to avoid steric hindrance impacts on signaling 24 . Binding isotherms on MDA-MB-231 cells measured by flow cytometry showed nearly identical affinity for the QD-EGF conjugates ( K D = 3.1 ± 0.6 nM) compared to EGF conjugated to a single tetramethylrhodamine dye (dye-EGF; K D = 3.2 ± 0.4 nM) (Fig. 2c ), which has a binding affinity similar to that of unlabeled EGF 25 . Measured K D values were in the range of those reported previously for EGF-EGFR binding on other cell types 26 , 27 . The similar affinity is logical as the k on for EGF-EGFR binding is orders of magnitude smaller than that of a diffusion-controlled reaction 28 , and the diffusion coefficient of QD-EGF is just 4–5 times smaller than that of dye-EGF. In addition, dye-EGF and QD-EGF conjugates resulted in a similar number of fluorescent endosomes (Supplementary Figure 6 ), which is consistent with previous findings and indicates similar degrees of receptor activation 29 . Compact QDs with fluorescence in the visible spectrum have similar intensities to cellular autofluorescence in epi-illumination mode (Fig. 2d–f ), making absolute quantification in 3D impossible. Measurements using fluorescent dyes are much worse; single dye-EGF conjugates were eight times dimmer than mean cellular autofluorescence (Fig. 2e, f ) and more than 10 times dimmer than QD605. We thus tuned the QD emission through ternary compositional alloying of the core to the near-infrared, beyond 700 nm, where autofluorescence is substantially attenuated (Fig. 2d ), resulting in facile identification of individual QDs with 40-fold higher mean intensity than autofluorescence (Fig. 2f ) for high-accuracy quantification with an area under receiver operating characteristic (AUROC) of 0.99 (Fig. 2g ).
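An AUROC like the 0.99 above reduces to the rank-based (Mann–Whitney) probability that a randomly chosen probe spot outranks a randomly chosen autofluorescence spot. A sketch with synthetic intensity samples approximating the ~40-fold brightness separation just described:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 2D spot intensities: cellular autofluorescence vs. single QD744.
autofluor = rng.lognormal(mean=4.0, sigma=0.5, size=2000)
qd744 = rng.lognormal(mean=7.7, sigma=0.3, size=300)   # ~40x brighter

def auroc(neg, pos):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    random positive sample outranks a random negative sample."""
    ranks = np.argsort(np.argsort(np.concatenate([neg, pos]))) + 1
    u = ranks[len(neg):].sum() - len(pos) * (len(pos) + 1) / 2
    return u / (len(neg) * len(pos))

print(f"AUROC = {auroc(autofluor, qd744):.3f}")   # ~1.0 for QD744
```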
Notably, in this spectral window, the dye intensity is still comparable in intensity to autofluorescence. This unique utility of QDs to emit with high intensity in the near-infrared for low-background cellular imaging adds to the growing value of these materials for applications such as deep-tissue imaging 30 . Absolute quantification of EGF bound to a cell with a thickness of ~10 μm requires photostable labels because imaging the entire 3D cell volume requires repeated excitation to acquire sequential image planes at sufficient z -axis resolution. QDs are expected to outperform dyes for this application due to their exceptional photostability that exceeds that of dyes by orders of magnitude 31 , 32 . We labeled cells with a combination of dye-EGF and QD744-EGF and reconstructed 3D images from slices that were either acquired from the bottom to the top of a cell (Fig. 2h ) or from the top to the bottom (Fig. 2i ). The observed distribution of intensity for the dye-EGF conjugate was substantially different between the two acquisition processes, with photobleaching clearly apparent in the slices acquired at later times in both cases. In contrast, QD-EGF showed similar intensity distributions for both acquisition routines, highlighting the benefit of QD photostability. Fluorescence intensity quantization While absolute quantification of QDs and fluorescent molecules on flat surfaces (e.g., coverslips, basolateral membranes, and microbes) with sparse labeling is well established 33 , counting in 3D presents unique challenges for intensity calibration due to autofluorescence, out-of-focus signals, and random single-QD blinking. Compared with 2D images, the intensity of a single near-infrared QD in 3D overlaps substantially with background (AUROC = 0.96), even in the absence of cells (Fig. 3a ). Deconvolution reassigns out-of-focus light back to its point of origin to increase signal-to-noise ratio 34 , which we observe increases QD intensity significantly over background, with unity AUROC. Shown in Fig. 3b , deconvolved 3D intensities of single QDs verify that multiple QDs in a single diffraction-limited 3D spot can be accurately counted. This deconvolved intensity was independent of the distance across the thickness of a cell (Supplementary Figure 7 ). We analyzed 1, 2, and 3 QD spots, identified by their distinguishable 2D intensity time–trace distributions resulting from blinking, with example data shown in Fig. 3c–e . The quantized number of QDs contributing to each distribution was determined by fits validated by Akaike information criteria (AIC) 35 , with examples shown in Fig. 3e . This outcome is important because EGFR oligomers and coalesced receptors within endosomes will contain numerous EGFs within a diffraction-limited volume. Fig. 3 Intensity calibration for counting growth factors. a Representative images show QD744 imaged in two-dimensional (2D), three-dimensional (3D), and 3D after deconvolution. Histograms depict noise (gray) and quantum dot (QD) intensities (red). Voxel sizes are 3 × 3 for 2D and 3 × 3 × 11 for both 3D and deconvolved 3D. N = 48. Scale bar, 5 µm. b Intensity calibrated deconvolved 3D analysis of diffraction-limited spots containing 1, 2, or 3 QDs. Points indicate mean ± s.e.m. with N = 72, 216, and 27 spots for 1, 2, and 3 QD spot −1 , respectively. The number of QDs per diffraction-limited spot is calculated from distributions of intensity from time traces based on the number of Gaussians fitted to brightness histograms. 
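The Gaussian-counting rule just described can be sketched directly: fit mixtures of 1 to K Gaussians to a spot's blinking intensity histogram, keep the component count that minimizes AIC, and subtract one component for background. A sketch on a synthetic two-QD trace; the intensity levels and noise are made up:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Synthetic intensity time trace of a 2-QD spot: blinking makes the trace
# visit background, 1-QD and 2-QD intensity levels.
levels = rng.choice([100.0, 1550.0, 3000.0], size=500, p=[0.2, 0.4, 0.4])
trace = (levels + rng.normal(0, 80.0, size=500)).reshape(-1, 1)

aic = []
for k in range(1, 6):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(trace)
    aic.append(gmm.aic(trace))

best_k = int(np.argmin(aic)) + 1
print(f"best fit: {best_k} Gaussians -> {best_k - 1} QDs in this spot")
```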
The number of QDs per spot is the number of fitted Gaussians minus 1 (the noise component). c Examples of intensity time traces of 1, 2, and 3 QDs per spot. d Gaussian fitting of intensity histograms of 1, 2, and 3 QDs per spot corresponding to the examples shown in c. e Minima of Akaike information criteria (AIC) are indicated by red arrows, corresponding to the optimal number of Gaussians to fit each intensity histogram.

EGF quantification

We used QDC-3DM to count the number of EGF molecules in individual MDA-MB-231 cells using a commonly applied temporal-pulse stimulation experiment 36. Cells were exposed to QD-EGF conjugates across four log(10)-spaced concentrations for 5 min on ice, and unbound conjugates were washed away. Figure 4a shows example images of cells treated with 0.1, 1, and 10 nM concentrations of QD-EGF, across which counts per cell ranged from 0 to 4000, with mean values linearly correlated with concentration between 0.1 and 10 nM (Fig. 4b and Supplementary Figure 8a), and some binding saturation at 100 nM (Supplementary Figure 9). Importantly, mean stimulation numbers were reproducible between two independent experimental replicates (Fig. 4b and Supplementary Figure 8a), and EGF distributions fit well to gamma distributions (p ≥ 0.79 by χ² test) (Fig. 4c, red regressions), suggesting a correlation with the intrinsic distribution of receptor number 2,5,37,38. We simulated the bound-ligand distribution by applying a ligand–receptor kinetic binding model 12,28,39 to known distributions of EGFR expression in MDA-MB-231 cells 37,38,40, with ligand binding further distributed by a Poisson distribution to simulate per-cell ligand binding probabilities (see Methods). The experimental EGF binding distribution matched simulations well both at 37 °C (Supplementary Figure 10) and at 4 °C (Fig. 4c, d and Supplementary Figure 8b), with <13% deviation in mean stimulation magnitude between theory and experiment (Supplementary Table 1). Deviations between simulation and experiment were largest for the distribution width at 0.1 nM QD-EGF, for which the coefficient of variation was measured to be 95% but predicted to be 65%; this discrepancy likely derives from intrinsic noise effects such as local fluctuations in ligand and receptor concentrations 39,41, and it suggests strong merit in empirical measurements at low physiological stimulation levels.

Fig. 4 Counting and simulating growth factor binding. a Representative z-projections of maximum intensity (left) and three-dimensional (3D) images (right) of cells treated with the indicated quantum dot-epidermal growth factor (QD-EGF) concentration for 5 min on ice. b Number of QD-EGF bound per cell at the indicated QD-EGF concentrations, showing two independent replicates with N ≥ 17 cells for each condition. c Distributions show the number of EGF per cell measured experimentally at the indicated QD-EGF concentration. Maximum likelihood estimation regressions of gamma distributions are shown as red lines and simulation results are shown as blue lines. For regression, p = 0.79, 0.88, and 0.85 for 0.1, 1, and 10 nM QD-EGF concentrations, respectively. For simulation, p = 0.24, 0.49, and 0.67 for 0.1, 1, and 10 nM QD-EGF concentrations, respectively. All p values were calculated using χ² tests. d EGF per cell is shown as experimental results (gray) and simulation results (blue) across different concentrations. Simulation results in d were obtained by sampling cells from the EGFR number gamma distribution (see Methods).
e Representative z-projections of maximum intensity of breast cancer cell lines MCF-7, MDA-MB-231, and MDA-MB-468 in order of increasing EGFR expression. Cells were treated with 1 nM QD-EGF for 5 min on ice. The yellow arrow indicates a single QD-EGF bound to an MCF-7 cell. f Number of QD-EGF bound per cell for the conditions in e with N ≥ 40 cells for each condition. In a, e, QDs are shown in red and nuclei are blue; scale bars, 10 µm. In b, d, f, the box indicates 25/75th percentiles; red lines are means; whiskers are s.d.

QDC-3DM also allows absolute counting of EGF binding events on cells with widely ranging EGFR expression. Figure 4e shows images of three human breast cancer cell lines after a pulse of 1 nM QD-EGF on ice. An increase in the number of EGF bound is evident as EGFR expression increases from low (MCF-7; ~10⁴ cell⁻¹) 42,43 to medium (MDA-MB-231; ~10⁵ cell⁻¹) 42,43 to high (MDA-MB-468; ~10⁶ cell⁻¹) 42. There was a mean of 1, 80, and 922 EGF per MCF-7, MDA-MB-231, and MDA-MB-468 cell, respectively (Fig. 4f), and a large fraction of MCF-7 cells were bound to 0 or 1 EGF (Fig. 4e, f), highlighting the necessity of single-molecule counting to assess absolute stimulation. Importantly, EGF binding was almost exclusive to EGFR, as binding was reduced to <0.2% per cell after treatment with Cetuximab, which specifically blocks the EGFR binding site for EGF (Supplementary Figure 11) 44.

EGF spatial distribution

We demonstrate an example of how QDC-3DM provides empirical correlations between input stimulus and output signaling in single cells based on the localization of EGF, reflecting receptor internalization over time. As a prototypical receptor tyrosine kinase, EGFR undergoes phosphorylation on intracellular domains upon ligand binding, as well as internalization by endocytosis and intracellular trafficking, while signals propagate to downstream kinase cascades 45. To correlate translocation across multiple cells, micro-contact printed hydrogel matrices were used to normalize adhesive footprints for cultured cells and ensure uniform distributions of size, shape, and organelle location 36,46. Figure 5a shows representative images of individual cells at early (10 min) and late (30 min) stages after 5 min pulsed EGF stimulation, with EGF labeled in red, showing translocation over time from surface regions to perinuclear regions consistent with late endosomes or lysosomes. Example images also show inhibition of internalization by the EGFR-blocking drug gefitinib at two concentrations 47. These 3D images of patterned cells can be reduced to 2D heat map averages across populations of cells (Fig. 5b) and 1D projection histograms (Fig. 5c) to depict the ensemble average of how receptor–ligand signaling events propagate spatially within cells. Notably, EGFR was substantially redistributed across the cell with EGF treatment (Supplementary Figure 12).

Fig. 5 Single-cell epidermal growth factor (EGF) binding correlates with single-cell receptor translocation and drug response. a Representative three-dimensional (3D) images of MDA-MB-231 cells after stimulation with quantum dot-EGF (QD-EGF) in the absence or presence of the EGFR inhibitor gefitinib. Times after the start of a stimulation pulse are indicated. QDs are shown in red, nuclei are blue, and Alexa Fluor 488-conjugated fibronectin micropatterns are green.
b Two-dimensional (2D) z-projections on xy fibronectin micropattern planes and c one-dimensional (1D) projections on x-axes indicate the localization of single EGF averaged across cells. d Representative image of a cell membrane measured through fluorescence imaging of fluorescently labeled receptors (top) and membrane reconstruction using alpha shapes (bottom). e Correlation between EGF number and fraction of EGF internalized in individual cells at 10 and 30 min after the start of a QD-EGF stimulation pulse. f Western blots and g relative pEGFR abundance in MDA-MB-231 whole-cell lysates immediately after stimulation with QD-EGF in the presence of the indicated gefitinib concentrations. Uncropped western blots with molecular weight markers are shown in Supplementary Figure 16. h Fraction of EGF internalized in single cells at different gefitinib concentrations, 30 min after the start of a QD-EGF pulse. The box indicates 25/75th percentiles; red lines are means; whiskers are s.d. i Coefficient of variation (CV) of the fraction of EGF internalized in h. j Number of EGF bound impacts the fraction of EGF internalized 10 min after the start of a QD-EGF pulse in the presence of gefitinib at 0, 51, and 5100 nM concentrations. The gray line shown in the 51 nM (middle) and 5100 nM (right) gefitinib plots is the linear fit for the 0 nM gefitinib condition (left). Data fits are shown in Supplementary Figure 14. N = 20 and 12 cells for 10 and 30 min after QD-EGF stimulation onset without gefitinib, respectively; N = 12, 10, 20, 23, 12, and 14 cells for 30 min after QD-EGF stimulation onset in the presence of gefitinib at 0, 0.51, 5.1, 51, 510, and 5100 nM concentrations, respectively. All stimulation pulses used 1 nM QD-EGF for 5 min. All scale bars indicate 10 µm.

Stimulation magnitude correlation with signaling

Together, these cell imaging tools allow extraction of a wealth of single-cell signaling analytics that are both absolute in molecular number and spatially resolved for any cell across the stimulation distribution. EGF translocation metrics, in particular, are intrinsic signal-propagation outputs of the analysis that can be readily correlated to the distance from the membrane using 3D membrane maps, which we reconstruct as surfaces using alpha shapes 48 derived from membrane labels in a separate QD channel (Fig. 5d and Supplementary Figure 13). We observed that in the absence of inhibitors, the number of internalized EGF after a 5 min pulse was linearly proportional to the number of EGF bound, with a slope near 1 and R² ≥ 0.99 (Supplementary Figure 14a). Trends are enhanced when plotted as the fraction internalized in Fig. 5e, showing 10 and 30 min after stimulation. The y-axis spread in values indicates that the heterogeneity of internalization is greater at shorter time periods, while the differences in x-intercepts indicate the internalization rate. Between 10 and 30 min, the number of internalized EGF increases by ~120 independently of stimulation magnitude between ~300 and ~2700 EGF bound. We also observed that the absolute number of QD-EGF per cell decreased from ~1300 to ~800 between 10 and 30 min after a 5 min pulse (Supplementary Figure 15a), likely due to dissociation of QD-EGF from the receptors after the pulse, an outcome that can be directly probed with QDC-3DM.

Stimulation impact on pharmacological inhibition

Receptor internalization was attenuated by pharmacological EGFR inhibition in a dose-dependent manner that was coupled to EGF stimulation magnitude.
Importantly, EGFR is a widely pursued proto-oncogene drug target, and blocking activation correlates with inhibition of both phosphorylation and translocation, which blocks downstream signals driving chemotaxis and proliferation 49. Unfortunately, drugs targeting EGFR activation such as gefitinib have limited clinical efficacy in cancers such as triple-negative breast cancer, in which up to 76% of cases overexpress EGFR 49. Western blot analysis of MDA-MB-231 cells exposed to 1 nM QD-EGF conjugates (1267 ± 788 EGF per cell) showed substantial receptor phosphorylation and a dose-dependent decrease in phosphorylation with increasing drug concentration (Fig. 5f–g and Supplementary Figure 16), with a half-maximal inhibitory concentration (IC50) of 195 nM. Figure 5h shows the dose dependence of the EGFR internalization fraction, exhibiting a population-averaged potency of inhibition (IC50 = 380 nM) similar to that of phosphorylation inhibition and higher than the average equilibrium binding constant of the drug to the receptor, KD (51 nM) 50. At the single-cell level, an increase in drug concentration led to higher variability in the response of individual cells (Fig. 5i), with the coefficient of variation monotonically increasing from 7.0 to 43% between 0 and 5.1 μM, an effect that has been widely reported for many classes of inhibitors 51. Note that there was no significant difference in EGF binding for drug concentrations between 0 and 5.1 μM (Supplementary Figure 15b, c). Figure 5j shows how cell-to-cell variability of EGFR inhibitor response derives substantially from the magnitude of EGF stimulation. At a drug concentration near the KD (51 nM), EGF internalization remained similarly proportional to EGF bound across all cells, but with a shifted internalization fraction that was equally diminished in magnitude across the population (see also Supplementary Figure 14b), suggesting uniform deactivation of membrane-localized EGFR. Moreover, these correlations demonstrated that excess stimulation was sufficient to overcome the biological effect of inhibition, with only a 5% drug effect measured for 1500 EGF bound, compared with a 44% drug effect for 200 EGF bound. At a 100-fold higher inhibitor concentration (5100 nM), internalization was further reduced, with a 25% drug effect at 1500 EGF and a 100% effect for 200 EGF bound. From these correlations it is apparent how stimulation can overcome signaling depletion thresholds imposed by inhibitors, and how heterogeneity arises from the proportionality between internalization and stimulation. By mapping the stimulation distribution (x-axes in Fig. 5j) to the internalization fraction slope, it can be seen that the low-stimulation fraction, where the slope is highest, has a dominant contribution to heterogeneity. The slope decreases for higher drug concentrations, so a greater percentage of the stimulation then contributes to the spread of drug effects, as the number of active receptors becomes depleted to yield a more stochastic population response.

Discussion

Counting individual growth factors using QDC-3DM requires a combination of molecular probe properties that is uniquely provided by QDs. Tuning emission to the infrared while retaining efficient blue excitation eliminates the vast majority of cellular autofluorescence, boosting the signal-to-noise ratio and increasing the accuracy of single-molecule identification from 74 to 99% (Fig. 2g).
In comparison, fluorescent dyes are too dim and do not provide the photostability needed to withstand continuous excitation during volumetric image acquisition of 100–200 z-planes (Fig. 2h, i). QDs further provide a convenient means to internally calibrate spot intensities to discrete ligand numbers due to the distinctive binary emission signatures derived from on-and-off single-QD blinking (Fig. 3b, c). However, these intensities measured in 3D only correlate well with discrete QD numbers when 3D images are deconvolved to boost spot intensities and compensate for light from outside the focal plane (Fig. 3a). Blinking can impact each QDC-3DM step depicted in Fig. 1b, so the analytical performance can depend on both the QD photophysical properties and the image acquisition conditions. The primary interference is that QDs may transition between an "on" and "off" state during the image acquisition time window, so measured intensities can be intermediate between the two states. For the first step of QD time–trace acquisition to identify single QDs, this intermediate intensity could lead to misidentification of single QDs by an automated algorithm, particularly for QDs in the population with short on-time probabilities. We apply stringent criteria 18 so that single-QD exclusion is more common than inclusion of QD multiplets. QDs with higher on-time fractions may correlate with brighter QDs in the population 52, which could propagate to a calibrated 3D QD intensity that is skewed toward higher values in QDC-3DM step 2. The 3D intensities also depend on blinking in step 3, as some fraction of QDs will remain off over some of the 3D slices, contributing to the 3D intensity distribution width in Fig. 3a, b. Importantly, while off-time probabilities are largely independent of image acquisition conditions and QD structure, on-time probabilities can deviate due to a number of variables, particularly excitation intensity and QD surface passivation 53,54, so different integration times may yield different relative intensities of specific QDs across a population with a distribution of blinking kinetics. For this reason, we used QDs and conditions for which deviations are expected to be minimal: a thick insulating shell (4.7 monolayer (ML) CdZnS), low excitation intensity (<10 mW cm⁻²), and short exposure time (~100 ms). Further increasing the shell thickness would reduce the number of QDs with low on-time probabilities and truncated on-time kinetics 55,56. We found that the structure applied here provides an excellent fluorescence intensity together with a balanced physical size, yielding, together with the polymeric coating, a nanoparticle that is 12.6 nm in hydrodynamic diameter, comparable to common biological macromolecules such as antibodies. An increase in shell size to further reduce blinking could be offset by using a smaller core, but at the expense of a wider emission band 18, or by using a thinner coating, which could destabilize conjugates or lead to nonspecific binding. Efforts are underway to further optimize both nanocrystals and coatings to yield still smaller, brighter probes 57, and to exploit wavelengths deeper in the infrared where the Stokes shift is larger and autofluorescence is further reduced 30. Using QDC-3DM, it is now possible to directly measure biochemical input signals in single cells where inference from output metrics was only possible previously 8.
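To make the blinking interference described above concrete, the following toy MATLAB simulation shows how a two-state on/off emitter sampled with a finite exposure time produces frame intensities intermediate between the fully on and fully off levels. This is an illustrative sketch only, not the authors' photophysical model; the switching rates are assumed values.

```matlab
% Toy two-state blinking model: within each camera frame the QD spends a
% random fraction of time "on", so the recorded frame intensity is the
% fractional on-time. All rates below are illustrative assumptions.
kOn  = 10;    % off->on rate (s^-1), assumed
kOff = 5;     % on->off rate (s^-1), assumed
texp = 0.1;   % exposure time per frame (s), matching ~100 ms imaging
nFrames = 600;
dt = 1e-4;                      % fine time step for the telegraph process
stepsPerFrame = round(texp/dt);

state = 1;                      % start in the "on" state
trace = zeros(nFrames, 1);
for f = 1:nFrames
    onTime = 0;
    for s = 1:stepsPerFrame
        if state == 1
            if rand < kOff*dt, state = 0; end
        else
            if rand < kOn*dt, state = 1; end
        end
        onTime = onTime + state*dt;
    end
    trace(f) = onTime/texp;     % fractional on-time = relative frame intensity
end
histogram(trace, 50);           % intermediate values between 0 and 1 appear
xlabel('Relative frame intensity'); ylabel('Counts');
```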
Single-cell studies in both prokaryotic and eukaryotic cells have shown that most protein expression number distributions are described by a gamma distribution 37,38,40. Likewise, we observe that single-cell growth factor binding distributions on MDA-MB-231 cells fit gamma distributions across three orders of magnitude of concentration (Fig. 4c). From simulations, distribution widths derive primarily from receptor number distributions convolved with a lesser contribution from the intrinsic noise of random binding. Importantly, the contribution from intrinsic noise becomes larger at lower ligand concentrations, indicating that for experiments under such conditions, which simulate relevant physiological tissue states 9, stimulation magnitudes cannot be directly inferred from receptor numbers. Simulations matched the experimentally measured mean ligand number quite well, but experimental distributions were consistently wider by a small margin (Supplementary Table 1), likely deriving from a combination of uncertainty in kinetic rate constants and receptor number distributions, as well as distributions of receptor states involving oligomerization and inhomogeneous localization in membrane microdomains. Notably, autocrine stimulation of cells by secreted factors will be undetectable by this technique, so it is important to determine whether such contributions may confound the analysis of the biological system under study. While MDA-MB-231 cells do not secrete EGF, they do produce noncanonical ligands for EGFR, but at levels far lower than what would contribute during the brief pulsed experiments applied here 58. QDC-3DM allows empirical mapping between quantized single-cell stimulation and single-cell signaling, allowing extraction of single-cell signaling metrics that are otherwise unobtainable. Because localization and translocation are core contributors to signaling 59, we used receptor internalization as an easily measured physical corollary of ligand-induced signal propagation downstream of receptor activation by phosphorylation. We find that the correlation between internalized EGF and the number of EGF bound shifts uniformly with time across the cell population (Fig. 5e and Supplementary Figure 14a). The stimulation distribution further modulates the response to the EGFR inhibitor gefitinib (Fig. 5j and Supplementary Figure 14b), diminishing drug effects at high EGF stimulation and mediating heterogeneity at low EGF stimulation. These outcomes suggest that the concentration of growth factors in cell culture medium and local concentrations within tissue microenvironments will dictate drug–response sensitivity and heterogeneity based on how stimulation distributions map to sensitivity curves. These observations are most relevant to human cancers that develop diverse mechanisms to dysregulate EGFR signaling, including overexpression of receptors and overproduction of ligands, resulting in resistance to signaling inhibition by targeted drugs 60,61,62. In conclusion, we developed a functional near-infrared QD suitable for single-molecule counting in autofluorescent cells, as well as a detailed methodology for absolute quantification of growth factors on single cells using 3D fluorescence microscopy. We applied this approach to count growth factors under physiologically relevant stimulation conditions spanning three log(10)-spaced stimulation magnitudes.
As a microscopy-based assay, this technology is well suited for pairing with downstream analyses of signaling and phenotype through live-cell fluorescent protein imaging, immunofluorescence, fluorescence in situ hybridization, and high-content microfluidics, and it can further be adapted to long-term tracking and steady-state stimulation experiments beyond the acute pulsed experiments used here. The combined capabilities of spatially registered signaling events through cellular micropatterning and highly multiplexed fluorescent color-coding using QDs can form the components of a toolbox for elucidating signaling biology, connecting individual molecular events to comprehensive cell responses and population distributions. We expect that this toolbox can be applied to any peptide ligand and used broadly to provide a more comprehensive understanding of the origin of cell heterogeneity and drug effect variability.

Methods

Chemicals and reagents

Cadmium acetate hydrate (Cd(Ac)2·H2O, 99.99+%), mercury acetate (Hg(Ac)2, 99.999%), selenium dioxide (SeO2, ≥99.9%), selenium powder (Se, ~100 mesh, 99.99%), sulfur powder (S, 99.98%), octanethiol (OT, ≥98.5%), behenic acid (BAc, 99%), 1,2-hexadecanediol (HDD, 97%), tetramethylammonium hydroxide solution (TMAH, 25 wt.% in methanol), N-methylformamide (NMF, 99%), N,N,N′,N′-tetramethylethylenediamine (TEMED, 99%), (3-aminopropyl)triethoxysilane (APTES, 99%), glutaraldehyde, sodium periodate (99%), and 2-azidoacetic acid (97%) were purchased from Sigma-Aldrich. Anhydrous cadmium chloride (CdCl2, 99.99%) and zinc acetate (Zn(CH3COO)2, 99.98%) were obtained from Alfa Aesar. 1-Octadecene (ODE, 90% tech.), oleylamine (OLA, 80–90% C18 content), oleic acid (OAc, 90% tech.), and hydrazine hydrate (55%) were purchased from Acros Organics. DBCO-sulfo-NHS ester was purchased from Click Chemistry Tools. Sodium bicarbonate and glycine were purchased from Thermo Fisher Scientific. Polydimethylsiloxane (PDMS) was purchased from Polysciences. Acrylamide and bisacrylamide were purchased from Bio-Rad. Glacial acetic acid (99.7%) was purchased from JT Baker. Solvents including chloroform, hexane, toluene, methanol, acetone, and diethyl ether were purchased from a variety of sources, including Acros Organics, Thermo Fisher Scientific, and Macron Fine Chemicals. All chemicals above were used as purchased. Dulbecco's modified Eagle's medium (DMEM), fetal bovine serum (FBS), Hank's balanced salt solution (HBSS), and cell culture grade bovine serum albumin (BSA) were purchased from VWR. Streptavidin (SAv) was purchased from ProSpec. Biotinylated EGF, dye-EGF, Hoechst, Alexa Fluor 488 NHS ester, Alexa Fluor 647-conjugated goat anti-mouse antibodies, and goat serum were purchased from Thermo Fisher Scientific. Paraformaldehyde (PFA, 32% v/v in water) was purchased from Electron Microscopy Sciences. Dimethyl sulfoxide (DMSO), fibronectin from human plasma, Accutase cell detachment solution, and Tris hydrochloride (Tris-HCl, 1 M) were purchased from Sigma. Biotinylated DNA was prepared by Integrated DNA Technologies. The MemBrite Fix 640/660 Cell Surface Staining Kit was purchased from Biotium. Phosphate-buffered saline (PBS) was purchased from Corning. His-tag protein A and Cetuximab were purchased from BioVision. Mouse monoclonal immunoglobulin G (IgG) antibody against human EGFR (EGFR.1 clone) was purchased from BD Biosciences. EGF and the rabbit monoclonal IgG antibody against EGFR used in western blotting were purchased from Abcam.
Mouse monoclonal IgG antibody against phosphorylated EGFR was purchased from R&D Systems. Horseradish peroxidase-conjugated antibodies against mouse and rabbit IgG were ordered from Jackson ImmunoResearch Laboratories. Gefitinib (>99%) was purchased from LC Laboratories. Western blotting reagents including Tris, sodium chloride (NaCl), ethylenediaminetetraacetic acid (EDTA), Triton X-100, sodium dodecyl sulfate (SDS), deoxycholate, sodium fluoride (NaF), sodium metavanadate (NaVO3), Tween-20, glycerol, bromophenol blue, and tris(2-carboxyethyl)phosphine (TCEP) were purchased from various sources including Sigma, Thermo Fisher Scientific, and Bio-Rad.

Synthesis of quantum dots

QD cores composed of core/shell CdSe/CdyZn1−yS (QD565 and QD605) or HgxCd1−xSe/CdyZn1−yS (QD744) were synthesized in-house 18 and coated with the multidentate polymer polyacrylamido(histamine-co-triethyleneglycol) (P-IM) or polyacrylamido(histamine-co-triethyleneglycol-co-azido-triethyleneglycol) (P-IM-N3). These polymers yield particles with a compact hydrodynamic diameter (7–12 nm) and nearly monomeric size distributions by gel permeation chromatography (>98%). The P-IM-N3 coating functionalizes the QDs with azides 23. The QD565 and QD605 cores were reported in our previously published manuscript 23, while QD744 was synthesized using the process described below. CdSe QDs with 3.2 nm diameter were prepared by a heat-up synthesis method and then exchanged with mercury to yield an alloyed HgxCd1−xSe core. Cd(BAc)2 (0.2 mmol), SeO2 (0.2 mmol), HDD (0.2 mmol), and ODE (4 mL) were mixed in a 50-mL round-bottom flask and dried under vacuum at ~100 °C for 1 h. The temperature was raised to 230 °C at a rate of ~20 °C min⁻¹ under nitrogen gas and maintained at 230 °C for 15 min. The solution was then cooled to ~110 °C by removing the heating mantle, and the QDs were purified by dilution with chloroform (10 mL) containing OAc (1 mL) and OLA (0.6 mL), and precipitation with a mixed solvent of methanol (15 mL) and acetone (15 mL). The QDs were redispersed in hexane and extracted twice with methanol, followed by precipitation with excess methanol. Finally, the QDs were dispersed in a chloroform solution containing OAc and OLA (20 mL, chloroform:OAc:OLA = 20:1:1 by volume). Mercury exchange was initiated by injecting a mercury stock solution (Hg(Ac)2 in OLA, 0.1 M) into the CdSe solution at room temperature with vigorous stirring. The ratio between total Cd atoms in the CdSe QDs and the injected Hg cations was 1:2. The reaction was allowed to continue for 5 min and then quenched by adding excess OT (~20 eq. relative to Hg²⁺). Aliquots (0.2 mL) were collected before mercury addition and 3 min after OT addition, and absorption spectra were measured to analyze spectral shifts and extinction coefficient changes. The resulting HgxCd1−xSe QDs were purified by precipitation with a methanol/acetone mixture (50% v/v, ~30 mL) containing OAc (~0.2 mL) and OLA (~0.2 mL). The QDs were redispersed in chloroform (~15 mL) containing OAc (~0.2 mL) and OLA (~0.2 mL) and precipitated again by the addition of methanol/acetone (~30 mL). This dissolution–precipitation process was repeated three times to completely remove unreacted Hg(Ac)2 and any reaction byproducts. Finally, the pure HgxCd1−xSe QDs, with band edge absorption at ~640 nm, were dispersed in hexane. A CdxZn1−xS shell was deposited epitaxially over the HgxCd1−xSe QD cores 23.
Purified QDs in hexane (~100 nmol) were transferred to a 50-mL round-bottom flask and the solvent was evaporated under nitrogen flow at 40–50 °C. The dried QDs were immediately redispersed in a mixed solvent of ODE (2 mL) and OLA (1 mL) containing sulfur precursor (S in ODE, 0.1 M) for the first 0.8 monolayers (MLs) of shell. The temperature was raised to ~120 °C under nitrogen and maintained at this temperature for 10 min. Then CdxZn1−x precursor (an x:1−x mixture of Cd and Zn precursors, Cd(Ac)2 and Zn(Ac)2 in OLA, 0.1 M) in an equivalent mole quantity to the previous sulfur precursor was added dropwise while raising the temperature to ~130 °C. The reaction was allowed to proceed for 10 min at this temperature. This 0.8-ML shell growth cycle was repeated while controlling the composition (x) and raising the reaction temperature. Detailed reaction parameters for QD744 are summarized in Table 1 for a nanocrystal with a band edge absorption wavelength of 702 nm and peak fluorescence emission at 744 nm with a full width at half maximum of 75 nm. Electron microscopy characterization as well as absorption and fluorescence emission spectra are shown in Supplementary Figure 5.

Table 1: Shell growth conditions for QD744

Polymer coating of QDs

QD565, QD605, and QD744 (~18 µM) were coated with either P-IM or P-IM-N3 using a two-step process 23. First, QDs in hexane (0.5 mL) were purified by precipitation by mixing with chloroform (1.5 mL) and acetone (4 mL). The QD pellet was redispersed in hexane (4 mL) and extracted three times with methanol. The purified QDs (~2.5 µM, 3 mL) were mixed with NMF (2 mL) and TMAH solution (195 µL) in a glass vial and vigorously stirred for 1 h until all QDs transferred to the NMF phase. The transparent QD dispersion in NMF (1 nmol, ~280 µL) was diluted with DMSO (750 µL) in a glass vial equipped with a magnetic stir bar. P-IM or P-IM-N3 dissolved in DMSO (11.3 mg mL⁻¹, ~159 µL) was added dropwise to the QDs while stirring. This mixture was then bubbled with nitrogen for 2 min and stirred for 2 h at 110 °C. The solution was then cooled to room temperature and the QDs were precipitated with the addition of ether (5 mL) and chloroform (2 mL). The QD pellet was dispersed in sodium borate buffer (50 mM, pH 8.5). Excess polymer was removed by filtration with a 50 kDa molecular weight cutoff (MWCO) Amicon ultra-centrifugal filter (Millipore), and the QDs were finally dispersed in sodium borate buffer. Homogeneity and hydrodynamic size were analyzed through gel permeation chromatography, shown in Supplementary Figure 4a.

Conjugation of QDs to EGF

P-IM-N3-coated QD565, QD605, and QD744 were conjugated to DBCO-functionalized SAv by click-mediated triazole formation, and then conjugated to EGF through a single N-terminal biotin. QDs were conjugated to SAv using the following protocol 23. SAv (180 µL, 0.5 mg mL⁻¹) was first mixed with a 5-fold molar excess of DBCO-sulfo-NHS ester (1.6 µL, 5 mM in DMSO) and incubated on ice for 2 h. The reaction was quenched by dilution with a Tris-HCl (9 µL, 1 M) solution. Unreacted DBCO-sulfo-NHS ester was removed by filtration using a 0.5 mL Amicon centrifugal filter with 30 kDa MWCO. It was previously verified that these reaction conditions yield nearly 1:1 conjugates between QDs and SAv 23, further confirmed by nearly complete shifts of agarose gel electrophoresis bands of the QDs after SAv conjugation; bands further shifted after addition of biotinylated 90-mer single-stranded DNA, shown in Supplementary Figure 4b.
The DNA sequence was 5′-Biotin/(T)68 TAG CCA GTG TAT CGC AAT GAC G-3′. DBCO-SAv was then mixed with P-IM-N3-coated QDs at a 1:1 molar ratio (0.5 µM) at 4 °C for 12 h. Then, a 50-fold molar excess of 2-azidoacetic acid was added and unreacted reagents were removed by filtration with a 0.5 mL Amicon centrifugal filter with 100 kDa MWCO. QD-SAv was then conjugated to EGF-biotin by mixing EGF-biotin with QD-SAv at specific ratios to a final QD concentration of 0.2 µM in PBS at 4 °C for 4 h. Gel electrophoresis with a hybrid polyacrylamide (PA)–agarose gel (2% PA and 0.5% agarose) was used to characterize the conjugates 23,63. To ensure that the conjugation between QD-SAv and biotin-EGF was functionally monovalent, we varied the biotin-EGF:QD-SAv ratio and observed that the dose response in cells followed a linear trend with increasing conjugation ratio until saturation (Fig. 2b). Thus, by choosing a biotin-EGF:QD-SAv ratio of 0.33:1, well within the linear regime, we could ensure that the QD-EGF complex was largely monovalent. We have also verified that these QD-EGF conjugates are highly specific and functional (Supplementary Figures 1–2).

Conjugation of QDs to IgG

P-IM-coated QD605 was conjugated to a monoclonal IgG antibody against EGFR (EGFR.1 clone) through a protein A linker. Protein A contained a single his-tag, allowing rapid, efficient, and functional conjugation to QDs with P-IM coatings by metal chelation of the QD surface 23. First, the QDs were mixed with a 4-fold molar excess of his-tag protein A in PBS at a QD concentration of 1 µM at room temperature for 2 h. Then, anti-EGFR IgG was added at a molar ratio of 4:1 IgG:QD in PBS to reach a QD concentration of 0.8 µM. The mixture was incubated at room temperature for 3 h and then stored at 4 °C until use. Thirty minutes prior to use, the IgG conjugates were diluted in serum-free, phenol red-free DMEM supplemented with 0.8% BSA.

Fibronectin labeling

Alexa Fluor 488-labeled fibronectin was prepared by mixing Alexa Fluor 488 NHS ester and fibronectin from human plasma (1 mg mL⁻¹, 1 mL) at a 10:1 molar ratio in 0.1 M sodium bicarbonate buffer (pH 8.3) at room temperature for 1 h in the dark. Unreacted dye was quenched by the addition of glycine (20 mM), followed by 10 min of incubation and purification using a MiniTrap Sephadex G-25 column (GE Healthcare) with PBS mobile phase. After purification, there was a mean of 4.5 Alexa Fluor 488 molecules per fibronectin based on ultraviolet–visible absorption spectrophotometry. Immediately before use, Alexa Fluor 488-labeled fibronectin (25 µg mL⁻¹, 1 mL) was oxidized with sodium periodate (3.5 mg mL⁻¹) for 45 min at room temperature to form ketones. The oxidized protein solution was then filtered through a 0.2 µm syringe filter.

Hydrogel substrate preparation

PA hydrogels were fabricated on glass coverslips (18 mm, Thermo Fisher Scientific) 64,65. First, coverslips were washed with ethanol and deionized water. Each coverslip was placed in a well of a 12-well plate and amine-functionalized with 1 mL APTES (0.5% v/v in deionized water) at room temperature for 3 min. Coverslips were then washed three times with deionized water, followed by treatment with 1 mL glutaraldehyde (0.5% v/v in deionized water) at room temperature for 30 min to generate aldehydes. A stock PA solution prepared by mixing acrylamide (25 mL, 20%) and bisacrylamide (4.9 mL, 2%) was passed through a 0.2 µm cutoff filter and degassed by bubbling with nitrogen.
For each sample, ammonium persulfate (0.1%) and TEMED (0.1%) were added to the PA solution to initiate cross-linking. The PA solution (20 µL) was then sandwiched between the functionalized glass coverslip and a glass slide with a hydrophobic surface for 20 min. The hydrogel-coated glass coverslips were then detached from the glass slide and placed in wells of a 12-well plate. The hydrogel surfaces were treated with hydrazine hydrate for 2 h, rinsed with 5% glacial acetic acid for 1 h, and finally incubated in deionized water overnight. Immediately before use, PA hydrogels were dried at room temperature for 1.5 h and sterilized under ultraviolet light for 15 min.

Micro-contact printing of fibronectin on hydrogels

Microislands of fluorescent fibronectin were deposited by stamping onto PA hydrogels as 500 µm² rectangles with specific aspect ratios (5, 1.5, and 1). A PDMS stamp was fabricated by polymerization on a patterned master of photoresist (SU-8, MicroChem) coated on a silicon wafer by photolithography. PDMS stamps were cleaned with ethanol and sterile water immediately before use. Oxidized Alexa Fluor 488-labeled fibronectin (25 µg mL⁻¹, 150 µL) was then added to the top of the patterned PDMS stamp and allowed to adsorb for 30 min. Excess fibronectin solution was quickly removed under a nitrogen stream and the fibronectin-coated PDMS surface was immediately transferred to the dried PA hydrogel by stamping. The fibronectin-printed PA hydrogel was then submerged in PBS in a 12-well plate and was ready for cell seeding.

Isolated QDs on glass coverslips

P-IM-N3-coated QD744 in PBS (1 nM) were spin coated (2500 rpm, 30 s) onto #1.5 glass coverslips that had been cleaned with ethanol, methanol, and acetone.

Unpatterned cells without QD treatment

MCF-7 cells (ATCC, HTB-22), MDA-MB-231 cells (ATCC, HTB-26), or MDA-MB-468 cells (ATCC, HTB-132) (50,000 cells mL⁻¹, 0.5 mL) were cultured on Lab-Tek II eight-well chamber slides (Nunc) in phenol red-free DMEM supplemented with 10% FBS. After 8 h, the cells were starved overnight in serum-free, phenol red-free DMEM containing 0.8% BSA. The cells were then fixed with 4% PFA in PBS on ice for 15 min, washed three times with ice-cold PBS, and permeabilized with methanol on ice for 6 min. Cells were then stained with 1 µg mL⁻¹ Hoechst at room temperature for 10 min and washed three times with PBS.

Unpatterned cells treated with QD-EGF or dye-EGF

Samples were prepared similarly to unpatterned cells without QD treatment with the following changes: after overnight starvation, the medium was removed and replaced with ice-cold serum-free, phenol red-free DMEM supplemented with 0.8% BSA containing different concentrations of QD-EGF and/or dye-EGF. Cells were incubated on ice for 5 or 10 min, washed three times with ice-cold PBS, and then fixed, permeabilized, and stained with Hoechst as described above.

Unpatterned cells for EGF internalization assay

Samples were prepared similarly to unpatterned cells without QD treatment with the following changes: after overnight starvation, the medium was removed and replaced with pre-warmed serum-free, phenol red-free DMEM supplemented with 0.8% BSA containing different concentrations of QD-EGF, dye-EGF, or QD-SAv. Cells were incubated at 37 °C for 5 min, and then washed three times with pre-warmed serum-free, phenol red-free DMEM supplemented with 0.8% BSA. Cells were further incubated at 37 °C for 25 min, then fixed, permeabilized, and stained with Hoechst as described above.
Patterned cells treated with QD-EGF

To visualize the spatial localization of QD-EGF across multiple cells, cells were shaped to specific geometries by growth on islands using the micro-contact printing methodology described above. MDA-MB-231 cells (30,000 cells mL⁻¹, 1 mL) in phenol red-free DMEM supplemented with 10% FBS were seeded into each well of a 12-well plate containing coverslips with fibronectin-patterned PA hydrogels. After 2.5 h, cells were starved in serum-free, phenol red-free DMEM supplemented with 0.8% BSA for 5 h. Cells were then treated with QD744-EGF (1 nM) in the same medium for 5 min. Cells were then washed three times with serum-free medium and maintained for specific time periods in serum-free medium. The cells were washed three times with ice-cold serum-free medium and incubated with QD605-IgG (20 nM) on ice for 6 min. Cells were washed three times with ice-cold PBS, fixed, and stained with Hoechst as described in Protocol 2.

Patterned cells treated with QD-EGF and gefitinib

Samples were prepared similarly to patterned cells treated with QD-EGF with the following changes: after starvation, cells were treated with different concentrations of gefitinib as indicated for 40 min in serum-free DMEM. The medium was removed and replaced with ice-cold serum-free, phenol red-free DMEM supplemented with 0.8% BSA containing QD744-EGF and the same concentration of gefitinib for 5 min. Cells were then washed three times with serum-free medium and maintained in serum-free medium with the same concentration of gefitinib for the indicated time. The cells were then treated with the QD605-IgG membrane stain according to Protocol 5, and the remainder of the protocol was followed.

Patterned cells treated with QD-EGF and Cetuximab

Samples were prepared similarly to patterned cells treated with QD-EGF with the following changes: after starvation, cells were treated with Cetuximab (20 nM) as indicated for 1.5 h in serum-free DMEM. The medium was removed and replaced with pre-warmed serum-free, phenol red-free DMEM supplemented with 0.8% BSA containing 1 nM QD744-EGF and the same concentration of Cetuximab for 5 min. Cells were then washed three times with serum-free medium and maintained in serum-free medium with the same concentration of Cetuximab for the indicated time. Cells were then fixed, permeabilized, and stained with Hoechst as described above.

Patterned cells with membrane stain

Samples were prepared similarly to patterned cells treated with QD-EGF with the following changes: after starvation, cells were stained with the MemBrite Fix Cell Surface Staining Kit following the manufacturer's protocol. Briefly, cells were treated with pre-staining solution in HBSS at 37 °C for 5 min. Cells were then treated with staining solution diluted in HBSS (1:1000 dilution) at 37 °C for 5 min. The cells were washed three times with ice-cold serum-free medium and treated with the QD605-IgG membrane stain according to Protocol 5. The remainder of the protocol was followed.

Patterned cells with EGFR stain

Samples were prepared similarly to patterned cells treated with QD-EGF with the following changes: after starvation, cells were washed three times with PBS, fixed, and permeabilized as described above. Cells were then blocked with 1% BSA in PBS at room temperature for 15 min and stained with mouse anti-EGFR antibody (1 μg mL⁻¹) in 1% BSA at 4 °C overnight.
After incubation, cells were washed three times with PBS and blocked with 1% BSA and 2% goat serum in PBS at room temperature for 15 min. Cells were then stained with Alexa Fluor 647-conjugated goat anti-mouse secondary antibody (1:300 stock dilution) and Hoechst (1 μg mL⁻¹) at room temperature for 1 h.

Western blot

MDA-MB-231 cells (300,000 cells) were seeded in each well of a 6-well plate for 72 h in DMEM supplemented with 10% FBS. Cells were then starved in serum-free DMEM supplemented with 0.8% BSA for 5 h. Serum-starved cells were then treated with gefitinib at the indicated concentrations for 40 min in serum-free DMEM containing 0.8% BSA. Cells were then stimulated with QD744-EGF (1 nM) in the presence of different concentrations of gefitinib for 5 min and washed three times with ice-cold Tris-buffered saline (TBS; 50 mM Tris, 150 mM NaCl, pH 7.5). Cells were lysed by treatment with radioimmunoprecipitation assay buffer (50 mM Tris, 150 mM NaCl, 2 mM EDTA, 1% Triton X-100, 0.1% SDS, 0.5% deoxycholate) supplemented with Halt Protease Inhibitor Cocktail (Thermo Fisher Scientific) and phosphatase inhibitors (50 mM NaF, 1 mM NaVO3) on ice for 15 min. Cell lysates were collected after centrifugation for 15 min at 14,000 g at 4 °C; a small fraction was aliquoted for protein concentration measurement using the bicinchoninic acid assay. Protein concentrations for each sample were adjusted to ~0.9 mg mL⁻¹. Cell lysates were then mixed with 5× sample buffer (1 M Tris, pH 9, 10 g SDS, 12.5 mL glycerol, 100 µL 0.5 M EDTA, 50 mg bromophenol blue, 100 mM TCEP) to a final concentration of 1×, heated at 75 °C for 20 min, aliquoted, and stored at −80 °C until use. Samples were loaded into wells of an SDS-polyacrylamide gel; electrophoresis was performed, and gels were transferred to a polyvinylidene difluoride membrane (Immobilon-P membrane, Millipore). The membrane was washed three times with deionized water followed by Tween-20 (0.1%) in TBS for 5 min each. The membrane was then blocked with 5% milk and 0.1% Tween-20 in TBS for 1 h. The membrane was treated overnight at 4 °C with a solution of primary antibodies in 1% milk and 0.1% Tween-20 in TBS. Primary antibodies used were rabbit anti-EGFR (1:500 dilution), mouse anti-human pEGFR (1:250 dilution), and rabbit anti-glyceraldehyde 3-phosphate dehydrogenase (GAPDH) (1:1000 dilution; Cell Signaling). Membranes were washed with 1% milk and 0.1% Tween-20 in TBS five times before incubation with horseradish peroxidase-conjugated secondary antibodies (anti-mouse or anti-rabbit, 1:5000 dilution) for 1 h. Membranes were again washed five times with 1% milk and 0.1% Tween-20 in TBS, and once with 0.1% Tween-20 in TBS, before bands were developed with enhanced chemiluminescence substrate (ECL, Thermo Fisher Scientific) and imaged on autoradiography film (Denville Scientific). Images were analyzed using ImageJ software (National Institutes of Health). The band intensities for pEGFR and EGFR were each divided by that of GAPDH; the pEGFR/GAPDH ratio was then divided by the EGFR/GAPDH ratio. The intensities were normalized to the sample treated with 1 nM QD-EGF without gefitinib to calculate the ratio of pEGFR to total EGFR under the different experimental conditions.

Flow cytometry

MDA-MB-231 cells were seeded in a T-75 cell culture flask in DMEM supplemented with 10% FBS and cultured until 90% confluence. Cells were washed once with PBS and treated with 5 mL Accutase at room temperature until fully detached from the surface.
Accutase was removed by centrifugation for 5 min at 200 g and cells were washed once with ice-cold PBS containing 0.5% BSA and resuspended in the same medium at 3 × 10⁶ cells mL⁻¹. Cell suspensions were then mixed in equal volume (25 μL) with ice-cold solutions of QD-EGF (0.06–120 nM; EGF:QD = 0.33) or dye-EGF (0.02–40 nM). Control samples to measure nonspecific binding were prepared identically but with 2 μM unlabeled EGF. The cells were incubated at 4 °C for 4 h with rocking, washed three times with ice-cold PBS containing 0.5% BSA, and resuspended in PBS. Fluorescence intensities of cells were measured with 488 nm laser excitation, a 685 LP dichroic mirror, and a 695/40 nm BP emission filter for QD-EGF, or with 561 nm laser excitation and a 582/15 nm BP emission filter for dye-EGF. Single cells were selected using a forward scatter width gate and a minimum of 10,000 single cells were measured for each condition. The percent of maximum EGF bound for each condition, P(c), was calculated using the following equation:

$$P(c) = \frac{\overline{I_{c,\mathrm{tot}}} - \overline{I_{c,\mathrm{ns}}}}{\overline{I_{c_{\mathrm{max}},\mathrm{tot}}} - \overline{I_{c_{\mathrm{max}},\mathrm{ns}}}} \times 100,$$ (2)

where \(\overline{I_{c,\mathrm{tot}}}\) and \(\overline{I_{c,\mathrm{ns}}}\) are the mean fluorescence intensities of cells treated with concentration c of QD-EGF or dye-EGF in the absence and presence of unlabeled EGF, respectively, and \(\overline{I_{c_{\mathrm{max}},\mathrm{tot}}}\) and \(\overline{I_{c_{\mathrm{max}},\mathrm{ns}}}\) are the mean fluorescence intensities of cells treated with the maximum concentration (c_max) of QD-EGF or dye-EGF in the absence and presence of unlabeled EGF, respectively. The dissociation constant, KD, was calculated by fitting the QD-EGF or dye-EGF binding curve to the following equation using Prism (GraphPad Software):

$$P(c) = \frac{B_{\mathrm{max}} \cdot c}{K_{\mathrm{D}} + c},$$ (3)

where B_max is the maximum percent of specific binding.

2D and 3D microscopy

Fluorescence microscopy of isolated QDs and cells was performed using wide-field illumination on a Zeiss Axio Observer Z1 inverted microscope with a ×100 1.45 NA alpha Plan-Fluar oil immersion objective, 100 W halogen lamp illumination, a 488 nm/100 mW OPSL laser, and a 561 nm/40 mW diode laser. Images were acquired using a Photometrics eXcelon Evolve 512 EMCCD camera through the Zeiss ZEN software. Excitation light was filtered using Semrock and Zeiss filters (G 365, BP 470/40 nm, BP 482/18 nm, BP 561/14 nm). Emission signals were filtered using Semrock bandpass filters (445/50, 525/50, 562/40, 600/37, and 732/68 nm). Brightfield images were acquired using transmitted-light illumination (12 V, 100 W halogen lamp) with DIC prism III/0.55.

Cellular autofluorescence spectrum measurement

Cellular autofluorescence spectra were acquired with 488 nm excitation using two different instruments. For wavelengths between 530 and 727 nm, a Zeiss 710 confocal scanner on an Axio Observer Z1 inverted microscope with a ×63 1.4 NA oil immersion objective and a tunable Mai-Tai Ti-Sapphire laser (Spectra Physics) with 488 nm laser excitation was used. Intensities were acquired using a QUASAR 34-channel spectral detector with 9.7 nm wavelength increments.
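As a minimal illustration of the saturation-binding fit in equation (3) above (performed in Prism in this work), the following MATLAB sketch fits a titration with lsqcurvefit from the Optimization Toolbox. The concentration and response values are placeholders, not measured data.

```matlab
% Minimal sketch of the binding-isotherm fit in equation (3).
% c: QD-EGF concentration series (nM); P: percent of maximum EGF bound
% from equation (2). Both vectors are hypothetical placeholder values.
c = [0.06 0.2 0.6 2 6 20 60 120];          % nM
P = [2 6 16 39 66 86 95 100];              % percent of maximum bound

model = @(b, c) b(1).*c ./ (b(2) + c);     % b(1) = Bmax, b(2) = KD
b0 = [100, 3];                             % initial guesses for [Bmax, KD]
b = lsqcurvefit(model, b0, c, P);          % least-squares fit
fprintf('Bmax = %.1f%%, KD = %.2f nM\n', b(1), b(2));
```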
For wavelengths above 727 nm, measurements were performed using the Zeiss Axio Observer Z1 inverted microscope described above with bandpass filters, with one wavelength redundant with the confocal scanner to allow normalization of the data between the two instruments. Individual cells from samples prepared using Protocol 2 were imaged to collect autofluorescence intensity measurements at a specific emission wavelength, I_AF(λ_em), normalized to the detector sensitivity using the equation below:

$$I_{\mathrm{AF}}(\lambda_{\mathrm{em}}) = \frac{\overline{I_{\mathrm{px},\mathrm{cell}}(\lambda_{\mathrm{em}})} - \overline{I_{\mathrm{px},\mathrm{b}}(\lambda_{\mathrm{em}})}}{\int_{\lambda_1}^{\lambda_2} \Phi(\lambda)\,\mathrm{d}\lambda},$$ (4)

where \(\overline{I_{\mathrm{px},\mathrm{cell}}(\lambda_{\mathrm{em}})}\) is the mean pixel intensity on a cell at wavelength λ_em, \(\overline{I_{\mathrm{px},\mathrm{b}}(\lambda_{\mathrm{em}})}\) is the mean pixel intensity of background (non-cell regions) at wavelength λ_em, \(\int_{\lambda_1}^{\lambda_2} \Phi(\lambda)\,\mathrm{d}\lambda\) is the integrated quantum efficiency of the camera spanning the spectral channel bandwidth centered at wavelength λ_em, and λ_1 and λ_2 are the lower and upper cutoffs of the emission bandwidth. Autofluorescence at each wavelength was normalized by dividing by I_AF(562 nm).

Autofluorescence and single fluorophore intensities

Unpatterned cell samples were prepared as described above and stained with EGF conjugates of three different QDs emitting at 565, 605, and 744 nm, or with a dye. The cells were then imaged at three emission wavelengths (562, 600, and 732 nm) for QDs under otherwise identical conditions and instrument settings, or imaged with 561 nm laser excitation and 600 nm emission for the dye. Single QDs/dyes were identified using methods 18 in which videos of QD/dye spots were saved as TIFF stacks and imported into Matlab for QD/dye spot detection and single-QD/dye identification. QD/dye spot centroids (x_0, y_0) were obtained from images using the detection/estimation/deflation algorithm from the multiple-target tracing (MTT) algorithm of Sergé et al. 66. Centroid locations were rounded to the closest integral pixel values, ([x_0], [y_0]), and an intensity histogram of a 3 × 3 pixel array centered at this position for the video was then fit to a sum of two functions, a Gaussian background (mean μ_1, standard deviation σ_1, and area a_1) and a skewed Gaussian QD/dye signal (mean μ_2, standard deviation σ_2, area a_2, and skew factor r). Curve fits that satisfied previous criteria to distinguish single-QD/dye photophysical dynamics were used to identify single QDs/dyes, for which the intensity, I_QD/dye(λ_em), was determined as:

$$I_{\mathrm{QD/dye}}(\lambda_{\mathrm{em}}) = \frac{\mu_2 - \mu_1}{\Phi(\lambda_{\mathrm{em}})},$$ (5)

where Φ(λ_em) is the quantum efficiency of the camera at wavelength λ_em.
Autofluorescence at a specific wavelength was calculated over cell areas for which there were no QDs, using the following equation:

$$I_{\mathrm{AF}}(\lambda_{\mathrm{em}}) = \frac{\sum_{x=[x_0]-1}^{[x_0]+1} \sum_{y=[y_0]-1}^{[y_0]+1} I(x,y,\lambda_{\mathrm{em}}) - \overline{I_{3\times3,\mathrm{b}}(\lambda_{\mathrm{em}})}}{\Phi(\lambda_{\mathrm{em}})},$$ (6)

where I(x, y, λ_em) is the intensity for pixel (x, y), \(\overline{I_{3\times3,\mathrm{b}}(\lambda_{\mathrm{em}})}\) is the mean 3 × 3 pixel intensity sum of background regions, and ([x_0], [y_0]) is the centroid of each 3 × 3 pixel array of autofluorescence.

Deconvolution

3D volumetric stacks (250 nm z-spacing, 80–200 images) of QDs were deconvolved using AutoQuantX3 (Media Cybernetics). All stacks were deconvolved using the following settings: fixed point spread function (PSF), 60 iterations, and low noise level, as recommended by Media Cybernetics. PSF images were experimentally acquired using fluorescent TetraSpeck microspheres (0.1 μm diameter; Thermo Fisher Scientific) and calculated using the PSF image processing tool in the Zeiss ZEN software.

Isolated QD intensity calibration

Two stacks of images of isolated QDs on glass coverslips were collected in wide-field excitation mode: a time stack at a single z-focal plane (4000 images; 100 ms exposure time) and a 3D volumetric stack (250 nm z-spacing, 80 images; 100 ms exposure time). 3D z-stacks were deconvolved using AutoQuantX3. Using custom Matlab codes, the deconvolved 3D intensity of each spot \((I_{\mathrm{spot}}^{3\mathrm{DD}})\) was then calculated as the integrated intensity of a 3 × 3 × 11 voxel region centered at the centroid position according to the following equation:

$$I_{\mathrm{spot}}^{3\mathrm{DD}} = \sum_{x=[x_0]-1}^{[x_0]+1} \sum_{y=[y_0]-1}^{[y_0]+1} \sum_{z=[z_0]-5}^{[z_0]+5} I(x,y,z) - \overline{I_{3\times3\times11,\mathrm{b}}},$$ (7)

where [x_0], [y_0], and [z_0] are the centroid positions rounded to the nearest pixel integer, I(x, y, z) is the intensity of a single pixel, and \(\overline{I_{3\times3\times11,\mathrm{b}}}\) is the mean 3 × 3 × 11 voxel intensity sum of the background region. Using the same 2D spot centroid positions ([x_0], [y_0]), 3 × 3 time-course intensities \((I_{\mathrm{spot}}^{2\mathrm{D}})\) were calculated according to the following equation:

$$I_{\mathrm{spot}}^{2\mathrm{D}}(t) = \sum_{x=[x_0]-1}^{[x_0]+1} \sum_{y=[y_0]-1}^{[y_0]+1} I(x,y,t).$$ (8)

Using Matlab, all intensities for a spot were binned into a histogram composed of 100 bins. The intensity histogram was fitted by least-squares estimation to a Gaussian mixture model with 2–5 Gaussians, for which one was the background noise function corresponding to the off-state of QD blinking. To maximize the accuracy of fitting, we imposed the following fitting criteria: (1) a correlation coefficient greater than or equal to 0.98 between the fit and the data, (2) each Gaussian contributes at least 8% of the total area, (3) a maximum 75% overlap between any two Gaussians, and (4) a maximum 20% difference in area between each Gaussian and its corresponding data region. For each spot, the number of Gaussians that yielded the minimum AIC value was identified as optimal.
AIC was calculated according to the following equation:

$$\mathrm{AIC} = n_{\mathrm{bin}}\ln\left(\frac{\mathrm{RSS}}{n_{\mathrm{bin}}}\right) + 2(3n_{\mathrm{Gauss}} - 1),$$ (9)

where n_bin is the number of bins used to construct the intensity histogram, RSS is the residual sum of squares, and n_Gauss is the number of Gaussians used to fit the intensity histogram.

QDC-3DM methodology

Two stacks of images of the QDs were collected in wide-field excitation mode: a time stack at a single z-focal plane (600 images; 50 ms exposure time) and a 3D volumetric stack (250 nm z-spacing, 100–200 images; 50 ms exposure time). 3D z-stacks were deconvolved using AutoQuantX3. Deconvolved 3D images were then imported into Imaris (Bitplane), which has an automatic 3D detection algorithm (surface mode) to determine the centroid positions (x_0, y_0, z_0) and intensity \((I_{\mathrm{spot}}^{3\mathrm{DD}})\) of spots with a range of sizes. These spot data, the time-stack images, and the deconvolved 3D images were imported into Matlab and a custom script was used to calculate the number of QD-EGF per cell (a minimal sketch of this counting arithmetic is given below). [1] Single-QD identification: Spot positions (x_0, y_0, z_0) were rounded to the nearest integer pixel values, ([x_0], [y_0], [z_0]), and time-course intensities of the corresponding 2D spots, \(I_{\mathrm{spot}}^{2\mathrm{D}}(t)\), were summed over a 3 × 3 pixel array centered about the centroid positions ([x_0], [y_0]) at each time point using equation (8). Temporal intensities \(I_{\mathrm{spot}}^{2\mathrm{D}}(t)\) for each spot were binned into histograms and fit to a sum of two functions, a Gaussian background and a skewed Gaussian signal. Single QDs were identified from distribution fits that satisfied previous criteria to distinguish single-QD photophysical dynamics 18. [2] Single-QD intensity calibration: Deconvolved 3D spot intensities \((I_{\mathrm{spot}}^{3\mathrm{DD}})\) for which spots correspond to single QDs \((I_{\mathrm{spot}}^{3\mathrm{DD}} = I_{1\mathrm{QD}}^{3\mathrm{DD}})\) were averaged to calculate the mean single-QD intensity,

$$\overline{I_{1\mathrm{QD}}^{3\mathrm{DD}}} = \frac{1}{n}\sum_{i=1}^{n} I_{1\mathrm{QD}_i}^{3\mathrm{DD}},$$ (10)

where n is the number of QDs identified as single. [3] Spot intensity calibration: The number of QDs within each deconvolved 3D spot, N_QD,spot, for images collected under the same conditions and experimental set was then calculated as:

$$N_{\mathrm{QD},\mathrm{spot}} = I_{\mathrm{spot}}^{3\mathrm{DD}} \cdot \left(\overline{I_{1\mathrm{QD}}^{3\mathrm{DD}}}\right)^{-1}.$$ (11)

For any 3D field of view, such as a single cell (N_QD,cell) containing m spots, the total number of QDs can be calculated as the sum of QDs in each spot:

$$N_{\mathrm{QD},\mathrm{cell}} = \sum_{i=1}^{m} N_{\mathrm{QD},\mathrm{spot}_i}.$$ (12)

Internalization fraction calculation

Cell membranes were mapped using 3D images of QD605-IgG membrane stains using the Matlab alphaShape function by importing ([x], [y], [z]) coordinates of QD605-IgG with an alpha radius of 50. Spatial coordinates for QD605-IgG spots were obtained using the MTT detection/estimation/deflation algorithm 66 for each 2D image of a 3D z-stack spanning the entire cell thickness. For spots detected in the same ([x], [y]) positions across adjacent z-planes, [z] values were averaged.
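Returning to the counting steps above, the following minimal MATLAB sketch implements equations (10)–(12), with an anonymous-function helper for the AIC of equation (9). The spot intensities and single-QD flags are placeholder values that would, in practice, come from the deconvolution and time-trace analyses described in this section.

```matlab
% Minimal sketch of QDC-3DM steps [2]-[3] (equations (10)-(12)).
% I3DD: deconvolved 3D spot intensities for one cell; isSingle: spots
% identified as single QDs in step [1]. Both are hypothetical values.
I3DD     = [950 1020 2100 980 3050 1010];   % placeholder spot intensities
isSingle = logical([1 1 0 1 0 1]);          % from time-trace analysis

I1QD    = mean(I3DD(isSingle));             % equation (10): calibration
nQDspot = I3DD ./ I1QD;                     % equation (11): QDs per spot
nQDcell = sum(nQDspot);                     % equation (12): QDs per cell
fprintf('Estimated %.0f QD-EGF in this cell\n', nQDcell);

% AIC model selection for the histogram fits (equation (9)); RSS is the
% residual sum of squares of a fit with nGauss Gaussians over nBin bins.
aic = @(RSS, nBin, nGauss) nBin*log(RSS/nBin) + 2*(3*nGauss - 1);
```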
Nucleus \(([x_{\mathrm{nuc}}],[y_{\mathrm{nuc}}],[z_{\mathrm{nuc}}])\) coordinates were determined using Imaris. In Matlab, a vector was constructed connecting the nucleus and the surface through each QD744-EGF spot centroid position \(([x_{\mathrm{QD}}],[y_{\mathrm{QD}}],[z_{\mathrm{QD}}])\) derived from the above deconvolved 3D images, with surface intersection coordinates \(([x_{\mathrm{surf}}],[y_{\mathrm{surf}}],[z_{\mathrm{surf}}])\). An EGF spot was identified as internalized if it satisfied the following condition of relative distance from the surface: $$\left[ {\frac{{\left( {x_{{\mathrm{QD}}} - x_{{\mathrm{nuc}}}} \right)^2 + \left( {y_{{\mathrm{QD}}} - y_{{\mathrm{nuc}}}} \right)^2 + \left( {z_{{\mathrm{QD}}} - z_{{\mathrm{nuc}}}} \right)^2}}{{\left( {x_{{\mathrm{surf}}} - x_{{\mathrm{nuc}}}} \right)^2 + \left( {y_{{\mathrm{surf}}} - y_{{\mathrm{nuc}}}} \right)^2 + \left( {z_{{\mathrm{surf}}} - z_{{\mathrm{nuc}}}} \right)^2}}} \right]^{1/2} \le 0.8.$$ (13) The fraction of EGF internalized ( f ) was then calculated using the following equation for a cell in which n spots are internalized (a sketch of this test is given below): $$f = \frac{1}{{N_{{\mathrm{QD}},{\mathrm{cell}}}}}\mathop{\sum }\limits_{i = 1}^n N_{{\mathrm{QD}},{\mathrm{spot}}_i}.$$ (14)

Membrane stain analysis

To evaluate the accuracy of the QD membrane stain, cells were co-stained with MemBrite according to Protocol 7. Volumetric images of membranes were collected using a Zeiss 710 confocal scanner on an Axio Observer Z1 inverted microscope with a ×63 1.4 NA oil-immersion objective, with 250 nm z-spacing and 640/660 nm excitation/emission bands. The cell membranes at each z-plane of the confocal images were then manually segmented to serve as the membrane standard for calculating the accuracy of the membrane maps obtained from QD605-IgG membrane stains and alpha-shape analysis of epifluorescence images, as described above for calculating the internalization fraction. Differences in distances between the two cell membrane maps were calculated for each pixel of the membrane obtained via confocal imaging and plotted in 3D using Matlab.

2D and 1D projections of EGF localization

Cells grown on micro-contact printed surfaces have the same adhesion shapes, which can be observed using Alexa Fluor 488-labeled fibronectin. The fluorescent adhesion patterns were aligned using a custom Matlab code. The EGF locations were transformed similarly and projected either onto a 2D surface or a 1D line.

EGF-binding simulation

EGF-EGFR binding kinetics on a population of cells with heterogeneous EGFR expression were modeled using a Matlab code. The EGF-EGFR kinetic model involves three processes: association, dissociation, and internalization.
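Before the kinetic equations are detailed, here is a brief Python sketch of the internalization criterion of equations (13) and (14) above (names are illustrative, not the authors' Matlab code):

```python
import numpy as np

def is_internalized(qd, nuc, surf, threshold=0.8):
    """Eq. (13): a QD-EGF spot counts as internalized if its distance from
    the nucleus is at most 80% of the nucleus-to-surface distance measured
    along the same direction."""
    r_qd = np.linalg.norm(np.asarray(qd, float) - np.asarray(nuc, float))
    r_surf = np.linalg.norm(np.asarray(surf, float) - np.asarray(nuc, float))
    return r_qd / r_surf <= threshold

def internalized_fraction(n_qd_per_spot, internalized_mask, n_qd_cell):
    """Eq. (14): fraction of EGF internalized = QDs in internalized spots
    divided by the total QD count of the cell."""
    n_qd_per_spot = np.asarray(n_qd_per_spot, float)
    return n_qd_per_spot[np.asarray(internalized_mask, bool)].sum() / n_qd_cell
```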
Three differential equations were used to solve for the concentration of free receptor [EGFR]( t ), ligand–receptor complexes [EGF|EGFR]( t ), and internalized complexes [EGF|EGFR] int ( t ): $$\frac{{\mathrm{d}}}{{{\mathrm{d}}t}}\left[ {{\mathrm{EGFR}}} \right]\left( t \right) = - k_{{\mathrm{on}}} \cdot \left[ {{\mathrm{EGF}}} \right]\left( t \right) \cdot \left[ {{\mathrm{EGFR}}} \right]\left( t \right) + k_{{\mathrm{off}}} \cdot \left[ {{\mathrm{EGF}}|{\mathrm{EGFR}}} \right](t),$$ (15) $$\frac{{\mathrm{d}}}{{{\mathrm{d}}t}}\left[ {{\mathrm{EGF}}|{\mathrm{EGFR}}} \right]\left( t \right) = k_{{\mathrm{on}}} \cdot \left[ {\mathrm{EGF}} \right]\left( t \right) \cdot \left[ {\mathrm{EGFR}} \right]\left( t \right) - (k_{{\mathrm{off}}} + k_{{\mathrm{int}}}) \cdot \left[ {{\mathrm{EGF}}|{\mathrm{EGFR}}} \right](t),$$ (16) $$\frac{{\mathrm{d}}}{{{\mathrm{d}}t}}\left[ {{\mathrm{EGF}}|{\mathrm{EGFR}}} \right]_{{\mathrm{int}}}\left( t \right) = k_{{\mathrm{int}}} \cdot \left[ {{\mathrm{EGF}}|{\mathrm{EGFR}}} \right](t),$$ (17) where \(k_{\mathrm{on}}\), \(k_{\mathrm{off}}\), and \(k_{\mathrm{int}}\) are kinetic rate constants for ligand–receptor association, ligand–receptor dissociation, and ligand–receptor internalization, respectively, provided in Tables 2 and 3.

Table 2: EGF-EGFR kinetic rate parameters at 37 °C.
Table 3: EGF-EGFR kinetic rate parameters at 4 °C.

Because experiments were performed in a large medium volume ( \(V_{\mathrm{cell}}\) ~16.7 nL extracellular volume per cell compared to ~1.7 pL intracellular volume for ~15 µm spherical cells), the EGF concentration is approximately constant and equal to the initial value [EGF] 0 , which was 0.03, 0.3, 3, or 30 nM, corresponding to 0.1, 1, 10, or 100 nM of QD with QD:EGF = 3:1. $$\frac{{\mathrm{d}}}{{{\mathrm{d}}t}}\left[ {{\mathrm{EGF}}} \right]\left( t \right) = 0;\left[ {{\mathrm{EGF}}} \right]\left( t \right) = \left[ {{\mathrm{EGF}}} \right]_{\mathrm{0}}.$$ (18) The discrete steady-state population distribution of the active EGFR copy number per cell ( \(N_{\mathrm{R}}\) ) is approximated as a gamma distribution 37, 38, 40, for which: $$p\left( {N_{\mathrm{R}}} \right) = \frac{{N_{\mathrm{R}}^{a - 1}e^{ - N_{\mathrm{R}}/b}}}{{\Gamma (a)b^a}}.$$ (19) Here Γ is the gamma function, a is the inverse of the noise \(( {\overline {N_{\mathrm{R}}} ^2 \cdot \sigma ^{ - 2}})\) that defines the distribution shape, and b is the Fano factor \(( {\sigma ^2 \cdot \overline {N_{\mathrm{R}}} ^{ - 1}})\) that defines the scale, or translational burst size. \(\overline {N_{\mathrm{R}}}\) and σ are the mean and standard deviation of the protein number distribution, respectively. The average number of active receptors per cell is \(\overline {N_{\mathrm{R}}} = 100,000\,{\mathrm{cell}}^{ - 1}\) based on the average EGFR number per MDA-MB-231 cell (200,000), of which ~50% are on the membrane 42, 69. Based on previous quantification of EGFR on MDA-MB-231 cells by flow cytometry using antibody fragments, we use a = 3.34 70. The rate equations were then solved for $$N_{{\mathrm{EGF}}}(N_{\mathrm{R}}) = \left( {\left[ {{\mathrm{EGF}}|{\mathrm{EGFR}}} \right] + \left[ {{\mathrm{EGF}}|{\mathrm{EGFR}}} \right]_{{\mathrm{int}}}} \right) \cdot V_{{\mathrm{cell}}} \cdot N_{\mathrm{A}},$$ (20) where \(N_{\mathrm{A}}\) is Avogadro's number. \(N_{\mathrm{EGF}}\) is solved for each discrete \(N_{\mathrm{R}}\) to yield the average number of EGF ligands for each discrete cell, \(\overline {N_{{\mathrm{EGF}},{\mathrm{R}}}}\).
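At constant [EGF], equations (15)–(18) reduce to a small linear ODE system. A Python sketch using SciPy follows; the rate constants are placeholders (the actual values of Tables 2 and 3 are not reproduced here), and the receptor number is drawn from the gamma distribution of equation (19):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import gamma as gamma_dist

# Placeholder rate constants; the actual values are listed in Tables 2 and 3.
k_on, k_off, k_int = 1e5, 1e-3, 1e-4   # 1/(M s), 1/s, 1/s
EGF0 = 3e-9                            # constant free-EGF concentration, M (Eq. 18)
N_A, V_cell = 6.022e23, 16.7e-9        # Avogadro's number; ~16.7 nL in liters

def rhs(t, y):
    """Eqs. (15)-(17): y = [free EGFR, EGF|EGFR, internalized EGF|EGFR]."""
    r, c, ci = y
    return [-k_on * EGF0 * r + k_off * c,
            k_on * EGF0 * r - (k_off + k_int) * c,
            k_int * c]

# Eq. (19): receptor copy number per cell ~ Gamma(a, scale=b), mean a*b = 100,000
a = 3.34
b = 1e5 / a
n_r = gamma_dist.rvs(a, scale=b)

r0 = n_r / (N_A * V_cell)              # initial receptor concentration, M
sol = solve_ivp(rhs, (0, 3600), [r0, 0.0, 0.0])

# Eq. (20): average number of EGF bound per cell (surface + internalized)
n_egf = (sol.y[1, -1] + sol.y[2, -1]) * V_cell * N_A
print(n_egf)
```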
Each average is then spread by a Poisson distribution to account for intrinsic noise 39 as $$p\left( x \right) = {\mathrm{e}}^{ - \bar x}\frac{{\bar x^x}}{{x!}},$$ (21) where \(\bar x = \overline {N_{{\mathrm{EGF}},{\mathrm{R}}}}\) and \(x = N_{{\mathrm{EGF}},{\mathrm{R}}}\). Then, each \(p(x) = p(N_{{\mathrm{EGF}},{\mathrm{R}}})\) is scaled by \(p(N_{\mathrm{R}})\) and summed across \(N_{\mathrm{R}}\) to generate the \(N_{\mathrm{EGF}}\) distribution. For Fig. 4c, the complete cell population was simulated. For Fig. 4d, individual cells were sampled from the \(N_{\mathrm{R}}\) distribution in the same number as in the experimental data, and then used to calculate the number of EGF bound by sampling the Poisson spread of the kinetic binding. The statistical difference between the experimental and simulated \(N_{\mathrm{EGF}}\) distributions was calculated using the Mann–Whitney U test.

Instrumentation

Cell and QD imaging was performed using a Zeiss Axio Observer Z1 inverted microscope for wide-field illumination in the Smith Lab, or a Zeiss 710 confocal scanner on an Axio Observer Z1 inverted microscope in the Carl R. Woese Institute for Genomic Biology core facility at the University of Illinois. Gel electrophoresis for QDs and QD conjugates was performed using an EPS-300X system (C.B.S. Scientific Company, Inc.). Gel images were collected using a Bio-Rad Molecular Imager Gel Doc XR system. Gel electrophoresis for western blotting was performed using a Bio-Rad Mini-PROTEAN Tetra cell. Western blotting was carried out using a Bio-Rad Criterion Blotter, and films were imaged using a Konica SRX-101A film processor. Flow cytometry data were acquired using a BD Biosciences LSR Fortessa Cytometry Analyzer equipped with 488 and 561 nm lasers in the Roy J. Carver Biotechnology Center at the University of Illinois. Absorption spectra of QDs were acquired using an Agilent Cary 5000 UV–Vis–NIR spectrometer. All measurements were carried out within the dynamic range of the instrument (absorbance < 4) over the entire spectral range. Fluorescence spectra of QDs using 491 nm excitation were acquired using a Horiba NanoLog spectrofluorometer. The raw fluorescence signal was adjusted for the wavelength-dependent detector sensitivity and for excitation power fluctuations. Electron microscopy images were acquired using a JEOL 2010 LaB6 high-resolution microscope in the Frederick Seitz Materials Research Laboratory Central Research Facilities at the University of Illinois. Hydrodynamic sizes of QDs were measured via an ÄKTApurifier UPC10 (GE Healthcare) with a Superose 6 10/300 GL column (GE Healthcare), controlled using the UNICORN 5.31 Workstation software. Photolithography was performed using a Karl Suss MJB3 Mask Aligner in the Micro and Nanotechnology Laboratory at the University of Illinois.

Statistical information

Except where otherwise noted, values are reported as mean ± standard deviation (s.d.). Statistical significance analyses were calculated using the two-tailed Mann–Whitney test in Origin Pro 9.1. A statistically significant value was denoted with an asterisk (*) for p < 0.05. χ2 goodness-of-fit tests were performed using a built-in function in Matlab.

Code availability

All codes used in this study are available from the corresponding author upon reasonable request.

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

More information: Phuong Le et al, Counting growth factors in single cells with infrared quantum dots to measure discrete stimulation distributions, Nature Communications (2019).
DOI: 10.1038/s41467-019-08754-5

Researchers have developed a new technology platform that can digitally count the amount of growth factor molecules entering individual cells, providing a direct cause-and-effect relationship between growth factors and cell behavior. The platform uses infrared fluorescent quantum dots to tag growth factors, allowing for accurate counting and visualization of binding using a three-dimensional microscope. In a recent study, the team used this technology to count epidermal growth factor (EGF) molecules binding to human triple-negative breast cancer cells and found that the amount of EGF binding is inversely proportional to drug efficacy, suggesting that signaling molecules in the tumor can enhance cancer cells' resistance to pharmaceutical agents. This breakthrough is expected to lead to a new understanding of cell signaling, how cells respond to drugs, and why cell populations become resistant to drugs, ultimately improving treatments for cancer.
Whether healthy or diseased, human cells exhibit behaviors and processes that are largely dictated by growth factor molecules, which bind to receptors on the cells. For example, growth factors tell cells when to divide, when to move, and when to die (a process known as apoptosis). When growth factor levels are too high or too low, or when cells respond irregularly to their directions, many diseases can result, including cancer.

"It is believed that cells respond to growth factors at extreme levels of sensitivity," said University of Illinois at Urbana-Champaign Bioengineering Associate Professor Andrew Smith. "For example, a single molecule will result in a major change in cell behavior."

In a recent paper published in Nature Communications, Smith reported the invention of a new technology platform that digitally counts, for the first time ever, the amount of growth factor entering an individual cell. Prior to this, researchers inferred growth factor binding based on how the receiving cells responded when the growth factor molecules were introduced.

"We showed the first direct cause-and-effect relationships of growth factors in single cells," he said. "We expect the outcomes to lead to a new understanding of cell signaling, how cells respond to drugs, and why cell populations become resistant to drugs, particularly toward improved treatments for cancer."

Smith's technology platform tags each growth factor with a single engineered (10 nanometer) infrared fluorescent quantum dot, which can then be viewed using a three-dimensional microscope. In their study, they counted how many epidermal growth factor (EGF) molecules bound to human triple-negative breast cancer cells that were pre-patterned on island-like surfaces. EGF molecules typically signal cell division and lead to tissue growth. Numerous cancers have mutations in their EGF receptors.

"We used quantum dots as the fluorescent probe because they emit a lot more light compared to other conventional fluorescent probes such as organic dyes, and we can tune their wavelengths by changing their chemical composition," said Bioengineering doctoral student Phuong Le, the lead author of the paper. "In our study, we demonstrated that quantum dots emitting light in the near-infrared wavelength allowed the most accurate counting of growth factors binding to cells."

According to Le, the team also treated the breast cancer cells with quantum dot-tagged EGF in the absence and presence of pharmaceutical drugs that inhibit EGF signaling in cells. "We found that the amount of EGF binding is inversely proportional to drug efficacy," Le said. "This finding is significant as it means that signaling molecules present in the cancer cells' tumor (a place where signaling molecules are often misregulated) can enhance the cancer cells' resistance to pharmaceutical agents."
Iron oxide nanoparticles for medical applications: Study clarifies effect of microstructure on magnetic properties (DOI: 10.1038/s41598-023-31294-4)

Iron oxide nanoparticles are often used in medical technology as contrast agents for magnetic resonance imaging or as transport agents for drugs in the bloodstream, for example in tumor therapy. For these applications, the nanoparticles have to be biocompatible and superparamagnetic: they must be strongly magnetizable in a magnetic field, yet lose their magnetization when the magnetic field is switched off. Using analytical high-resolution transmission electron microscopy, a team at TU Bergakademie Freiberg investigated how the magnetic properties of the nanoparticles can be improved further via microstructure design. The researchers published their results in the current issue of Scientific Reports.

Knowledge of the exact structure of the iron oxide nanoparticles, which are between 20 and 30 nanometers in size, helps to optimize the manufacturing process and to improve the magnetic properties of the particles systematically. Each particle consists of at least two superparamagnetic nanocrystalline cores and a shell that does not contribute to the magnetic properties. The maximum magnetization of the nanoparticles depends on the mutual orientation of the individual cores.

How well are the cores oriented to each other?

"The current state of research assumed that a strong alignment of magnetic moments in multi-core iron oxide nanoparticles is enabled by the same crystallographic orientation of individual cores. However, our analyses showed that this is not necessarily true," says Stefan Neumann, research associate at TU Bergakademie Freiberg and the first author of the publication. "Also other, but still specific, crystallographic orientation relationships of the cores can promote their magnetic interaction. Nevertheless, a fully random alignment of the cores deteriorates the magnetic properties of the nanoparticles," says Neumann.

"In order to be able to produce highly superparamagnetic iron oxide nanoparticles for future applications in medicine on demand, we need knowledge of their internal structure," says co-author Prof. David Rafaja, head of the Institute of Materials Science at TU Bergakademie Freiberg. "During the production of the nanoparticles, individual cores are formed first. When the cores get more time to align in the right way, then the magnetic properties of the nanoparticles can further be improved."

Background: Analyzing ultra-fine particles

The results were obtained within the priority program "MehrDimPart – Highly Specific and Multidimensional Fractionation of Fine Particle Systems with Technical Relevance." The aim of the research is to develop technological approaches that enable a controlled production of highly specific and technologically relevant particle systems with desired properties. In addition to the team from TU Bergakademie Freiberg, scientists from the Karlsruhe Institute of Technology also contributed to the publication. The basic research behind this work focused on the structure of the nanoparticles in order to optimize the production of particles with specific magnetic properties; a toxicological study was not carried out.

Researchers at TU Bergakademie Freiberg have investigated the magnetic properties of iron oxide nanoparticles, which are used as contrast agents for magnetic resonance imaging and as transport agents for drugs in the bloodstream.
Using high-resolution transmission electron microscopy, the team found that the nanoparticles' magnetic properties can be improved through microstructure design. Specifically, they discovered that the mutual orientation of the individual cores within each particle affects its magnetization, and that a strong alignment of magnetic moments is not necessarily achieved through the same crystallographic orientation of individual cores. Instead, other specific crystallographic orientation relationships can promote magnetic interaction, while a fully random alignment can deteriorate the magnetic properties. The research aims to optimize the production of particles with specific magnetic properties, which is crucial for their use in medical applications.

Abstract

Magnetic properties of superparamagnetic iron oxide nanoparticles are controlled mainly by their particle size and by their particle size distribution. Magnetic properties of multi-core iron oxide nanoparticles, often called iron oxide nanoflowers (IONFs), are additionally affected by the interaction of magnetic moments between neighboring cores. Knowledge about the hierarchical structure of IONFs is therefore essential for understanding their magnetic properties. In this contribution, the architecture of multi-core IONFs was investigated using correlative multiscale transmission electron microscopy (TEM), X-ray diffraction and dynamic light scattering. The multiscale TEM measurements comprised low-resolution and high-resolution imaging as well as geometric phase analysis. The IONFs contained maghemite with the average chemical composition \(\gamma\)-Fe\(_{2.72\pm 0.02}\)O\(_4\). The metallic vacancies located on the octahedral lattice sites of the spinel ferrite structure were partially ordered. Individual IONFs consisted of several cores, which frequently showed a specific crystallographic orientation relationship between direct neighbors. This oriented attachment may facilitate the magnetic alignment within the cores. Individual cores were composed of partially coherent nanocrystals having almost the same crystallographic orientation. The sizes of the individual constituents revealed by the microstructure analysis were correlated with the magnetic particle sizes obtained from fitting the measured magnetization curve with the Langevin function.

Introduction

In recent decades, magnetic iron oxide nanoparticles (IONPs) have emerged as one of the most promising nanomaterials for biomedical applications, for example as heat mediators for hyperthermia cancer treatment 1, as carriers for drug delivery 2 or as contrast agents in magnetic resonance imaging 3. The manifold applications of IONPs arise from a combination of excellent properties including superparamagnetic behavior, high saturation magnetization, good biocompatibility and the possibility to functionalize IONPs by attaching various bioactive molecules. IONPs usually consist of magnetite (Fe\(_3\)O\(_4\)) and/or maghemite (\(\gamma\)-Fe\(_2\)O\(_3\)), which crystallize in a spinel-like structure with tetrahedrally and octahedrally coordinated iron cations. Magnetite (space group \(Fd{\bar{3}}m\)) accommodates Fe\(^{2+}\) and Fe\(^{3+}\) cations on the Wyckoff positions 8b and 16c, respectively 4. This distribution of the cations guarantees charge neutrality.
However, in contrast to magnetite, some octahedral iron sites in maghemite must stay vacant to preserve the chemical composition Fe\(_2\)O\(_3\), which corresponds to Fe\(_{2.67}\)O\(_4\) in the spinel-like crystal structure. The oxygen sublattice is still fully occupied. It has been shown that the Fe vacancies tend to order, which leads to the formation of different crystal structures of \(\gamma\)-Fe\(_2\)O\(_3\). The crystal structure of \(\gamma\)-Fe\(_2\)O\(_3\) with randomly distributed vacancies can still be described as a simple cubic spinel with the space group \(Fd{\bar{3}}m\) 5. \(\gamma\)-Fe\(_2\)O\(_3\) with vacancies partially ordered only on one of two distinct octahedral sites was described in the space group \(P4_332\) 6, and \(\gamma\)-Fe\(_2\)O\(_3\) with vacancies partially ordered on one of three distinct octahedral sites was described in the tetragonal space group \(P4_32_12\), but with almost identical lattice parameters a and c 7. \(\gamma\)-Fe\(_2\)O\(_3\) with fully ordered vacancies was described as a tetragonal superstructure in the space group \(P4_12_12\) with \(c\approx 3a\) 8. Vacancy ordering and the tetragonal distortion of the cubic spinel unit cell were originally reported for 'microcrystalline' \(\gamma\)-Fe\(_2\)O\(_3\). However, the same phenomena were also observed in IONPs 9, 10, 11. The chemical composition (the [Fe]/[O] ratio) and the related ordering of vacancies influence the magnetic properties of IONPs. They depend strongly on the fractions of Fe\(_3\)O\(_4\) and \(\gamma\)-Fe\(_2\)O\(_3\) 12, 13, 14, because Fe\(_3\)O\(_4\) shows a higher saturation magnetization than \(\gamma\)-Fe\(_2\)O\(_3\) 15. The size of IONPs is another important factor affecting their magnetic properties. When the particle size decreases below a certain threshold value, IONPs become superparamagnetic 16, as required for many biomedical applications 17, 18, 19, 20. The size threshold value is around 25 nm for Fe\(_3\)O\(_4\) and 30 nm for \(\gamma\)-Fe\(_2\)O\(_3\) 21. Therefore, the size of IONPs needs to be tailored for the respective application in order to obtain the best possible combination of properties. However, when IONPs are significantly smaller, their saturation magnetization is reduced by a disorder of the spins either in the interior of the IONPs or in their surface layer. The spin disorder in the interior of the IONPs was explained by inhomogeneous ordering of the cation vacancies 22. The spin disorder in the surface layer of IONPs is usually explained by the incomplete coordination of superficial iron ions and the likely occurrence of structural defects at the IONP rim 23, 24, 25. At 300 K, a thickness of the disordered spin layer of 0.54 nm was reported by Sharifi Dehsari et al. 26, whereas a thickness of 1 nm was reported by Millan et al. 25 (for IONPs larger than 3 nm). Furthermore, the different [Fe]/[O] ratio in magnetite and maghemite is a reason for their different oxidation stability. Under aerobic conditions, maghemite is much more stable than magnetite 27. Thus, the exact phase composition and distribution of Fe\(_3\)O\(_4\) and \(\gamma\)-Fe\(_2\)O\(_3\) can vary, in particular if IONPs are in contact with oxygen. While a full oxidation of the iron oxide to \(\gamma\)-Fe\(_2\)O\(_3\) was observed for smaller particles 28, IONPs with intermediate sizes were found to contain non-stoichiometric Fe\(_{\langle 3-\delta \rangle}\)O\(_4\) with \(2.667<\langle 3-\delta \rangle <3\) 12, 28.
Large IONPs are generally assumed to have a core/shell structure with a Fe \(_3\) O \(_4\) core and an oxidized shell 12 , 13 , 28 , 29 , 30 , 31 . Recently, multi-core IONPs, often referred to as iron oxide nanoflowers (IONFs) 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , have attracted attention of many research groups, as they show superior properties with respect to their mono-core counterparts, for instance a significantly enhanced specific loss power in magnetic hyperthermia 32 , 33 , 34 , but also increased cytotoxicity to cancer cells when applying an alternating magnetic field 36 . Lartigue et al. 33 showed that the oriented attachment of individual cores building up the IONFs and the resulting continuity of their crystallographic orientation with a misalignment of the cores of only a few degrees 32 , 33 , 38 , 39 , 40 favor a magnetic ordering across the interface and consequently a cooperative magnetic behavior. As a result, IONFs show enhanced magnetic susceptibility and smaller surface disorder 33 , while their superparamagnetic behavior is preserved 40 . Still, the magnetic performance of IONFs depends on many different properties such as the size of the cores, the size of the entire particles 33 , 38 , 39 , 40 , 41 , 42 , the number of the cores within the particles 39 and their alignment 43 . While a lot of research has been dedicated to the optimization of the synthesis process of IONFs 35 , 39 and to the understanding of the magnetic interaction between individual cores within IONFs 44 , a profound description of the hierarchical structure of IONFs on the atomic scale, which is expected to influence the magnetic properties of IONFs significantly, has not been provided so far. In the present study, we describe the architecture and structure of IONFs on the nanoscopic and atomic scale, including crystallographic orientation relationships and structural coherence of the individual cores, and correlate these characteristics with the magnetic properties obtained from alternating gradient magnetometry (AGM) measurements. This contribution illustrates the capability of transmission electron microscopy (TEM) applied in high-resolution and low-resolution modes and networked by a correlative multiscale approach 45 , complemented by X-ray diffraction (XRD) and dynamic light scattering (DLS), to reveal detailed and statistically relevant information about the structure of IONFs on different length scales. Materials and methods IONFs investigated in this study are commercially available dextran-coated maghemite IONFs (synomag-D, micromod Partikeltechnologie GmbH, Rostock, Germany) with a nominal hydrodynamic diameter of 50 nm, which were synthesized by a polyol method adapted from Lartigue et al. 33 . Details on the synthesis of the IONFs can be found in the paper from Gavilán et al. 35 . For the TEM analysis, IONFs originally suspended in water were nebulized on a standard copper TEM grid covered with an amorphous carbon film. TEM experiments were carried out in a JEOL JEM-2200FS transmission electron microscope, which was equipped with a field emission gun operating at 200 kV, with a CESCOR probe aberration corrector (CEOS GmbH, Germany), with an ultra-high resolution objective lens ( \(C_S = 0.5\) mm), with an in-column energy filter ( \(\Omega\) -filter) and with a highly sensitive 2k \(\times\) 2k CCD camera (Gatan, Inc., USA). The \(\Omega\) -filter was used to remove inelastically scattered electrons from the beam and thus to improve the quality of the TEM images. 
The IONFs were characterized by high-resolution transmission electron microscopy (HRTEM), by scanning transmission electron microscopy (STEM) using an upper high-angle annular dark-field (HAADF) detector (EM-24630 UHADF, JEOL Ltd., Japan) and by selected area electron diffraction (SAED). Local diffraction patterns were obtained from HRTEM images using the fast Fourier transform (FFT). For the XRD experiments, IONFs were dried in a fume hood and then spread on a 'zero-background' sample holder, which was a \(\langle 5\,1\,0\rangle\)-oriented Si single crystal. XRD measurements were carried out in symmetrical Bragg-Brentano geometry on a Seifert-FPM URD6 diffractometer (Freiberger Praezisionsmechanik, Germany) that was equipped with a sealed X-ray tube with a Cu anode, with a Soller collimator in the primary beam and with a graphite monochromator in the diffracted beam. The Soller collimator reduced the axial divergence of the primary beam. The graphite monochromator eliminated diffraction lines stemming from the spectral line CuK\(_\beta\) and the fluorescence radiation of the sample. Measured XRD patterns were subjected to Rietveld refinement 46, 47 as implemented in the MAUD software 48.

DLS experiments were carried out in backscatter mode using a ZetaSizer Nano ZS (Malvern Panalytical, UK). The laser wavelength was set to 632.8 nm, the detected scattering angle to \(173^{\circ}\). In the DLS experiments, 100 \(\upmu\)L of IONF sample material (\(c_{\text{IONF}}\) = 0.1 g/L) was injected into the capillary cell. The temperature (25 \(^{\circ}\)C) was controlled by the device. Due to the low IONF concentration, the viscosity of pure water at 25 \(^{\circ}\)C (\(\eta_{\text{L}}\) = 0.89 mPa·s) was assumed when the results of the DLS experiments were evaluated.

AGM measurements were performed at room temperature in a gradient magnetic field that was generated by two magnetic coils. The maximum intensity of the external magnetic field ranged between \(-4\cdot\)10\(^5\) A/m and \(+4\cdot\)10\(^5\) A/m. The magnetic force induced by the external magnetic field was measured by a piezoelectric sensor. As the magnetic properties of the cores were of interest, the dextran shell of the IONFs was removed prior to the AGM measurements. In this preparation step, 300 \(\upmu\)L of a 25 g/L IONF suspension was mixed with 700 \(\upmu\)L of pure ethanol and subsequently evaporated under stirring for 60 min at 95 \(^{\circ}\)C and at 300 min\(^{-1}\). After evaporation, 1 mL of pure ethanol was added in order to resuspend the dry IONFs. The suspension was stirred again at 300 min\(^{-1}\) and 95 \(^{\circ}\)C for 60 min. After the second ethanol evaporation step, a dry, grey IONF powder was obtained. Approximately 1.5 to 3.0 mg of the powder was fixed between two adhesive films to produce a sample suitable for the AGM measurements. This sample was attached to a pendulum connected to the piezoelectric sensor. The measured magnetization curve was normalized to the sample mass and volume in order to determine characteristic magnetic values, i.e., the specific remanence and the specific saturation magnetization.

Results

Phase composition and vacancy ordering

Figure 1: (a) XRD pattern of the IONFs under study. Rietveld refinement was carried out using space group \(Fd{\bar{3}}m\). (b) Dependence of the cubic lattice parameter of IONPs on their stoichiometry. The horizontal dashed lines mark the lattice parameters of Fe\(_3\)O\(_4\) and \(\gamma\)-Fe\(_2\)O\(_3\).
The Vegard dependence (ascending gray dashed line) was calculated for large crystallites (\(D\rightarrow \infty\) in Eq. (1)). The black crosses 49, blue circles 29, green triangles 13 and orange squares 14 represent values taken from the literature; the red pentagon with error bars marks the lattice parameter from the present work.

As mentioned in the Introduction, the transition between Fe\(_3\)O\(_4\) and \(\gamma\)-Fe\(_2\)O\(_3\) is accompanied by a change in the oxidation state of the iron cations, which induces the formation and ordering of vacancies on the iron positions. Although the ordering of vacancies has to be described by different space groups (\(Fd{\bar{3}}m\), \(P4_332\), \(P4_32_12\)) from the crystallographic point of view 5, 6, 7, the impact of vacancy ordering on the powder XRD pattern is rather weak 11, 50. The possible tetragonal distortion of the spinel-like cubic cell is small and thus hardly visible in powder XRD patterns, in particular in XRD patterns of NPs, which produce strongly broadened diffraction lines. Still, it has been demonstrated by many authors that the lattice parameter of IONPs with cubic or pseudo-cubic spinel structure depends linearly on the mole fraction of magnetite in maghemite 13, 14, 29, 49. Cervellino et al. 30 extended this Vegard-like dependence to account for the effect of the crystallite size on the lattice parameter: $$\begin{aligned} a = \bigl [(1-x_{\textrm{Fe}_3\textrm{O}_4})\cdot a_{\gamma \text{- }\textrm{Fe}_2\textrm{O}_3}+x_{\textrm{Fe}_3\textrm{O}_4}\cdot a_{\textrm{Fe}_3\textrm{O}_4}\bigr ](1-\Omega /D) \end{aligned}$$ (1) In Eq. (1), \(a_{\gamma \text{- }\textrm{Fe}_2\textrm{O}_3} = 0.83474\) nm 6 and \(a_{\textrm{Fe}_3\textrm{O}_4} = 0.83941\) nm 51 are the terminal lattice parameters of maghemite and magnetite, respectively, \(x_{\textrm{Fe}_3\textrm{O}_4}\) is the mole fraction of magnetite in maghemite, \(\Omega\) is an empirical constant and D is the NP size. The 'correction factor' \((1-\Omega /D)\) describes the expansion of the lattice parameter in very small NPs, which results from surface relaxation effects 52, 53, 54. Cervellino et al. 30 determined \(\Omega\) to be about \(-2.05\times 10^{-3}\) nm. However, the effect of the NP size is apparent only for very small particles. Rietveld analysis of the XRD pattern of the IONFs under study (Fig. 1a), which was carried out assuming a single-phase nature of the Fe\(_{\langle 3-\delta \rangle}\)O\(_4\) sample and the space group \(Fd{\bar{3}}m\), revealed the lattice parameter 0.8353(3) nm and a crystallite size of (\(22\pm 3\)) nm. In the Vegard-like dependence from Cervellino et al. 30 (Fig. 1b), the refined lattice parameter (0.8353 nm) corresponds to the mole fraction \(x_{\textrm{Fe}_3\textrm{O}_4}\) = 0.12(6) and to the stoichiometric coefficient \(\langle 3-\delta \rangle\) = 2.71(2) of Fe\(_{\langle 3-\delta \rangle}\)O\(_4\). Rietveld refinement of the site occupancy factors (SOFs) of the iron cations indicated that the majority of vacancies occurs on the octahedral sites 16c [SOF = 0.867(8)], while the tetrahedral sites 8b are almost fully occupied [SOF = 0.992(8)]. The oxygen anion sites 32e were assumed to be fully occupied [SOF = 1]. These SOFs correspond to the mole fraction \(x_{\textrm{Fe}_3\textrm{O}_4}\) = 0.18(1) and to the stoichiometry \(\langle 3-\delta \rangle =2.726(2)\) of Fe\(_{\langle 3-\delta \rangle}\)O\(_4\).
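As a numerical cross-check of equation (1) and of the SOF-based stoichiometry, a short Python sketch follows (values taken from the text; this is an illustration, not the refinement itself):

```python
# Invert the size-corrected Vegard relation, Eq. (1), for the magnetite mole fraction x.
a_gamma, a_mag = 0.83474, 0.83941   # nm, terminal lattice parameters
omega, D = -2.05e-3, 22.0           # nm, empirical constant and crystallite size
a_meas = 0.8353                     # nm, refined lattice parameter

x = (a_meas / (1.0 - omega / D) - a_gamma) / (a_mag - a_gamma)
stoich_vegard = (1.0 - x) * 8.0 / 3.0 + x * 3.0   # Fe per 4 O; ~2.70
# x ~ 0.10, consistent with the quoted 0.12(6) within its uncertainty.

# Stoichiometry from the refined SOFs: per formula unit there is one
# tetrahedral (8-fold) site and there are two octahedral (16-fold) Fe sites.
sof_tet, sof_oct = 0.992, 0.867
stoich_sof = 1.0 * sof_tet + 2.0 * sof_oct        # = 2.726, i.e. Fe2.726O4
print(x, stoich_vegard, stoich_sof)
```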
It should be mentioned that although iron vacancies are in general expected to occur exclusively on the octahedral sites 6, 7, 11, Cooper et al. 50 showed that in IONPs the number of tetrahedrally coordinated cation vacancies increases when the particle size decreases below 8 nm.

Figure 2: (a) SAED pattern of a large ensemble of IONFs. (b) HRTEM image of a single IONF core. (c) FFT of the HRTEM image shown in (b). (d) Amplitude image of the reflection 102. Diffraction patterns in (a) and (c) are indexed using space group \(P4_332\). Reflections associated with vacancy ordering are marked in yellow.

The SAED pattern (Fig. 2a) and the FFT (Fig. 2c) of the HRTEM image (Fig. 2b) show superstructure reflections (marked in yellow). Their presence indicates that the vacancies in the IONFs are ordered to a certain extent, as it would correspond, e.g., to the space group \(P4_332\). In order to rule out a tetragonal distortion of the cubic unit cell, which was reported by Jørgensen et al. 9 and Andersen et al. 11 for IONPs with ordered vacancies, the XRD pattern from Fig. 1a was alternatively refined using the tetragonal space group \(P4_32_12\). However, this Rietveld refinement revealed the same lattice parameters \(a = c = 0.8353(5)\) nm, as no noticeable tetragonal distortion was observed. In order to find out whether the vacancies are ordered throughout the whole particle or just locally, the amplitude images of the lattice fringes \(\{1\,0\,2\}\) obtained from geometric phase analysis (GPA) 55, 56 were taken into consideration. As the lattice fringes \(\{1\,0\,2\}\) only appear in crystal structures with ordered vacancies (space group \(P4_332\) or \(P4_32_12\)), the magnitude of the local amplitudes obtained from GPA is a measure of the amount of ordered vacant octahedral positions. In the amplitude image (Fig. 2d), bright colors correspond to a higher amount of ordered vacancies, dark colors to a lower amount of ordered vacancies. A highly non-homogeneous distribution of ordered vacancies is apparent. Complementarily to the results of XRD, which proved that the IONFs under study are almost entirely oxidized to maghemite (cf. Fig. 1b), the amplitude image from Fig. 2d shows that the vacancies are ordered only in few regions, which form subdomains with a size of a few nanometers.

Arrangement and coherence of individual cores in the IONFs

Although separated cores were found occasionally for the IONFs under study (Fig. 2b), the majority of IONFs consists of agglomerated cores (Fig. 3). Several authors reported that individual cores within IONFs tend to have the same crystallographic orientation 32, 33, 35, 43. The cores in the IONFs under study possess distinct crystallographic orientation relationships, but the majority of them were mutually twisted. The IONF in Fig. 3a is composed of two cores, which are attached along their lattice planes \((2\,2\,0)\) and mutually twisted by about \(35.3^{\circ}\) around the crystallographic direction \([1\,1\,0]\). The twist angle was determined from the angle between the crystallographic directions \([{\bar{1}}\,1\,1]\) and \([{\bar{1}}\,1\,4]\), which were assigned to the direction of the primary electron beam for cores A and B, respectively (Fig. 3b and c). Note that the angle of \(35.3^{\circ}\) corresponds to the smallest angle between the crystallographic directions \(\langle 1\,1\,1\rangle\) and \(\langle 1\,1\,4\rangle\).
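The quoted twist angles are simply angles between lattice directions, which in a cubic crystal follow from the Euclidean scalar product. A short Python sketch reproducing the values for Fig. 3 (and for Fig. 4, discussed below):

```python
import numpy as np

def angle_between_directions(u, v):
    """Angle (degrees) between lattice directions [uvw] in a cubic crystal."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    cosine = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0)))

print(angle_between_directions([-1, 1, 1], [-1, 1, 4]))  # ~35.3 deg, cores in Fig. 3
print(angle_between_directions([-1, 1, 2], [-2, 1, 7]))  # ~19.2 deg, cores in Fig. 4
```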
The filtered inverse FFT image showing strongly magnified \((2\,2\,0)\) lattice fringes (Fig. 3d) reveals some discontinuities at the interface of the cores, which resemble dislocations. The presence of these crystal structure defects is confirmed by the strain field component perpendicular to the \((2\,2\,0)\) lattice planes of the cores (Fig. 3e), which corresponds to the strain distribution that is typically observed around the cores of edge dislocations 56, 57.

Figure 3: (a) HRTEM image of a double-core IONF. The outer boundaries of the individual cores and their interface are indicated by a solid line and by a dashed line, respectively. Panels (b) and (c) show local FFTs of the cores labeled A and B in (a), respectively. In panel (b), reflections associated with the ordering of vacancies are marked by arrows. (d) Filtered inverse FFT showing the \((2\,2\,0)\) lattice fringes from the region in the middle of panel (a) that is marked by a square. (e) Strain field component perpendicular to the \((2\,2\,0)\) lattice planes of the cores as determined by GPA.

Figure 4: (a) HRTEM image of a double-core IONF. The outline of the IONF, the interface between the two cores, and the interface between individual nanocrystals within the larger core are indicated by a solid, dashed and dotted line, respectively. Panels (b) and (c) show local FFTs of the cores labeled A and B in (a), respectively. The spots marked by yellow circles were used for GPA. Reflections associated with the ordering of vacancies are marked by arrows in (b). The strain field components \(\varepsilon _{xx}\) and \(\varepsilon _{yy}\) and the rigid rotation field \(\omega _{xy}\) determined by GPA are shown in panels (d), (e) and (f), respectively. The coordinate system is provided in the lower left corner of panel (a).

Another double-core IONF is depicted in Fig. 4a. Also in this case, the individual cores possess a specific orientation relationship. They share the lattice planes \(\{3\,1\,1\}\) and are mutually twisted by about \(19.2^{\circ}\), which is the angle between the crystallographic directions \([{\bar{1}}\,1\,2]\) and \([{\bar{2}}\,1\,7]\) (cf. Fig. 4b,c). Moreover, these cores share additional lattice planes, e.g., \((0\,4\,{\bar{2}})_{\text {A}} \parallel (2\,4\,0)_{\text {B}}\), \((0\,{\bar{4}}\,2)_{\text {A}} \parallel ({\bar{2}}\,{\bar{4}}\,0)_{\text {B}}\), \((3\,{\bar{3}}\,3)_{\text {A}} \parallel (1\,{\bar{5}}\,1)_{\text {B}}\) or \(({\bar{3}}\,3\,{\bar{3}})_{\text {A}} \parallel ({\bar{1}}\,5\,{\bar{1}})_{\text {B}}\). Note that the lattice planes \(\{3\,3\,3\}\) and \(\{5\,1\,1\}\) have the same interplanar spacing in cubic structures. The coincidence of several lattice planes is a possible reason for the shape of the interface between the individual cores. In contrast to the straight interface between the cores from Fig. 3, which is more or less perpendicular to the shared lattice planes \((2\,2\,0)\), the interface between the cores in Fig. 4 is rather curved, because its direction is not restricted by a single coinciding family of lattice planes. More detailed information about the local misorientations of the cores was obtained from GPA 55, 56 that was performed on the 'non-colinear' reflection spots \(3\,1\,1_\text {A}\parallel 3\,{\bar{1}}\,1_\text {B}\) and \(3\,{\bar{3}}\,3_\text {A}\parallel 1\,{\bar{5}}\,1_\text {B}\). The strain field components \(\varepsilon _{xx}\) and \(\varepsilon _{yy}\) shown in Fig. 4d,e,
which represent the strain parallel and perpendicular to the \(\{3\,1\,1\}\) lattice planes of the cores, reveal that the lattice strain is primarily concentrated at the interface of the cores, whereas no apparent strain seems to be present within the cores. The rigid rotation field \(\omega _{xy}\) shown in Fig. 4f disclosed that the cores A and B are additionally twisted along the viewing direction by about \(2^{\circ}\). Moreover, Fig. 4f suggests that core B is further fragmented into smaller nanocrystals (NCs) that are slightly twisted with respect to each other along the viewing direction by about \(0.3^{\circ}\). Thus, the size of the primary building blocks within the IONFs is actually smaller than 10 nm.

Figure 5: (a) Dependence of the XRD line broadening expressed in reciprocal space units, \({\text {FWHM}}({\text {rad}}) \cdot \cos \theta /\lambda\), on the magnitude of the diffraction vector, \(|\textbf{q}| \equiv q = 4\pi \sin \theta / \lambda\). Black circles represent experimental data; the black solid line shows the dependence of the line broadening on \(|\textbf{q}|\) calculated for partially coherent NCs according to Rafaja et al. 58. (b) Schematic representation of the effect of the mutual misorientation of crystallites by the angle \(\omega\) in direct space on the rotation of their reciprocal lattices, adapted from Rafaja et al. 59. The reciprocal lattice points of two different crystallites are shown by filled and empty circles, respectively. The overlap of the reciprocal lattice points (hatched areas) represents the degree of partial coherence of the crystallites, which decreases with their increasing distance from the origin of the reciprocal lattice 58, 59. Solid ellipses mark two examples of overlapping pairs of reciprocal lattice points. The dashed ellipse marks separated (non-coherent) reciprocal lattice points.

The fragmentation of the IONF cores was confirmed by XRD. The XRD line broadening that was obtained by fitting individual XRD lines with Pearson VII functions 60, 61 increased steeply at \(|\textbf{q}| \approx 75\,\textrm{nm}^{-1}\) (Fig. 5a), which is an indicator of the partial crystallographic coherence of adjacent NCs 58, 59. In previous reports 58, 59, it was shown that adjacent crystallites can be partially coherent for XRD if they are sufficiently small and if they possess very similar crystallographic orientations. Such crystallites cannot be distinguished from each other by XRD and appear larger. The degree of the partial coherence corresponds to the volume of the overlapping parts of the reciprocal lattice points (Fig. 5b), which depends on the size of the reciprocal lattice points (approximately the reciprocal value of the size of individual NCs), on the misorientation of neighboring NCs (\(\omega\)) and on the magnitude of the diffraction vector. A consequence of the partial coherence of NCs is a 'narrowing' of the XRD lines that appears at short diffraction vectors. The dependence from Fig. 5a was described by a model from Rafaja et al. 58. The refinable parameters of the model were the size of the crystallites and their local misorientation. The cluster size corresponds to the reciprocal value of the XRD line broadening extrapolated to \(|\textbf{q}| = 0\). The refinement revealed a cluster size of 16 nm, a primary crystallite size of 7 nm and a crystallite misorientation of \(0.25^{\circ}\).
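The axes of Fig. 5a follow from each measured line position and width by a simple conversion. A short sketch (the CuKα1 wavelength is an assumption based on the Cu anode named in the methods):

```python
import numpy as np

def to_reciprocal_units(two_theta_deg, fwhm_deg, wavelength_nm=0.15406):
    """Convert a diffraction line at 2-theta with width FWHM (both in degrees)
    into |q| = 4*pi*sin(theta)/lambda and FWHM(rad)*cos(theta)/lambda,
    i.e. the abscissa and ordinate of Fig. 5a."""
    theta = np.radians(np.asarray(two_theta_deg, float) / 2.0)
    fwhm = np.radians(np.asarray(fwhm_deg, float))
    q = 4.0 * np.pi * np.sin(theta) / wavelength_nm
    broadening = fwhm * np.cos(theta) / wavelength_nm
    return q, broadening
```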
The cluster size, the crystallite size and the misorientation of crystallites agree very well with the parameters determined from HRTEM and GPA (cf. Fig. 4).

Statistical determination of particle, core and shell size

The results of the HRTEM and XRD experiments discussed above confirmed that the majority of IONFs under study consists of agglomerates of nanocrystalline cores having specific mutual crystallographic orientations. However, these techniques cannot reveal statistically reliable information about the size distribution of the respective objects. HRTEM is typically applied to image a few particles; thus, its statistical reliability is low. XRD probes a significantly larger volume of the sample. However, the crystallite size distribution is usually obtained from the shape of the XRD lines assuming a certain shape of the distribution function 62. This approach is not easily applicable for partially coherent NCs, because the partial coherence of adjacent NCs affects the shape of the XRD lines in addition to the size distribution and microstrain (variation of the interplanar spacing) 63.

Figure 6: Schematic representation of the multi-stage segmentation routine used for the determination of the particle size and core size distributions. (a) Original low-magnification HAADF-STEM image of the IONFs. (b) HAADF-STEM image segmented into individual particles by the semi-automatic segmentation routine from Neumann et al. 45. (c) Single IONF segmented into several cores by a shape-based segmentation routine. (d) Binary image of a single segmented IONF. (e) Shape of the IONF and its individual cores approximated by ellipses based on the DTECMA algorithm 64. (f) Shape markers determined on the basis of the ellipses from (e). (g) Outer Euclidean distance transform of the shape markers from (f) used as the marking function for the watershed segmentation of the IONF into its cores.

In order to gain statistical insights into the size distribution of the entire IONFs and the individual cores, low-magnification HAADF-STEM imaging was employed. This technique allows 50-100 particles to be visualized in a single low-magnification HAADF-STEM image. The HAADF-STEM images were evaluated using a multi-stage segmentation routine based on the watershed algorithm 65. In the first stage of the routine, accumulated IONFs (Fig. 6a) were segmented into individual particles (Fig. 6b) by a semi-automatic segmentation routine 45, 66 (a condensed sketch of this stage is given below). For this segmentation step, the image intensity was adjusted, the noise was reduced using a Gaussian filter, and the pre-processed images were binarized and morphologically smoothed 45. Finally, individual particles were segmented using a marker-based watershed transformation. The markers were determined based on the extended minima transform of the inverted inner Euclidean distance transform of the pre-processed binary image 67. The result of the segmentation routine was inspected, and critical regions of the image were segmented manually. From the segmented images (Fig. 6b), the area-equivalent diameter \(d_A\) of individual IONFs was determined using $$d_A = \sqrt{\frac{4A}{\pi }}$$ (2) where A is the area of the IONFs. In the second step of the multi-stage segmentation routine, every individual IONF was segmented into its cores by a segmentation routine that considers mainly the IONF shape (Fig. 6c). When an IONF consists of coalesced cores, its contour shows concave points (Fig. 6d).
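A condensed sketch of the first segmentation stage and of equation (2), using scikit-image (a simplified stand-in for the semi-automatic routine of refs. 45, 66, not the authors' code; the shape-based core splitting described next is not reproduced):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
from skimage.measure import regionprops

def segment_particles(image, sigma=1.0, min_distance=5):
    """Denoise, binarize and split touching IONFs by a marker-based
    watershed on the inner Euclidean distance transform."""
    smoothed = gaussian(image, sigma=sigma)
    binary = smoothed > threshold_otsu(smoothed)
    distance = ndi.distance_transform_edt(binary)
    coords = peak_local_max(distance, min_distance=min_distance, labels=binary)
    mask = np.zeros(distance.shape, dtype=bool)
    mask[tuple(coords.T)] = True
    markers, _ = ndi.label(mask)
    return watershed(-distance, markers, mask=binary)

def area_equivalent_diameters(labels):
    """Eq. (2): d_A = sqrt(4 A / pi) for every segmented particle."""
    return [np.sqrt(4.0 * p.area / np.pi) for p in regionprops(labels)]
```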
Individual cores were localized using the Distance Transform-based Ellipse Contour Matching Algorithm (DTECMA) 64 that was applied to binary images of individual IONFs (Fig. 6e). This algorithm identifies overlapping objects (in this case, individual cores of an IONF) by approximating their two-dimensional projections with ellipses. Afterwards, shape markers were determined based on the extended minima transform of the inverted inner Euclidean distance transform 67 of the binary images of the individual ellipses determined by the DTECMA algorithm (Fig. 6f). Finally, the outer Euclidean distance transform of the shape markers (Fig. 6g) was determined and used as the marking function for the watershed segmentation of the IONFs into their cores. The segmentation of the IONFs into their cores was controlled by adjusting the parameters of the DTECMA algorithm, i.e., the distance threshold influencing the extraction of concave points and the regularization parameter balancing the number of ellipses, as well as by adjusting the threshold value of the extended minima transform that was used to determine the shape markers. The size of the individual cores was then determined analogously to the size of the IONFs (Eq. 2). The size distributions of the IONFs and the individual cores determined from HAADF-STEM images using the multi-stage segmentation routine are depicted in Fig. 7 together with the size distribution of the hydrodynamic diameter of the IONFs that was determined using DLS. In order to be able to compare the size distribution determined using DLS with the size distributions derived from HAADF-STEM images, the intensity distribution density \(q_{6}(D_h)\) that is primarily provided by DLS must be converted to the number distribution density \(q_{0}(D_h)\) using 68 $$q_{0}(D_h) = \frac{D_h^{-6}q_{6}(D_h)}{\int _{D_{h,\text {min}}}^{D_{h,\text {max}}}D_h^{-6}q_{6}(D_h)\text {d}D_h}$$ (3) Note that the hydrodynamic diameter of the IONFs corresponds to their size including the dextran shell. As HAADF-STEM imaging uses electrons scattered by atomic nuclei to high angles, it is highly sensitive to the atomic number of the scattering atoms 69. For this reason, HAADF-STEM imaging visualizes IONFs almost without their light dextran shell. Moreover, the dextran shell degrades quickly under the impact of the high-energy electron beam. Consequently, the size of IONFs determined using HAADF-STEM (\(D_P^{\text {STEM}}\)) is smaller than the hydrodynamic diameter (\(D_h^{\text {DLS}}\)) determined using DLS 37.

Figure 7: Number distribution density (\(q_0\)) of the size of the IONFs (\(D_P^{\text {STEM}}\)), their cores (\(D_C^{\text {STEM}}\)) and the hydrodynamic diameter (\(D_h^{\text {DLS}}\)) as determined using HAADF-STEM and DLS, respectively.

The mean sizes \(\langle D^{\textrm{DLS}}_h\rangle\) and \(\langle D^{\textrm{STEM}}_P\rangle\) and their standard deviations (\(\sigma\)), which are summarized in Table 1, were determined from the obtained size distributions (Fig. 7) using
$$\langle D\rangle = \int _{D_{\text {min}}}^{D_{\text {max}}}Dq_0(D)\text {d}D$$ (4) and $$\sigma =\sqrt{\int _{D_{\text {min}}}^{D_{\text {max}}}\left[ (D-\langle D\rangle )^2 q_0(D)\right] \text {d}D}$$ (5) The difference between the mean hydrodynamic diameter, \(\langle D^{\textrm{DLS}}_h\rangle = (29\pm 8)\) nm, and the mean diameter of the IONFs determined by HAADF-STEM, \(\langle D^{\textrm{STEM}}_P\rangle = (20\pm 4)\) nm, reveals an estimate of the mean thickness of the dextran shell (\(\approx 5\) nm). The mean IONF size obtained from HAADF-STEM, \(\langle D^{\textrm{STEM}}_P\rangle = (20\pm 4)\) nm, agrees very well with the mean IONF size obtained from HRTEM, \(\langle D^{\textrm{HRTEM}}_P\rangle = (19\pm 4)\) nm. A good agreement was also achieved for the mean size of the cores, \(\langle D_C\rangle\), determined using HAADF-STEM and HRTEM. Additionally, HRTEM revealed the size of the slightly twisted core fragments, \(\langle D_F\rangle\), which was visible by XRD as the mean size of individual crystallites (Fig. 5). Note that \(\langle D^{\textrm{XRD}}_F\rangle\) is slightly smaller than \(\langle D^{\text {HRTEM}}_F\rangle\), because XRD recognizes mainly the undisturbed interior of the NCs, while their possibly defect-rich rim contributes rather to diffuse scattering than to the diffraction lines. Thus, the difference between \(\langle D^{\text {HRTEM}}_F\rangle\) and \(\langle D^{\text {XRD}}_F\rangle\) can be understood as a first estimate of the thickness of the disordered surface layer of the core fragments, which is approximately 1 nm. The 'cluster size' of approx. 16 nm obtained from XRD corresponds to the size of agglomerates of partially coherent twisted domains. Its value is between the size of the cores \(\langle D_C\rangle\) and the size of the IONFs \(\langle D_P\rangle\) (Table 1), which illustrates once more the crystallographic partial coherence of the cores within IONFs discussed above.

Table 1: Hydrodynamic diameter \(\langle D_h\rangle\), particle diameter \(\langle D_P\rangle\), core diameter \(\langle D_C\rangle\) and diameter of the core fragments \(\langle D_F\rangle\) as determined by DLS, low-magnification HAADF-STEM, HRTEM and XRD.

Influence of the structure of the IONFs on their magnetic properties

The magnetization curve of the IONFs measured by AGM and normalized to the sample density is depicted in Fig. 8a. The IONFs show superparamagnetic behavior that is characterized by negligible remanent magnetization and coercive field. The normalized (mass) saturation magnetization was (\(50\pm 1\)) Am\(^2\)/kg, which is lower than the saturation magnetization of bulk maghemite (74.3 Am\(^2\)/kg) 15. Assuming that the saturation magnetization is reduced by the spin disorder in the surface layer of the magnetic particles, the ratio between the thickness of the disordered spin layer ( t ) and the particle size ( D ) can be calculated using the relation 24, 25, 26 $$M_S = M_S^{\textrm{bulk}} \left( 1 - \frac{6t}{D}\right)$$ (6) For \(M_S = (50\pm 1)\) Am\(^2\)/kg and \(M_S^{\textrm{bulk}} = 74.3\) Am\(^2\)/kg, t / D is \((0.055 \pm 0.001)\). A disordered spin layer having a thickness of 1 nm 25 would be consistent with a particle size of 18 nm, which agrees best with \(\langle D_P\rangle\) from Table 1.
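The numbers quoted above follow directly from equation (6); a brief numerical check (values taken from the text):

```python
# Eq. (6): M_S = M_S_bulk * (1 - 6 t / D), solved for t/D and for D at t = 1 nm.
M_s, M_s_bulk = 50.0, 74.3                          # Am^2/kg, measured and bulk maghemite
t_over_D = (1.0 - M_s / M_s_bulk) / 6.0             # ~0.055
D_for_t_1nm = 6.0 * 1.0 / (1.0 - M_s / M_s_bulk)    # ~18 nm
print(t_over_D, D_for_t_1nm)
```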
A disordered spin layer having a thickness of 0.54 nm 26 would correspond to a particle size of 10 nm, which is between \(\langle D_F\rangle\) and \(\langle D_C\rangle\).

Figure 8: (a) Magnetization curve of the IONFs as measured by AGM (crosses) and the Langevin fits using three log-normal functions (solid blue line) and using the Kaczmarz method 70, 71 (dashed red line). (b) Distributions of the magnetic particle size corresponding to the fits in panel (a).

For modelling of the measured magnetization curve, two approaches were used. Both are based on the approximation of the M ( H ) dependence by the Langevin function: $$M(H) = M_S{\mathcal {L}}(\xi )$$ (7) where \(M_S\) is the saturation magnetization and \({\mathcal {L}}(\xi ) = \coth (\xi ) - 1/\xi\). The parameter \(\xi\) is related to the (volume) saturation magnetization (\(M_S\)), to the strength of the external magnetic field ( H ), to the permeability of vacuum (\(\mu _0\)), to the Boltzmann constant (\(k_B\)) and to the sample temperature ( T ) 71, 72: $$\xi (H) = \frac{M_S\pi d_c^3H\mu _0}{6k_BT}$$ (8) Note that in Eq. (8), \(M_S\) has the unit of A/m, like H. As the recorded signal is a superposition of the magnetizations of all particles in the sample, the size distribution of the magnetic particles must be taken into account. In the first modelling approach, it was assumed, in analogy to previous reports 33, 35, 72, 73, 74, 75, that the size distribution can be described by log-normal functions. As the microstructure analyses revealed the existence of three different types of magnetic 'objects' (Table 1), a sum of three log-normal functions was employed for the Langevin fit: $$P(d_c) = \sum _{i=1}^3w_i\frac{1}{\sqrt{2\pi }\sigma _i d_c}\exp \left[ -\frac{\left( \ln d_c - \mu _i\right) ^2}{2\sigma _i^2}\right]$$ (9) The refinable parameters were the weights of the log-normal functions (\(w_i\)), the medians of the magnetic particle sizes (\(\mu _i\)) and the widths of the log-normal functions (\(\sigma _i\)). The fitting function based on Eq. (7) had the form: $$M(H) = M_S\int _0^\infty P(d_c){\mathcal {L}}(\xi )\text {d}d_c$$ (10) The best fit of the magnetization function (Fig. 8a) was obtained for magnetic particle sizes of (\(6\pm 4\)) nm, (\(12\pm 1\)) nm and (\(20\pm 5\)) nm, which agree well with the size of the fragments (\(D^{\textrm{HRTEM}}_F\) and \(D^{\textrm{XRD}}_F\)), with the size of the cores (\(D^{\textrm{STEM}}_C\) and \(D^{\textrm{HRTEM}}_C\)) and with the size of the IONFs (\(D^{\textrm{STEM}}_P\) and \(D^{\textrm{HRTEM}}_P\)) from Table 1, respectively. The resulting size distribution function obtained from the Langevin fit is depicted in Fig. 8b. The sizes of the very small particles (fragments of the cores and the cores themselves) determined from the magnetization curve are slightly smaller than the corresponding sizes \(D_F\) and \(D_C\) determined using HAADF-STEM, HRTEM and XRD, as expected, because the magnetization of small particles is reduced by a disordered spin layer at their surface 24, 25, 26. In the second approach, the particle size distribution was substantially less constrained, as the shape of the distribution was determined using Kaczmarz' iterative method 70, 71 without any a priori assumption (except keeping the values of the distribution function non-negative).
It can be seen from Fig. 8a that both approaches, which are the Langevin fit with three log-normal functions corresponding to the size distributions of the whole particles (IONFs), their cores and their fragments, and the Langevin fit using Kaczmarz' method, yield almost the same magnetization curve despite the relatively large differences in the corresponding size distributions. This shows a relatively low sensitivity of the magnetization curve to the exact particle size distribution and suggests that additional information obtained from structure analysis, e.g., information about the number of different magnetic objects, can help to improve the reliability of the size distribution.

Discussion

In analogy with the paper from Gavilán et al. 35, where a hierarchical structure of similarly synthesized IONFs was characterized and described by a multimodal size distribution, the IONFs under study were found to be composed of agglomerated maghemite NCs (Fig. 9). Our XRD and HRTEM analyses identified the NCs as elementary blocks forming the magnetic cores and IONFs. The mean sizes of the NCs were \(\langle D^{\text{XRD}}_F \rangle = (7 \pm 1)\) nm and \(\langle D^{\text{HRTEM}}_F \rangle = (9 \pm 3)\) nm, cf. Table 1. The difference in the size of the core fragments obtained from XRD and HRTEM is connected with the different sensitivities of the analytical techniques to the structural disorder at the surface of the NCs. XRD recognizes only the coherent part of the NCs as the core fragments; therefore, it reveals the size of their undisturbed interior, while HRTEM sees the core fragments including their rim, in particular for isolated NCs. The NCs were also recognized by the Langevin fit of the magnetization curve. Their 'magnetic' size was \((6 \pm 4)\) nm. The fraction of NCs determined from the magnetic measurement was relatively low (Fig. 8b), because the majority of neighboring NCs possessed almost the same crystallographic orientation, as revealed by HRTEM (Fig. 2b) and as concluded from the coherence phenomena affecting the XRD line broadening (Fig. 5).
The misorientation of the NCs within the cores was below \(1^{\circ}\), as revealed by GPA of the HRTEM images (Fig. 4) and by XRD (Fig. 5). This kind of crystallographic coherence facilitates coupling of the magnetic moments in individual NCs forming the cores 33,42. Thus, the magnetic measurement recognized many more cores than isolated NCs (Fig. 8). The size of the cores can be determined most reliably using HRTEM in combination with local orientation analysis (FFT/HRTEM or GPA). HAADF-STEM may overestimate the size of the cores, because it uses a shape-based segmentation routine to identify individual cores in the IONFs (Fig. 6). However, this routine cannot distinguish parts of the IONFs with different crystallographic orientations from each other, as HRTEM complemented by FFT or GPA can. XRD can only estimate the size of the cores from the size of the clusters composed of partially coherent NCs (core fragments). The 'magnetic' size of the cores, \(\langle D^{\text{AGM}}_C \rangle = (12 \pm 1)\) nm, refers to the size of magnetic domains with uniform orientation of spin moments. Thus, half of the difference between \(\langle D^{\text{HRTEM}}_C \rangle = (13 \pm 3)\) nm and \(\langle D^{\text{AGM}}_C \rangle\) can be understood as the thickness of the disordered spin layer of the cores. According to Eq. (6), a disordered spin layer having a thickness of \(\approx 0.5\) nm would reduce the saturation magnetization of the cores from 74.3 Am\(^2\)/kg 15 to 57.1 Am\(^2\)/kg, which approaches the saturation magnetization of 50 Am\(^2\)/kg obtained from the Langevin fit of the magnetization curve (Fig. 8). Note that Sharifi Dehsari et al. 26 reported a disordered spin layer having a thickness of 0.54 nm. As reported by Morales et al. 22, an additional reason for the reduction of the saturation magnetization might be a certain degree of spin disorder even in the volume of the IONFs as a result of an inhomogeneous ordering of the cation vacancies in the IONFs (Fig. 2d). A large part of the cores in the IONFs possessed distinct mutual crystallographic orientation relationships (Figs. 3 and 4), which resulted from their attachment along lattice planes with matching interplanar spacings. The attachment of cores along lattice planes with the same interplanar spacing is a phenomenon that was observed even in dual-phase systems with different crystal structures of the counterparts 76. Such cores are not mutually coherent for XRD and can easily be distinguished by FFT/HRTEM because of their different crystallographic orientations. In contrast to XRD and HRTEM, low-magnification HAADF-STEM cannot distinguish these two kinds of cores from each other directly, but identifies them simply as convex parts of the IONFs. Furthermore, it should be mentioned that the determination of the size of the cores from low-magnification HAADF-STEM images does not succeed when the cores overlap in the projection direction. However, this was rarely the case in our IONFs.

Figure 9 Schematic illustration of the hierarchical structure of a dextran-coated IONF, adapted from Gavilán et al. 35 and modified. Hydrodynamic diameter \(D_h\), particle diameter \(D_P\), core diameter \(D_C\) and diameter of the core fragments \(D_F\) are indicated. Red and purple arrows mark neighboring cores with lattice planes with matching interplanar spacings and fragmented cores with nearly identical crystallographic orientation, respectively.
The IONFs under study are agglomerates of cores consisting of individual NCs. The size of the IONFs was quantified using both HRTEM and HAADF-STEM (Table 1). Still, low-magnification HAADF-STEM is more reliable than HRTEM from the statistical point of view, because it allows more IONFs to be analyzed (Fig. 6). The accuracy of low-magnification HAADF-STEM for the determination of the size of the IONFs is sufficient, as only one segmentation step, i.e., the semi-automatic segmentation based on a marker-based watershed algorithm, is required 45,66. From the point of view of the magnetic properties, the IONFs can behave as magnetic particles with uniform orientation of magnetic moments, even if their cores are crystallographically non-coherent. Still, adjacent cores should be attached along specific lattice planes, as in Figs. 3 and 4, and the angle between the easy magnetization axes of the individual cores should be small. Therefore, a cooperative magnetic behavior is expected also within the multi-core IONFs. A magnetic coupling was confirmed by the presence of magnetic particles having a size of \((20 \pm 5)\) nm, as concluded from the Langevin fit of the magnetization curve (Fig. 8). This particle size agrees very well with the size of the IONFs, which was \((20 \pm 4)\) nm and \((19 \pm 4)\) nm according to HAADF-STEM and HRTEM, respectively. The structure of the IONFs under study can be summarized as follows. The IONFs with the size \(D_P\) are composed of several cores having the size \(D_C\) (Fig. 9). The cores consist of several NCs having the size \(D_F\). Individual NCs contain maghemite with the average chemical composition \(\gamma\)-Fe\(_{2.72\pm 0.02}\)O\(_4\) and with partially ordered vacancies on the metallic positions (Fig. 2d). The main driving force for the clustering of the NCs and for the formation of the cores and IONFs is the minimization of the surface energy via oriented attachment of primary NCs along certain crystallographic facets 33,40,41,42,43. This mechanism generally involves rotations of the NCs in three-dimensional space until they share the same facets 77. However, this process depends strongly on the reaction conditions. It has been shown previously that the internal structure of IONFs is influenced by many different parameters of the synthesis process, e.g., by the nature of the polyol solvent 41,43, by the heating temperature, heating time and heating rate 38,39,78, by the stoichiometry of the iron precursor 10,39 and by the presence and concentration of a reducing agent 32,41,78. The arrangement of the cores in IONFs is controlled primarily by the kinetics of the nucleation and aggregation of the primary NCs, which in turn depends on the type of polyol used for the synthesis 43. Higher formation and growth rates of the NCs cause a faster aggregation, resulting in a higher misalignment of the NCs within the IONFs. As we observed not only a fully epitaxial alignment but also specific orientation relationships between individual NCs building up the IONFs, we can conclude that the nucleation and aggregation of the NCs in our IONFs were slightly too fast. Consequently, not all NCs had enough time to order into the same crystallographic orientation. Some NCs were just oriented along specific lattice planes that were parallel to each other. This kind of alignment of NCs might partially reduce the surface energy but also inhibit a full alignment of the NCs.
Moreover, this alignment of NCs produces local strain fields, which are compensated by crystal structure defects, possibly dislocations (Fig. 3).

Conclusions

A combination of TEM, XRD and DLS disclosed the hierarchical architecture of dextran-coated multi-core IONFs prepared by a polyol method. The TEM measurements combined high-resolution (HRTEM with FFT and GPA) and low-resolution (HAADF-STEM) modes in a correlative multiscale approach in order to describe the internal structure of the IONFs on the atomic scale, including the orientation relationships between individual NCs and cores, and to determine the size distribution of the constituents in a statistically relevant manner. It was shown that the basic units of the IONFs are maghemite NCs with partially ordered vacancies on the iron sites. NCs with distinct crystallographic orientation relationships form magnetic cores, which agglomerate and build up the IONFs. Neighboring cores were typically attached by sharing lattice planes with the same interplanar distance. The presence of these objects was confirmed by the Langevin fit of the magnetization curve measured using AGM. As the magnetic sizes of the NCs, of the cores and of the IONFs were very close to the corresponding sizes obtained from the microstructure analysis, it was concluded that the magnetic moments of individual NCs interact mutually. It was shown that the magnetic interaction between individual NCs and cores is strongly affected by their mutual crystallographic orientation. The strongest coupling of magnetic moments was observed between neighboring NCs that had almost the same crystallographic orientation and that formed the magnetic cores. A weaker but still existing magnetic interaction was detected between the magnetic cores within individual IONFs, which had a distinct orientation relationship but no full crystallographic coherence. From the difference between the particle sizes obtained from the microstructure analysis and from the magnetic measurement, it was concluded that the magnetic cores have a disordered spin layer at the rim. This layer, which has a thickness of approximately 0.5 nm, reduces the saturation magnetization of the IONFs, together with the inhomogeneous ordering of the vacancies on the iron sites in \(\gamma\)-Fe\(_{2.72\pm 0.02}\)O\(_4\).

Data availability

The datasets analyzed in the current study are available from the corresponding author on request.
Instead, other specific crystallographic orientation relationships can promote magnetic interaction, while a fully random alignment can deteriorate the magnetic properties. The research aims to optimize the production of particles with specific magnetic properties, which is crucial for their use in medical applications.
Iron oxide nanoparticles are often used in medical technology as contrast agents for magnetic resonance imaging or as transport agents for drugs in the bloodstream, for example in tumor therapy. For these applications, the nanoparticles have to be biocompatible and superparamagnetic. Thus, they must be strongly magnetizable in a magnetic field and also must lose their magnetization when the magnetic field is switched off. Using analytical high-resolution transmission electron microscopy, a team at TU Bergakademie Freiberg investigated how the magnetic properties of the nanoparticles can further be improved via microstructure design. The researchers published their results in the current issue of Scientific Reports. The knowledge of the exact structure of the iron oxide nanoparticles sized between 20 and 30 nanometers helps to optimize the manufacturing process and to improve the magnetic properties of the particles systematically. Each particle consists of at least two superparamagnetic nanocrystalline cores and a shell that does not contribute to the magnetic properties. The maximum magnetization of the nanoparticles depends on the mutual orientation of the individual cores. How well are the cores oriented to each other? "The current state of research assumed that a strong alignment of magnetic moments in multi-core iron oxide nanoparticles is enabled by the same crystallographic orientation of individual cores. However, our analyses showed that this is not necessarily true," says Stefan Neumann, research associate at TU Bergakademie Freiberg and the first author of the publication. "Also other, but still specific crystallographic orientation relationships of the cores can promote their magnetic interaction. Nevertheless, a fully random alignment of the cores deteriorates the magnetic properties of the nanoparticles," says Neumann. "In order to be able to produce highly superparamagnetic iron oxide nanoparticles for future applications in medicine on demand, we need knowledge of their internal structure," says co-author Prof. David Rafaja, head of the Institute of Materials Science at TU Bergakademie Freiberg. "During the production of the nanoparticles, individual cores are formed first. When the cores get more time to align in the right way, then the magnetic properties of the nanoparticles can further be improved." Background: Analyzing ultra-fine particles The results were obtained within the priority program "MehrDimPart—Highly Specific and Multidimensional Fractionation of Fine Particle Systems with Technical Relevance." The aim of the research is to develop technological approaches that enable a controlled production of highly specific and technologically relevant particle systems with desired properties. In addition to the team from TU Bergakademie Freiberg, scientists from the Karlsruhe Institute of Technology have also contributed to the current publication. The basic research behind this work was focused on the structure of the nanoparticles to be able to optimize the production of particles with specific magnetic properties. A toxicological study was not carried out. |
10.1038/s44161-022-00041-9 | AI predicts if—and when—someone will have cardiac arrest | A new artificial intelligence-based approach can predict, significantly more accurately than a doctor, if and when a patient could die of cardiac arrest. The technology, built on raw images of patient's diseased hearts and patient backgrounds, stands to revolutionize clinical decision making and increase survival from sudden and lethal cardiac arrhythmias, one of medicine's deadliest and most puzzling conditions. The work, led by Johns Hopkins University researchers, is detailed today in Nature Cardiovascular Research. "Sudden cardiac death caused by arrhythmia accounts for as many as 20 percent of all deaths worldwide and we know little about why it's happening or how to tell who's at risk," said senior author Natalia Trayanova, the Murray B. Sachs professor of Biomedical Engineering and Medicine. "There are patients who may be at low risk of sudden cardiac death getting defibrillators that they might not need and then there are high-risk patients that aren't getting the treatment they need and could die in the prime of their life. What our algorithm can do is determine who is at risk for cardiac death and when it will occur, allowing doctors to decide exactly what needs to be done." The team is the first to use neural networks to build a personalized survival assessment for each patient with heart disease. These risk measures provide with high accuracy the chance for a sudden cardiac death over 10 years, and when it's most likely to happen. The deep learning technology is called Survival Study of Cardiac Arrhythmia Risk (SSCAR). The name alludes to cardiac scarring caused by heart disease that often results in lethal arrhythmias, and the key to the algorithm's predictions. The team used contrast-enhanced cardiac images that visualize scar distribution from hundreds of real patients at Johns Hopkins Hospital with cardiac scarring to train an algorithm to detect patterns and relationships not visible to the naked eye. Current clinical cardiac image analysis extracts only simple scar features like volume and mass, severely underutilizing what's demonstrated in this work to be critical data. "The images carry critical information that doctors haven't been able to access," said first author Dan Popescu, a former Johns Hopkins doctoral student. "This scarring can be distributed in different ways and it says something about a patient's chance for survival. There is information hidden in it." The team trained a second neural network to learn from 10 years of standard clinical patient data, 22 factors such as patients' age, weight, race and prescription drug use. The algorithms' predictions were not only significantly more accurate on every measure than doctors, they were validated in tests with an independent patient cohort from 60 health centers across the United States, with different cardiac histories and different imaging data, suggesting the platform could be adopted anywhere. "This has the potential to significantly shape clinical decision-making regarding arrhythmia risk and represents an essential step towards bringing patient trajectory prognostication into the age of artificial intelligence," said Trayanova, co-director of the Alliance for Cardiovascular Diagnostic and Treatment Innovation. "It epitomizes the trend of merging artificial intelligence, engineering, and medicine as the future of healthcare." The team is now working to build algorithms now to detect other cardiac diseases. 
According to Trayanova, the deep-learning concept could be developed for other fields of medicine that rely on visual diagnosis. The team from Johns Hopkins also included: Bloomberg Distinguished Professor of Data-Intensive Computation Mauro Maggioni; Julie Shade; Changxin Lai; Konstantinos Aronis; and Katherine Wu. Other authors include: M. Vinayaga Moorthy and Nancy Cook of Brigham and Women's Hospital; Daniel Lee of Northwestern University; Alan Kadish of Touro College and University System; and David Ouyang and Christine Albert of Cedars-Sinai Medical Center. | A new artificial intelligence-based approach, developed by Johns Hopkins University researchers, can predict with high accuracy whether a patient with heart disease will die of cardiac arrest and when it will occur. The technology, called Survival Study of Cardiac Arrhythmia Risk (SSCAR), uses raw images of patients' diseased hearts and patient backgrounds to build a personalized survival assessment for each patient. The algorithm, trained on contrast-enhanced cardiac images and 10 years of standard clinical patient data, can detect patterns and relationships not visible to the naked eye and predict the chance of sudden cardiac death over 10 years and when it's most likely to happen. The team's approach significantly outperformed doctors in predicting cardiac death and has the potential to revolutionize clinical decision-making, increasing survival rates from sudden and lethal cardiac arrhythmias. | None |

Abstract

Sudden cardiac death from arrhythmia is a major cause of mortality worldwide. In this study, we developed a novel deep learning (DL) approach that blends neural networks and survival analysis to predict patient-specific survival curves from contrast-enhanced cardiac magnetic resonance images and clinical covariates for patients with ischemic heart disease. The DL-predicted survival curves offer accurate predictions at times up to 10 years and allow for estimation of uncertainty in predictions. The performance of this learning architecture was evaluated on multi-center internal validation data and tested on an independent test set, achieving concordance indexes of 0.83 and 0.74 and 10-year integrated Brier scores of 0.12 and 0.14. We demonstrate that our DL approach, with only raw cardiac images as input, outperforms standard survival models constructed using clinical covariates. This technology has the potential to transform clinical decision-making by offering accurate and generalizable predictions of patient-specific survival probabilities of arrhythmic death over time.

Main

Sudden cardiac death (SCD) continues to be a leading cause of mortality worldwide, with an incidence of 50–100 per 100,000 in the general population in Europe and North America 1, and accounts for 15–20% of all deaths 2. Patients with coronary artery disease are at the highest risk of arrhythmic sudden cardiac death (SCDA) 3,4. Although implantable cardioverter-defibrillators (ICDs) effectively prevent SCD due to ventricular arrhythmias, current clinical criteria for ICD candidacy—that is, left ventricular ejection fraction (LVEF) <30–35% 5—capture a mere 20% of all SCDAs 6, highlighting the critical need to develop personalized, accurate and cost-effective arrhythmia risk assessment tools to mitigate this enormous public health and economic burden. Several studies have identified risk factors for SCDA, and many risk stratification approaches have attempted to transcend LVEF 7,8.
However, limitations in these approaches have been barriers to their clinical implementation. Previous attempts have broadly stratified populations based on subgroup risk, failing to customize predictions to patients' unique clinical features 9. SCDA risk has typically been assessed at pre-defined finite time points, ignoring the likely patient-specific time evolution of the disease 10. Additionally, in previous work, confidence estimates for predictions have been 'one-size-fits-all', varying only by risk subgroup, thus preventing the identification of low-confidence, potentially highly erroneous prediction outliers 11. Moreover, few previous studies have validated their results externally or comprehensively compared model performance to standard approaches. A robust, generalizable SCDA risk stratifier with the ability to predict individualized, patient-specific risk trajectories and confidence estimates could considerably enhance clinical decision-making. Finally, although arrhythmia arises, mechanistically, from the heterogeneous scar distribution in the disease-remodeled heart, machine learning the features of that distribution has not been explored for risk analysis. Image-derived mechanistic computational models of cardiac electrical function that incorporate scar distribution have proven successful in predicting arrhythmia risk 12; however, they remain exceedingly computationally intensive. Therefore, computational models are impractical as a first-stage screening tool in a broad population. Using raw contrast-enhanced (late gadolinium enhancement (LGE)) cardiac images that visualize scar distribution in a DL framework, which additionally draws on standard clinical covariates, could overcome these limitations and lead to accurate patient-specific SCDA probabilities in fractions of a second.

Here we present a DL technology for prediction of SCDA risk in patients with ischemic heart disease. Our approach, which we term Survival Study of Cardiac Arrhythmia Risk (SSCAR), embeds, within a survival model, neural networks to estimate individual patient times to SCDA (\(T_{\text{SCDA}}\)). The neural networks learn from raw clinical imaging data, which visualize heart-disease-induced scar distribution, as well as from clinical covariates. The predicted patient-specific survival curves offer accurate SCDA probabilities at all times up to 10 years. The performance and high generalizability of the approach are demonstrated by testing on an external cohort, after internal cross-validation. Our technology represents a fundamental change in the approach to arrhythmia risk assessment, as SSCAR uses the data to directly estimate uncertainty in its predictions. Therefore, SSCAR has the potential to considerably shape clinical decision-making regarding arrhythmia risk, offering not a simple 'at-risk/not-at-risk' prediction but, instead, an estimate of the time to SCDA together with a sense of 'how certain' the model is about each predicted \(T_{\text{SCDA}}\).

Results

SSCAR overview

The arrhythmia risk assessment algorithm in SSCAR is a DL framework that incorporates multiple custom neural networks (which fuse different data types), combined with statistical survival analysis, to predict patient-specific probabilities of SCDA at future time points. Figure 1 presents an overview of SSCAR. On the left and right, cardiac magnetic resonance (CMR) images and clinical covariates (yellow panel) are used as inputs to the two corresponding branches of the model.
The goal of each of the branches is to predict the patient-specific survival curve. In the left branch, cardiac CMR images—visualizing the patients' three-dimensional (3D) ventricle geometry and contrast-enhanced remodeled tissue—are used as input by a custom-designed encoder–decoder convolutional neural sub-network (red panel, left). This CMR sub-network is trained to reduce the dimension of the input (that is, encode) and to discover and extract imaging features associated with SCDA risk directly from the CMR images by learning and applying filters (that is, convolving). The encoder–decoder design of the sub-network ensures that the resulting imaging features retain sufficient information to be able to reconstruct the original images (red panel, left, decoder path). In the right branch, the 22 clinical covariates in Table 1 are provided to a dense sub-network (green panel, right), which discovers and extracts non-linear relationships between the input variables. The outputs of the sub-networks are combined (ensembled) in a way that best fits the observed SCDA event training data (center path, dot-dashed) to estimate the most probable \(T_{\text{SCDA}}\) and the uncertainty in the prediction. The output of the model is a per-patient cause-specific survival curve (bottom, blue).

Fig. 1: Schematic overview of SSCAR. Top panel (yellow) shows patient data used in this study. SSCAR uses contrast-enhanced (LGE)-CMR images with the left ventricle automatically segmented (left inset) and clinical covariates (right inset; see Methods and Table 1 for a complete list) as inputs to the two sub-networks (left and right pathways). Labels associated with each patient (SCDA events, middle inset, dot-dashed contour)—consisting of the observed times to event and indicators of whether the events were SCDA or non-SCDA—are used as targets during training only. LGE-CMR data is taken as input by a 3D convolutional neural network constructed using an encoder–decoder architecture (red panel, left). Clinical covariates are fed to a dense neural network (green panel, right). The sub-networks are trained to estimate two parameters (location μ and scale σ) specific to each patient, which fully characterize the probability distribution of the patient-specific time to SCDA (top blue panel; the time to SCDA is modeled as probabilistic, assumed to follow a log-logistic distribution). During training (dot-dashed arrows and white middle panel), the neural network weights are optimized via a maximum likelihood process, in which a probability distribution is sought (blue double-headed arrow in the middle white panel) to best match the observed survival data (yellow 'x's in the middle white panel). Finally, the optimized probability function is used on test LGE-CMR images and covariates to predict patient-individualized survival curves (blue bottom panel).

Table 1 Clinical covariate data.

SSCAR overall risk prediction performance

SSCAR was developed and internally validated using data from 156 patients with ischemic cardiomyopathy (ICM) enrolled in the Left Ventricle Structural Predictors of Sudden Cardiac Death (LVSPSCD) prospective observational study 11,13. SSCAR performance was evaluated comprehensively on this internal set using Harrell's concordance index (c-index) 14—range is [0, 1], higher scores are better—and the integrated Brier score (\(\overline{\text{Bs}}\)) 15—range is [0, 1], lower scores are better.
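For readers unfamiliar with these metrics, a small self-contained sketch of both is given below. This is an illustrative implementation, not the authors' evaluation code; in particular, the Brier score here omits the inverse-probability-of-censoring weighting that a rigorous estimate would use.

```python
import numpy as np

def harrell_c_index(time, event, risk):
    """Harrell's c-index: fraction of comparable patient pairs whose
    predicted risk ordering matches the observed event ordering.

    time  : observed times (event or censoring), array
    event : 1 if SCDA occurred, 0 if censored, array
    risk  : predicted risk scores (higher = earlier predicted event), array
    """
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue                      # a pair is comparable only if the earlier time is an event
        for j in range(n):
            if time[i] < time[j]:         # patient i failed before patient j was last seen
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5     # ties in predicted risk count as half-concordant
    return concordant / comparable

def brier_score(surv_prob_at_t, time, event, t):
    """Unweighted Brier score at horizon t (no censoring weights, for brevity)."""
    alive = (time > t).astype(float)      # 1 if known event-free at t
    keep = (time > t) | (event == 1)      # drop patients censored before t (status unknown)
    return np.mean((alive[keep] - surv_prob_at_t[keep]) ** 2)
```

The integrated Brier score reported in the paper is this quantity averaged over horizons up to the chosen maximum time.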
SSCAR has excellent concordance on the internal set (0.82–0.89) for all times up to 10 years (Fig. 2a). Additionally, the \(\overline{\text{Bs}}\) ranges from 0.04 to 0.12, suggesting strong calibration, given the high concordance. The model maintains its risk discrimination abilities at all times, as further evidenced by the high areas under the receiver operating characteristic (ROC) curves evaluated at years 2–9 (Extended Data Fig. 1). All events up to 10 years are used to construct the cross-validated ROC and precision-recall (PR) curves for the internal validation set (Fig. 2b,c). The area under the ROC curve is 0.87 (95% confidence interval (CI): 0.84–0.90), whereas the area under the PR curve is 0.93 (95% CI: 0.91–0.95).

Fig. 2: SSCAR overall performance. a, C-index (top, blue) measuring model risk discrimination—higher is better—and integrated Brier score (bottom, red) showing overall fit—lower is better—for various time points. b, ROC curve at 10 years for the internal validation and external test cohorts, with the respective areas under the curve (AUROC). c, PR curve at 10 years for the internal validation and external test cohorts, with the respective areas under the curve (AUPR). For all panels, shaded areas represent approximate 95% CIs; solid and dashed lines indicate averages for the internal and external cohorts, respectively; and random chance performance thresholds are shown using dotted lines (the dot-dashed line is used to differentiate the internal random chance performance from the external). The chosen time of 10 years was used to capture all SCDA events in the population.

To demonstrate the model's performance, an external test was performed using an independent case–control set of 113 patients with coronary heart disease selected from participants with available CMR images and the same list of covariates enrolled in the PRE-DETERMINE study 16. These patients had less severe left ventricular systolic dysfunction but otherwise had similar inclusion/exclusion criteria to patients in the LVSPSCD study (Methods). Despite the dissimilarities between cohorts, SSCAR performance carries over well to the external cohort, resulting in a c-index of 0.71–0.77 and \(\overline{\text{Bs}}\) of 0.03–0.14 (Fig. 2a; dashed lines). The area under the ROC curve is 0.72 (95% CI: 0.67–0.77), and the area under the PR curve is 0.73 (95% CI: 0.68–0.78), on the external set (Fig. 2b,c).

Patient-specific survival curves predicted by SSCAR

The SSCAR survival model presented here predicts cause-specific survival curves for each patient through two individualized parameters—the location μ and scale σ—characterizing the probability distribution of \(T_{\text{SCDA}}\) (Methods). Using deep neural networks to directly learn these parameters from CMR images and from clinical covariates in a way that best models the survival data produces highly individualized survival probability predictions. Extended Data Fig. 2a illustrates individualized cause-specific survival curves (solid, blue) for a patient with \(T_{\text{SCDA}}\) around 6 years (left panel) and a patient censored (non-SCDA event) at around 7 years (right panel). In both cases, the survival curves estimated by SSCAR accurately predict the event probabilities: in the first case, the estimated survival probability crosses the 50% threshold close to the event time; in the censored case, SSCAR predicts more than 80% probability of survival at the time of the (non-SCDA) event.
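Concretely, given the two learned parameters, a patient's predicted survival curve is just the log-logistic survival function. A minimal sketch follows; the two patients and their parameter values are hypothetical, used only to show how μ and σ shape the curve.

```python
import numpy as np

def sscar_survival_curve(t_years, mu, sigma):
    """Log-logistic survival function used by SSCAR:
    S(t) = 1 / (1 + exp((ln t - mu) / sigma))."""
    t = np.asarray(t_years, dtype=float)
    return 1.0 / (1.0 + np.exp((np.log(t) - mu) / sigma))

t = np.linspace(0.1, 10.0, 100)  # years
# Hypothetical patients: exp(mu) is the predicted median time to SCDA
# (S(exp(mu)) = 0.5), while sigma encodes per-patient uncertainty.
S_high_risk = sscar_survival_curve(t, mu=np.log(6.0), sigma=0.15)
S_low_risk = sscar_survival_curve(t, mu=np.log(40.0), sigma=0.40)
print(S_high_risk[np.searchsorted(t, 6.0)])  # ~0.5 at t = exp(mu) = 6 years
```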
For reference, two commonly used survival curves are depicted: the Kaplan–Meier estimate (purple, dot-dashed) and the Breslow estimate based on a Cox proportional hazards model using the clinical covariates (green, dashed), demonstrating worse performance by underestimating the risk for the patient with SCDA and overestimating the risk for the censored patient. Further details on SSCAR's internal performance compared to the Cox proportional hazards model are presented in Fig. 3.

Fig. 3: Model comparison. In blue (left y axis), c-index (dark blue), balanced accuracy (BA, mid-blue) and F-score (light blue), and in red (right y axis), integrated Brier score (\(\overline{\text{Bs}}\)), are shown for a standard Cox proportional hazards model fit on the clinical covariates (linear Cox PH), the covariate sub-network of the SSCAR approach using clinical covariates with a Cox survival model (covariate only, Cox), the covariate sub-network of SSCAR using clinical covariates and the log-logistic survival model (covariate only), the CMR sub-network using images only (CMR only) and the full arrhythmia prediction neural network model (SSCAR). Random chance performance thresholds are shown using dotted lines. All performance measures are calculated using data up to τ = 10 years. All model comparison values are based on averages over 100 cross-validation train/test splits of the internal validation dataset. The error bars represent approximate 95% CIs.

The predicted location parameter estimates the most probable \(T_{\text{SCDA}}\), and the predicted scale parameter provides a measure of confidence for the location. The inclusion of both a location and a scale parameter in the model offers the advantage of building uncertainty directly into the \(T_{\text{SCDA}}\) prediction. Notably, this uncertainty is patient-specific and learned from data. Extended Data Fig. 2b presents examples of predicted \(T_{\text{SCDA}}\) probability distributions for two patients (P1 and P2) with different scale parameters, visualized as the widths of the distributions. Shown are the actual (dotted) and predicted (solid) \(T_{\text{SCDA}}\) as well as the probability distributions (shaded). For P1, the prediction error is small (solid versus dashed vertical lines), and the model is certain, as seen by the narrower probability distribution of P1's \(T_{\text{SCDA}}\) or, equivalently, a smaller predicted scale parameter. In the case of P2, the prediction error is larger, and the model predicts a wider distribution or, equivalently, a larger scale parameter, indicating higher uncertainty. Remarkably, using the entire internal cohort to quantify this direct relationship between prediction error—calculated as the relative mean absolute difference of actual and predicted times—and scale parameter reveals significant positive correlation (Pearson's r = 0.42, P < 0.001), demonstrating that SSCAR recognizes which predictions of \(T_{\text{SCDA}}\) will turn out inaccurate and 'lowers the confidence' in them through a larger scale parameter.

Image-based risk prediction

The CMR sub-network (see Extended Data Fig. 3 for architecture details) in SSCAR integrates neural network DL on images within an overall statistical survival model. This branch of SSCAR uses LGE-CMR—a modality uniquely suited for visualizing ventricle geometry and portions of the myocardium with contrast-enhanced remodelling—to learn the image features most useful in predicting a patient's \(T_{\text{SCDA}}\).
Raw CMR pixel values from the automatically segmented left ventricle are directly provided to the network, eliminating the need for arbitrary thresholds aiming to delineate areas of enhancement. Using only images as inputs (Fig. 3 and Supplementary Table 1), SSCAR achieves a c-index of 0.70 (95% CI: 0.67–0.72) and a \(\overline{\text{Bs}}\) of 0.17 (95% CI: 0.167–0.178) for event data truncated at 10 years on the internal validation set. On the external testing set, the CMR-only model achieves a c-index of 0.63 (95% CI: 0.59–0.66) and a \(\overline{\text{Bs}}\) of 0.19 (95% CI: 0.186–0.200). It is noteworthy that the covariate sub-network's 22 clinical covariates already include manually engineered features from the CMR images. For example, infarct size—calculated as the percentage of left ventricular tissue deemed fibrotic using manual segmentation performed by trained experts—was among the 22 and, indeed, had a considerable effect on lowering \(T_{\text{SCDA}}\). Despite the inclusion of these CMR-based features in the covariate network, the CMR sub-network (using only CMR as inputs) achieves similar performance to the covariate one (Fig. 3). Furthermore, ensembling the two sub-networks together leads to a significant increase in overall performance compared to using just the covariate-based one, demonstrating that the CMR sub-network identifies different CMR-based features than the manually engineered ones.

Imaging features learned by the CMR network can be interpreted using a gradient-based sensitivity analysis (Fig. 4a). The gradient here quantifies the effect on the predicted \(T_{\text{SCDA}}\) of features identified by the CMR neural network, which are averaged per patient to form the gradient map (Methods). This map overlaid on the myocardium (right column, blue and red heat map) shows the degree of contribution of the local pixel intensity to the most probable \(T_{\text{SCDA}}\) (that is, to the location parameter) for a patient without an SCDA event (top) and one with SCDA (bottom). Myocardial regions characterized by a large positive gradient (dark blue) are interpreted as having high importance in increasing \(T_{\text{SCDA}}\), and, conversely, regions with a large-magnitude negative gradient (dark red) represent areas that are responsible for decreasing the predicted \(T_{\text{SCDA}}\). The areas of contrast-enhanced myocardium (middle column, in brighter green) do not fully overlap with the gradient map, which suggests that, although features learned by the CMR neural network may co-localize with enhanced tissue, the algorithm does not act as a mere enhancement locator. For example, the patient who did not experience SCDA has contrast-enhanced tissue, but the effect of these regions is to increase the predicted \(T_{\text{SCDA}}\), suggesting a nuanced relationship between the presence of enhancement and the propensity for SCDA.

Fig. 4: SSCAR interpretation. The features learned by SSCAR are interpreted by performing a gradient-based sensitivity analysis of the location parameter (the most probable \(T_{\text{SCDA}}\)) to changes in the neural network input or features. The gradient value quantifies this sensitivity. The magnitude of the gradient measures the strength of the sensitivity of the predicted \(T_{\text{SCDA}}\) to inputs or intermediary features. The sign of the gradient shows the direction of the effect. That is, for a small increase in the value of inputs or features, a positive gradient (blue) indicates a higher predicted \(T_{\text{SCDA}}\), whereas a negative gradient (red) indicates a decrease in the predicted \(T_{\text{SCDA}}\).
a, Shown is the CMR sub-network feature interpretation for an example patient who did not experience SCDA (No SCDA, top) and for a patient who did (SCDA, bottom). For each patient, a subset of 3 of the 12 contrast-enhanced short-axis CMR images (corresponding to three locations in the heart, base to apex, top to bottom, left column) used as inputs by SSCAR are overlaid with blood pool and myocardium segmentation (middle column, orange and green, respectively). A heat map of extracted features scaled by the value of the gradient shows the contribution of the local pixel intensity to the predicted location parameter for the last convolutional layer (right column, blue and red heat maps). Of note, although the patient with SCDA shows high gradients in areas with contrast enhancement, the patient without SCDA shows that enhancement can also lead to positive gradients, suggesting that the network does not simply create a mask of the enhanced regions to make predictions but learns a nuanced relationship between scar and propensity for SCDA. b, Covariate sub-network interpretation based on an average of all patients (mid-blue and mid-red bars), patients with SCDA (dark blue and dark red bars) and patients with no SCDA (light blue and light red bars). The top four highest (blue bars) and bottom four lowest (red bars) average gradients of the neural network output (that is, the predicted location parameter) with respect to the clinical covariate inputs are shown. The error bars represent approximate 95% CIs. LVEF CMR, left ventricular ejection fraction computed from CMR; betablock, use of β-blocker medication; ECG hr, heart rate from ECG; digoxin, use of digoxin medication; infarct %, infarct size as % of total volume; ECG QRS, QRS complex duration from ECG; LV mass ED, left ventricular mass in end-diastole.

Non-linear neural network for covariate data

SSCAR incorporates patient clinical covariate data (Table 1) through the use of a dense, multi-layer neural network (Fig. 1, green panel). This sub-network discovers and extracts potential non-linear relationships between the covariates and integrates them within SSCAR's overall survival predictions. We demonstrate the utility of the sub-network by comparing its performance with a (linear) Cox proportional hazards model (Fig. 3). To avoid mis-attributing performance differences to the underlying statistical models, we consider an intermediary model that uses neural network feature extraction with a Cox proportional hazards model. Using clinical covariate data only, SSCAR with a Cox survival model (covariate only, Cox) outperforms the standard Cox proportional hazards model (linear Cox PH) in terms of c-index (0.73 versus 0.58, dark blue, left y axis), balanced accuracy (0.65 versus 0.45, mid-blue, left y axis), F-score (0.78 versus 0.69, light blue, left y axis) and \(\overline{\text{Bs}}\) (0.14 versus 0.30, red, right y axis). We show that the neural network model maintains interpretability by performing a sensitivity analysis of the predicted \(T_{\text{SCDA}}\) with respect to changes in the covariates (Fig. 4b). As above, high positive gradients (blue) denote covariates for which small increases in their values lead to large increases in \(T_{\text{SCDA}}\), whereas large-magnitude negative gradients (red) represent covariates for which small increases lead to large decreases in \(T_{\text{SCDA}}\).
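A minimal sketch of this kind of covariate sensitivity analysis is given below. The use of PyTorch is an assumption (the paper does not specify the framework here), and the model interface is hypothetical; the sketch only illustrates the input-gradient computation described above.

```python
import torch

def covariate_saliency(model, covariates):
    """Average gradient of the predicted location parameter (the most
    probable T_SCDA) with respect to each input covariate, over patients.

    model      : maps a (n_patients, 22) tensor to (n_patients, 2) = (mu, sigma)
    covariates : standardized covariate tensor, shape (n_patients, 22)
    """
    x = covariates.clone().requires_grad_(True)
    mu = model(x)[:, 0]        # location parameter for each patient
    mu.sum().backward()        # gradients of mu with respect to every input entry
    return x.grad.mean(dim=0)  # average over patients: one value per covariate

# Positive entries: covariates whose increase raises the predicted T_SCDA;
# negative entries: covariates whose increase lowers it.
```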
The top four positive gradient covariates are LVEF computed from CMR, β-blocker medication, heart rate computed from ECG and use of digoxin. The bottom four negative gradient covariates are left ventricular mass at end-diastole, use of diuretic medication, QRS duration computed from ECG and infarct size (%).

Discussion

Here we present an approach to SCDA risk assessment, termed the SSCAR framework, which uses a deep neural network survival model to predict patient-specific survival curves in ischemic heart disease. SSCAR consists of two neural networks: (1) a 3D convolutional network learning on raw LGE-CMR images, unsegmented for enhancement, that visualize heart-disease-induced scar distribution and (2) a fully connected network operating on clinical covariates. SSCAR is not only a highly flexible model, able to capture complex imaging and non-imaging feature inter-dependencies, but is also robust owing to the statistical framework governing the way these features are combined to fit the survival data. Our framework predicts entire probability distributions for the \(T_{\text{SCDA}}\), allowing the uncertainties in predictions to be themselves patient-specific and learned from data, thereby equipping the model with a self-correction mechanism. This approach remedies a well-known important limitation of neural networks—the high confidence in erroneous predictions. SSCAR's integration of deep neural network learning within a survival analysis and the resulting detailed outputs could represent a paradigm shift in the approach to SCDA risk assessment.

Despite many heralding DL as the arrival of the artificial intelligence (AI) age in personalized healthcare 17,18,19,20,21, no considerable progress has so far been made using DL on contrast-enhanced cardiac images to assess arrhythmia risk. Although there have been non-DL efforts to incorporate clinical imaging-derived features in SCDA risk stratification 22,23,24, these severely underuse the data, suffering from two main limitations: features often rely on time-consuming, manual processing steps, typically involving arbitrarily chosen image intensity thresholds; or features are either too coarse to capture the intricacies of the scar distribution or highly mathematical, undermining their physiological underpinning. On the other hand, the DL efforts related to arrhythmia have focused primarily on its cardiologist-level detection in ECG signals 25,26,27,28,29. In the current work, we present a DL approach that takes raw LGE-CMR images, without enhancement segmentation, directly as input and automatically identifies features that best model and predict the \(T_{\text{SCDA}}\). SSCAR is an SCDA risk prediction model that combines raw imaging with other data types in the same DL framework. Our technology operates on LGE-CMR images and clinical covariates within a unified feature learning process, allowing the different data types to synergistically inform the overall survival model. Among the clinical covariates used in SSCAR are standard manually derived imaging features, which prevents the CMR neural network from merely re-discovering these known features and, instead, encourages it to learn new features. SSCAR achieves performance that is beyond the state-of-the-art in both relative terms—SCDA risk ordering among patients—and absolute terms—accurately calibrated probabilities of SCDA.
Our robust testing scheme overcomes important limitations of previous work on SCDA risk prediction 10,16,22,23,30. First, we demonstrate high generalizability by computing internal cross-validation performance numbers resulting from 100 train/test splits of the data and, notably, on an entirely separate external cohort, showing modest performance degradation. Second, our approach prevents the model from being over-tuned to a certain time horizon by computing performance metrics at multiple time points up to 10 years. Because SSCAR is a combination of neural networks, each working on different data types (images and clinical covariates), we were able to perform a comprehensive bottom-up analysis of overall performance. We demonstrated that the added complexity of our DL approach—potentially at some expense to interpretability—is justified by the significantly elevated performance numbers. Indeed, we developed and evaluated a regularized Cox proportional hazards model using the available clinical covariates to serve as a baseline for the rest of the analysis. We showed that the neural-network-driven feature extraction of SSCAR on the same covariates performs significantly better in the same proportional hazards setting, highlighting the importance of non-linear relationships in the covariates. Furthermore, we showed that, even when using only LGE-CMR images to predict arrhythmia risk, the CMR neural network in SSCAR (1) outperforms the Cox proportional hazards model constructed using clinical covariates, which include standard imaging and non-imaging features, and (2) performs on par with the covariate-only network in SSCAR using the same clinical variables, suggesting that the image-only neural network in SSCAR is able to identify highly predictive imaging features in the LGE-CMR images. Finally, we demonstrate that the imaging features found by SSCAR's CMR network cannot be explained away even when considering non-linear relationships between standard covariates, as evidenced by the ensembled SSCAR model's superior performance over SSCAR using either data type alone.

Notably, a level of interpretability is embedded in the overall design of the custom neural network used in SSCAR. Interpretability of AI algorithms is paramount to their broad adoption, and concerns surrounding it are particularly prevalent in healthcare. In our approach, we take multiple steps to ensure the relevance and interpretability of the resulting features. Our sensitivity analysis of the outputs to the extracted features offers a lens into the neural network, lending some transparency to the algorithm's 'black box' (Fig. 4). In addition, CMR images taken as input by the CMR neural network are automatically segmented to include myocardium-only raw intensity values, and the network is designed as an encoder–decoder to ensure minimal loss of information during the feature extraction process.

SSCAR achieves strong performance despite working on a relatively small dataset. A concern with DL on smaller datasets is overfitting, which manifests itself as high performance during training (good fit) but poor performance when applied to a new test set. Indeed, the results in this paper show some differences between metrics on the internal validation and external test cohorts.
However, we emphasize that, although the two cohorts' covariates were 'harmonized' where possible (Methods), they represent two different distributions (for example, low versus moderately reduced LVEF, unmatched versus matched case–control and three versus 60 CMR acquisition sites), likely accounting for any performance differences in the two populations. Furthermore, several measures were taken to mitigate overfitting. In addition to standard techniques—dropout, kernel and bias regularizers—we designed the CMR sub-network as an encoder–decoder that uses the distilled features used in risk prediction to also reconstruct the original image, as an additional regularization technique. Finally, all numbers cited on the internal validation set are averages of the test performance of hundreds of train/test data splits, adding a layer of statistical rigor.

In SSCAR, we directly model the cause-specific hazard rate and use the implied survival function to make predictions. A potential shortcoming of models that do not directly model competing risks is that predicted probabilities for the event of interest assume a reality where no other type of death could occur, thereby potentially undermining interpretability. A limitation here is that we could not compute the cause-specific cumulative incidence function, as it requires additional all-cause mortality data as well as competing risk data (for example, revascularization data). However, should such data become available, our competing risk framework makes such an extension straightforward. An additional limitation of this work is that the list of covariates is not comprehensive. A few standard clinical covariates were dropped when 'harmonizing' the internal and external cohorts (for example, all diuretic types were merged into one variable, and there were no angiotensin receptor-neprilysin inhibitor data). However, because no standard left ventricle imaging covariates were excluded, we do not expect any of the omitted variables to affect conclusions drawn regarding the performance of the sub-components of SSCAR relative to the baseline Cox model. Including additional covariates identified in past work as predictors of SCDA, but not part of standard clinical practice, was beyond the scope of our work. However, these could, in principle, erode the performance of the image-based feature extraction in SSCAR in favor of the covariate-only part. Nevertheless, we would expect that, in general, including more variables with proper regularization can only improve the overall results in SSCAR, even if a re-balance of its components' performance contribution occurs. Similarly, including right ventricle CMR images and parameters, and adjusting the methodology accordingly, could help generalize SSCAR to more cardiomyopathies.

SSCAR fuses cutting-edge DL technology with modern survival analysis techniques. It represents innovation in CMR imaging feature extraction and learning of non-linear relationships among standard clinical covariates. The technology aims to transform clinical decision-making regarding arrhythmia risk and patient prognosis by encouraging practitioners to eschew the view of predicted risk as a single number outputted by a 'black-box' algorithm but, rather, to be guided by the estimated time-to-outcome in the context of patient-specific time prediction uncertainty, which is itself built into SSCAR's learning process.
Through its accurate predictions and considerable levels of generalizability and interpretability, SSCAR represents an essential step toward bringing patient trajectory prognostication into the age of AI.

Methods

The research protocol used in this study was reviewed and approved by the Johns Hopkins University institutional review board and the Brigham and Women's Hospital institutional review board. All participants provided informed consent to be part of the clinical studies described below. There was no participant compensation.

Patient population and datasets

This study was a retrospective analysis based on a subset (n = 269) of patients selected from the prospective clinical trials described below using the process outlined in Extended Data Fig. 4. Of note is that the entire model development described in this manuscript was based on the internal cohort (see below), whereas the case–control external cohort was used exclusively for testing (outcomes were solely used for computing relevant metrics once the model was fixed).

LVSPSCD cohort (internal)

Patient data came from the LVSPSCD study (ClinicalTrials.gov ID NCT01076660) sponsored by Johns Hopkins University. As previously described 11,13, patients satisfying clinical criteria for ICD therapy for SCDA (LVEF ≤35%) were enrolled at three sites: Johns Hopkins Medical Institutions (Baltimore, Maryland), Christiana Care Health System (Newark, Delaware) and the University of Maryland (Baltimore, Maryland). A total of 382 patients were enrolled between November 2003 and April 2015. Patients were excluded if they had contraindications to CMR, New York Heart Association (NYHA) functional class IV, acute myocarditis, acute sarcoidosis, infiltrative disorders (for example, amyloidosis), congenital heart disease, hypertrophic cardiomyopathy or renal insufficiency (creatinine clearance <30 ml min−1 after July 2006 or <60 ml min−1 after February 2007). The protocol was approved by the institutional review boards at each site, and all participants provided informed consent. CMR imaging was performed within a median time of 3 days before ICD implantation. The current study focused on the ischemic cardiomyopathy patient subset with adequate LGE-CMR, totaling 156 patients. As part of the clinical study, the participants had undergone single-chamber or dual-chamber ICD or cardiac resynchronization with an ICD implantation based on current guidelines. The programming of anti-tachycardia therapies was left to the discretion of the operators.

PRE-DETERMINE and DETERMINE Registry cohorts (external)

The PRE-DETERMINE (ClinicalTrials.gov ID NCT01114269) and accompanying DETERMINE (ClinicalTrials.gov ID NCT00487279) Registry study populations are multi-center, prospective cohort studies comprising patients with coronary disease on angiography or a documented history of myocardial infarction (MI). The PRE-DETERMINE study enrolled 5,764 patients with documented MI and/or mild to moderate left ventricle dysfunction (LVEF between 35% and 50%) who did not fulfill consensus guideline criteria for ICD implantation on the basis of LVEF and NYHA class (that is, LVEF >35% or LVEF between 30% and 35% with NYHA Class I heart failure) at study entry 6. Exclusion criteria included a history of cardiac arrest not associated with acute MI, current or planned ICD or life expectancy of less than 6 months.
The accompanying DETERMINE Registry included 192 participants screened for enrollment in PRE-DETERMINE who did not fulfill entry criteria on the basis of having an LVEF <30% (n = 99), an LVEF between 30% and 35% with NYHA Class II–IV heart failure (n = 19) or an ICD (n = 31) or who were unwilling to participate in the biomarker component of PRE-DETERMINE (n = 43). Within these cohorts, 809 participants had LGE-CMR imaging performed. Within this subset of patients, 23 cases of SCD occurred and were matched to four controls each on age, sex, race, LVEF and follow-up time using risk set sampling. Of the resulting 115 patients, the current study focused on 113 patients with adequate LGE-CMR images for analysis. Finally, covariate data for this cohort were minimally ‘harmonized’ with the internal cohort by retaining common covariates only. Some important differences between the external and internal cohorts remained, such as significantly higher LVEF in the external cohort. LGE-CMR acquisition The CMR images in the internal and external cohorts were acquired using 1.5-T magnetic resonance imaging devices (Signa, GE Medical Systems; Avanto, Siemens). The exact software versions of the devices cannot be retroactively ascertained, given the broad nature of the study. All were two-dimensional (2D) parallel short-axis left ventricle stacks. The contrast agent used was 0.15–0.20 mmol kg−1 of gadodiamide (Omniscan, GE Healthcare) or gadopentetate dimeglumine (Magnevist, Schering), and the scan was captured 10–30 minutes after injection. Owing to the multi-center nature of the clinical studies considered here, there were variations in CMR acquisition protocols. The most commonly used sequence was inversion recovery fast gradient echo pulse, with an inversion recovery time typically starting at 250 ms and adjusted iteratively to achieve maximum nulling of normal myocardium. Typical spatial resolutions ranged from 1.5–2.4 × 1.5–2.4 × 6–8 mm, with 2–4-mm gaps. CMR images in the external cohort were sourced from 60 sites with a variety of imaging protocols, whereas those in the internal cohort originated from three sites and were more homogeneous. No artifact corrections were applied to the images. More details on CMR acquisition can be found in previous work 11, 13, 31, 32. Clinical data and primary endpoint In both the LVSPSCD and PRE-DETERMINE/DETERMINE cohorts, baseline data on demographics, clinical characteristics, medical history, medications, lifestyle habits and cardiac test results were collected (see Table 1 for a list of the common ones between the cohorts that were used in SSCAR). The primary endpoint for LVSPSCD was SCDA, defined as therapy from the ICD for rapid ventricular fibrillation or tachycardia or a ventricular arrhythmia not corrected by the ICD. For the PRE-DETERMINE studies, the primary endpoint was sudden and/or arrhythmic death. Deaths were classified according to both timing (sudden versus non-sudden) and mechanism (arrhythmic versus non-arrhythmic). Unexpected deaths due to cardiac or unknown causes that occurred within 1 hour of symptom onset or within 24 hours of being last witnessed to be symptom free were considered sudden cardiac deaths. Deaths preceded by an abrupt spontaneous collapse of circulation without antecedent circulatory or neurological impairment were considered arrhythmic in accordance with the criteria outlined by Hinkle and Thaler 16. Deaths that were classified as non-arrhythmic were excluded from the endpoint regardless of timing.
Out-of-hospital cardiac arrests due to ventricular fibrillation that were successfully resuscitated with external electrical defibrillation were considered aborted arrhythmic deaths and were included in the primary endpoint. Data preparation The inputs to our model were the unprocessed LGE-CMR scans and the clinical covariates listed in Table 1. The training targets were the event time and event type (SCDA or non-SCDA). As a pre-processing step, the raw LGE-CMR scans were first segmented for left ventricle myocardium using a method based on convolutional neural networks developed and described in previous work 33. In brief, this segmentation network consisted of three sub-networks: a U-net with residual connections trained to identify the entire region of interest; a U-net with residual connections trained to delineate the myocardium wall; and an encoder–decoder tasked with correcting anatomical inaccuracies that may have arisen in the segmentation. In this context, anatomical correctness was defined via a list of pass/fail rules (for example, no holes in the myocardium, circularity threshold and no disconnected components). Once each patient’s LGE-CMR 2D slices were segmented via this method, they were stacked, all voxels outside the left ventricle myocardium were zeroed out and the slices were sorted apex-to-base using DICOM header information and step-interpolated on a regular 64 × 64 × 12 grid with voxel dimensions 2.5 × 2.5 × 10 mm. These dimensions were chosen to make all patient volumes consistent with minimal interpolation from the original resolution while allowing enough room to avoid truncating the left ventricle. Finally, the input to the neural network model consisted of a two-channel volume (that is, 64 × 64 × 12 × 2). The first channel was a one-hot encoding of the myocardium and blood pool masks. The second channel had zeros outside of the myocardium and the original CMR intensities on the myocardium, linearly scaled by multiplication with half the inverse of the median blood pool intensity in each slice. To mitigate overfitting, train-time data augmentation was performed on the images, specifically 3D in-plane rotations in increments of 90° (to avoid artifacts) and panning of the ventricle within the 3D grid. The clinical covariate data were de-meaned and scaled by the standard deviation. Survival model Statistical fit For each patient i, the outcome data were the pair (X_i, Δ_i), where X_i is the minimum between the time to SCDA, T_i, and the (right) censoring time, C_i, after which either follow-up was lost or the patient died due to a competing risk. The outcome Δ_i is 1 if the patient had the arrhythmic event before they were censored (T_i ≤ C_i) and 0 otherwise. We estimated the (pseudo-)survival probability function S_i(t), the probability that the time to SCDA exceeds t. We modeled the T_i values as independent, each having a cause-specific hazard rate 34 based on the log-logistic distribution with location parameter μ_i and scale parameter σ_i, such that \({S}_{i}(t;{\mu }_{i},{\sigma }_{i})=1/\{1+\exp [(\log t-{\mu }_{i})/{\sigma }_{i}]\}\).
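For concreteness, this log-logistic survival function can be evaluated directly in a few lines of NumPy. The sketch below is illustrative only (it is not code from SSCAR), and the example parameter values are hypothetical.

```python
import numpy as np

def log_logistic_survival(t, mu, sigma):
    """S(t) = P(T > t) = 1 / (1 + exp((log t - mu) / sigma))."""
    return 1.0 / (1.0 + np.exp((np.log(t) - mu) / sigma))

# Hypothetical patient: median predicted time-to-SCDA of 8 years
# (mu = log 8) and scale sigma = 0.5; probability of remaining
# event-free beyond 5 years:
print(log_logistic_survival(5.0, np.log(8.0), 0.5))  # ~0.72
```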
The patient-specific parameters μ_i and σ_i were modeled as outputs of neural networks applied to LGE-CMR images and clinical covariates, trained by minimizing the loss function given by the negative log-likelihood: $$-\log {{{\mathcal{L}}}}=\mathop{\sum}\limits_{i}\left\{-{\delta }_{i}\frac{\log {x}_{i}-{\mu }_{i}}{{\sigma }_{i}}+{\delta }_{i}\log {\sigma }_{i}+(1+{\delta }_{i})\log \left[1+\exp \left(\frac{\log {x}_{i}-{\mu }_{i}}{{\sigma }_{i}}\right)\right]\right\},$$ where x_i is the observed time and δ_i is the censoring status (1 if the event was observed and 0 otherwise). With μ_i and σ_i estimated, the patient-specific survival functions were given by S_i(t) as above. Performance metrics The all-time performance of the models was evaluated using two measures. The first was Harrell’s c-index 14 with the patient-specific μ_i parameters as the risk scores (\(\exp ({\mu }_{i})\) is the median of the log-logistic distribution) to gauge the model’s risk discrimination ability. The second was the integrated Brier score 15, which is defined as the time-average of the mean squared error (MSE) between the true 0/1 outcome and the predicted outcome probability and gauges both probability calibration and discrimination. Both measures were adjusted for censoring, corrected by weighting with the inverse probability of censoring, and calculated for data before a given cutoff time τ 35; if unspecified, τ = 10 years, corresponding to the maximum event time in the dataset. Metrics derived from the confusion matrix (for example, precision and recall) were computed at several time points (τ = 2, 3… years). Probability thresholds at these times were selected by maximizing the F-score (for precision, recall and F-score) or Youden’s J statistic (for sensitivity, specificity and balanced accuracy) on the training data. Of note, to preserve consistency in evaluation between the internal and external cohorts, metrics computed on the external cohort were not covariate adjusted, potentially underestimating performance 36. Neural network architecture SSCAR is a supervised survival analysis regression model composed of two sub-networks, each operating on different input types (Fig. 1): a convolutional sub-network (‘CMR’), which takes the LGE-CMR images as inputs, and a dense sub-network (‘covariate’), which uses the clinical covariate data. Feature extraction in the CMR sub-network from the LGE-CMR images was achieved by a 3D convolutional encoder–decoder model. The encoder used a sequence of 3D convolutions and pooling layers, followed by one dense layer to encode the original 3D volume into a lower-dimensional vector. Non-linear activation functions and dropout layers were added before each downsampling step. The encoding was further used for two purposes: survival and reconstruction. For the survival branch, the encoding was first stratified into one of r (learned) risk categories (Supplementary Table 2) and then fed to a two-unit dense layer to predict, for each patient, a set of two parameters, location μ and scale σ, which fully characterized the probability distribution of the patient’s log-time to SCDA (see the ‘Statistical fit’ section), followed by a bespoke activation function. This activation function clipped \(\ln \mu\) on [−3, 3] and clipped σ from below at σ_min, where σ_min was found such that the difference between the 95th and 5th percentiles of the predicted T_SCDA distribution was no less than 1 month. This survival activation function effectively restricted the ‘signal-to-noise’ ratio μ/σ.
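A minimal NumPy sketch of this censored negative log-likelihood is given below for illustration; SSCAR implements it as a neural network loss, and, as in the formula above, the per-observation constant log x_i is dropped because it does not affect optimization.

```python
import numpy as np

def neg_log_likelihood(x, delta, mu, sigma):
    """Censored negative log-likelihood for the log-logistic model.

    x     : observed times (event or censoring times), shape (n,)
    delta : event indicator, 1 = SCDA observed, 0 = right-censored
    mu    : predicted location parameters on the log-time scale
    sigma : predicted scale parameters (> 0)
    """
    z = (np.log(x) - mu) / sigma
    log1pez = np.logaddexp(0.0, z)  # log(1 + exp(z)), numerically stable
    # delta = 1 contributes -log f(x); delta = 0 contributes -log S(x)
    return np.sum(-delta * z + delta * np.log(sigma) + (1.0 + delta) * log1pez)
```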
For the purpose of reconstruction, the encoding was decoded via a sequence of transposed convolutions to re-create the original volume. Feature extraction from the clinical covariate data was performed using a sequence of densely connected layers, followed by a dropout layer to prevent overfitting. The resulting tensor used a similar path to the one followed by the convolutional encoding to eventually map to the two survival parameters. Finally, once the two sub-networks were trained, they were frozen and joined using a learned linear combination layer to ensemble the survival predictions. The predicted survival parameters (location and scale) aimed to minimize the aforementioned negative log-likelihood function for the log-logistic distribution, accounting for censoring in the data and class imbalance. The reconstructed output of the CMR sub-network minimized the MSE to the original input. Its contribution to the total loss was learned to provide regularization to the imaging features extracted, ensuring that the survival fit relied on features able to reconstruct the original image. Both stochastic gradient descent (SGD) and Adam 37 optimizers were used. All code was developed in Python 3.7 using Keras 2.2.4 (ref. 38), TensorFlow 1.15 (ref. 39), numpy 1.6.2, scipy 1.2.1, openCV 3.4.2, pandas 0.24.2 and pydicom 1.2.2. Each train/evaluate fold took 3–5 minutes on an Nvidia Titan RTX graphics processing unit. Training and testing The entire model development and internal validation were performed using the LVSPSCD cohort. After a hyperparameter tuning step, the best model architecture was then used on the entire internal validation set to find the best neural network weights. As the ensembling layer was hyperparameter-free, it did not require hyperparameter tuning. Hyperparameter tuning A hyperparameter search was performed using the set of parameter values described in Supplementary Table 2, given the vast number of hyperparameter configurations available to define the model architectures. The package hyperopt 0.1.2 (ref. 40) was used to sample parameter configurations from the search space using the tree-structured Parzen estimator (TPE) algorithm to minimize the average validation loss resulting from a stratified ten-times-repeated ten-fold cross-validation process. The maximum number of iterations was 300 for the covariate sub-network and was lowered to 100 for the CMR sub-network, given its much higher capacity. Each fold was run using early stopping based on the loss value on a withheld 10% portion of the training fold, with a maximum of 2,000 epochs (20 gradient updates per epoch). In hyperparameter tuning, models were optimized using SGD with a learning rate of 0.01 (the default value in the neural network package used). The architecture with the highest Harrell’s c-index 14 was selected. Hyperparameters deemed to have little effect on learning (for example, maximum number of epochs) were fixed. Convolutional kernel size and the activation function for convolutions were kept at the default values in the neural network package used. The batch size was set to the highest value, given the memory constraints of our hardware.
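As a rough illustration of how such a TPE-driven search is typically set up with hyperopt, consider the sketch below. The search-space entries and the stand-in objective are hypothetical placeholders, not the actual values from Supplementary Table 2.

```python
from hyperopt import Trials, fmin, hp, tpe

# Hypothetical search space; the real one is given in Supplementary Table 2.
space = {
    "dropout": hp.uniform("dropout", 0.0, 0.5),
    "dense_units": hp.choice("dense_units", [16, 32, 64]),
    "risk_categories": hp.choice("risk_categories", [2, 3, 4]),
}

def objective(params):
    # Stand-in for the mean validation loss of a ten-times-repeated
    # ten-fold cross-validation run with these hyperparameters.
    return (params["dropout"] - 0.2) ** 2 + 1.0 / params["dense_units"]

best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=300,  # 100 for the higher-capacity CMR sub-network
            trials=Trials())
print(best)
```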
Internal validation and external test Internal model performance was assessed using ten repetitions of stratified ten-fold cross-validation on the LVSPSCD cohort. Early stopping based on the c-index on a withheld 10% subset was implemented, with a maximum training of 2,000 epochs (20 gradient updates per epoch). The optimizer was Adam with a learning rate of 10−5 for the CMR sub-network, 5 × 10−4 for the covariate sub-network and 0.01 for the ensemble. A final model was trained with all the available LVSPSCD data and tested on the PRE-DETERMINE cohort. Of note, the final model shares the same architecture and training parameters with all the models in the 100 internal data splits but has different (fine-tuned) weights, which are derived using the entire internal dataset. To estimate CIs on the external cohort, the same cross-validation process was applied to the PRE-DETERMINE cohort, supplementing the training data in each fold with the LVSPSCD cohort. Approximate normal CIs were constructed using the 100 folds. Gradient-based interpretation of SSCAR The trained network weights in SSCAR were interpreted for both the covariate and CMR sub-networks using the gradients of outputs with respect to intermediary neural network internal representations of the data. For the CMR sub-network, we adapted Grad-CAM 41 to work on regression problems and applied it to SSCAR by performing a weighted average of the last convolutional layer feature maps, where the weights were averages of gradients of the location parameter output with respect to each channel. The result was then interpolated back to the original image dimensions and overlaid to obtain the gradient maps shown (Fig. 4a, bottom row). For the covariate sub-network, the gradient of the location parameter output was taken with respect to each of the inputs and averaged over three groups: all patients, patients with SCDA and patients with no SCDA.
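The snippet below sketches this regression-adapted Grad-CAM computation in the modern TensorFlow API (SSCAR itself was written against TensorFlow 1.15, so this is an illustrative restatement, not the project code); `cmr_model`, `volume` and the layer name are hypothetical placeholders.

```python
import numpy as np
import tensorflow as tf

def grad_cam_location(cmr_model, volume, conv_layer="last_conv"):
    """Grad-CAM adapted to regression: channel weights are the averaged
    gradients of the predicted location parameter mu with respect to the
    feature maps of the last convolutional layer."""
    grad_model = tf.keras.Model(
        cmr_model.inputs,
        [cmr_model.get_layer(conv_layer).output, cmr_model.output])
    with tf.GradientTape() as tape:
        maps, preds = grad_model(volume[np.newaxis])
        mu = preds[:, 0]                 # assumes (mu, sigma) output order
    grads = tape.gradient(mu, maps)      # d(mu) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2, 3))   # per-channel average
    cam = tf.reduce_sum(weights[:, None, None, None, :] * maps, axis=-1)
    # No ReLU here: for regression, negative contributions stay informative.
    return cam.numpy()[0]  # interpolate to image size and overlay downstream
```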
Statistical analysis All values reported on the internal validation dataset were averages over 100 data splits resulting from a ten-times-repeated ten-fold stratified cross-validation scheme. Values reported on the external test dataset represent a single evaluation on the entire set. All CIs were normal approximations resulting from the aforementioned 100 splits. In computing CIs for the external test set, the same procedure was used on all available data, ensuring that test folds came exclusively from the external dataset. Error bars are standard errors, with the sample standard deviation estimated from the 100 splits. The correlation P value was based on the exact distribution under the bivariate normal assumption. Covariate P values are based on the two-sample Welch’s t-test 42 for continuous variables and the Mann–Whitney U-test for categorical variables. Cox proportional hazards analysis was performed using the Python lifelines 0.25.5 (ref. 43) package; it included a hyperparameter sweep for the ℓ1 and ℓ2 regularization terms and followed the same train/test procedure as the neural network models. Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability Patient data used in this manuscript cannot be made publicly available without further consent and ethical approval, owing to privacy concerns. The CMR images and patient clinical data can be provided by the authors pending Johns Hopkins University institutional review board and Brigham and Women’s Hospital institutional review board approval and a completed material transfer agreement. Requests for these data should be sent to N.A.T. and/or C.M.A. Code availability The code for this project is available under the Johns Hopkins University Academic Software License Agreement at . Change history 28 April 2022: A Correction to this paper has been published.
A new artificial intelligence-based approach can predict, significantly more accurately than a doctor, if and when a patient could die of cardiac arrest. The technology, built on raw images of patients' diseased hearts and patient backgrounds, stands to revolutionize clinical decision making and increase survival from sudden and lethal cardiac arrhythmias, one of medicine's deadliest and most puzzling conditions. The work, led by Johns Hopkins University researchers, is detailed today in Nature Cardiovascular Research. "Sudden cardiac death caused by arrhythmia accounts for as many as 20 percent of all deaths worldwide and we know little about why it's happening or how to tell who's at risk," said senior author Natalia Trayanova, the Murray B. Sachs Professor of Biomedical Engineering and Medicine. "There are patients who may be at low risk of sudden cardiac death getting defibrillators that they might not need and then there are high-risk patients that aren't getting the treatment they need and could die in the prime of their life. What our algorithm can do is determine who is at risk for cardiac death and when it will occur, allowing doctors to decide exactly what needs to be done." The team is the first to use neural networks to build a personalized survival assessment for each patient with heart disease. These risk measures provide, with high accuracy, the chance of sudden cardiac death over 10 years and when it is most likely to happen. The deep learning technology is called Survival Study of Cardiac Arrhythmia Risk (SSCAR). The name alludes to cardiac scarring caused by heart disease that often results in lethal arrhythmias, which is the key to the algorithm's predictions. The team used contrast-enhanced cardiac images, which visualize scar distribution, from hundreds of real patients with cardiac scarring at Johns Hopkins Hospital to train an algorithm to detect patterns and relationships not visible to the naked eye. Current clinical cardiac image analysis extracts only simple scar features like volume and mass, severely underutilizing what's demonstrated in this work to be critical data. "The images carry critical information that doctors haven't been able to access," said first author Dan Popescu, a former Johns Hopkins doctoral student. "This scarring can be distributed in different ways and it says something about a patient's chance for survival. There is information hidden in it." The team trained a second neural network to learn from 10 years of standard clinical patient data, 22 factors such as patients' age, weight, race and prescription drug use. The algorithm's predictions were not only significantly more accurate on every measure than doctors'; they were also validated in tests with an independent patient cohort from 60 health centers across the United States, with different cardiac histories and different imaging data, suggesting the platform could be adopted anywhere. "This has the potential to significantly shape clinical decision-making regarding arrhythmia risk and represents an essential step towards bringing patient trajectory prognostication into the age of artificial intelligence," said Trayanova, co-director of the Alliance for Cardiovascular Diagnostic and Treatment Innovation. "It epitomizes the trend of merging artificial intelligence, engineering, and medicine as the future of healthcare." The team is now working to build algorithms to detect other cardiac diseases.
According to Trayanova, the deep-learning concept could be developed for other fields of medicine that rely on visual diagnosis. The team from Johns Hopkins also included: Bloomberg Distinguished Professor of Data-Intensive Computation Mauro Maggioni; Julie Shade; Changxin Lai; Konstantinos Aronis; and Katherine Wu. Other authors include: M. Vinayaga Moorthy and Nancy Cook of Brigham and Women's Hospital; Daniel Lee of Northwestern University; Alan Kadish of Touro College and University System; and David Ouyang and Christine Albert of Cedars-Sinai Medical Center.
Synthesis studies transform waste sugar for sustainable energy storage applications
Biorefinery facilities are critical to fueling the economy, converting wood chips, grass clippings, and other biological materials into fuels, heat, power, and chemicals. A research team at the US Department of Energy's (DOE's) Oak Ridge National Laboratory has now discovered a way to create functional materials from the impure waste sugars produced in the biorefining processes. Using hydrothermal carbonization, a synthesis technique that converts biomass into carbon under high temperature and pressure conditions, the team transformed waste sugar into spherical carbon materials. These carbon spheres could be used to form improved supercapacitors, which are energy storage devices that help power technologies including smartphones, hybrid vehicles, and security alarm systems. The team's results are published in Scientific Reports, a Nature research journal. "The significant finding is that we found a way to take sugar from plants and other organic matter and use it to make different structures," said Amit Naskar, a senior researcher in ORNL's Materials Science and Technology Division. "Knowing the physics behind how those structures form can help us improve components of energy storage." By modifying the synthesis process, the researchers created two varieties of the novel carbon spheres. Combining sugar and water under pressure resulted in solid spheres, whereas replacing water with an emulsion substance (a liquid that uses chemicals to combine oil and water) typically produced hollow spheres instead. "Just by substituting water for this other liquid, we can control the shape of the carbon, which could have huge implications for supercapacitor performance," said Hoi Chun Ho, a Ph.D. candidate working with Naskar at the Bredesen Center for Interdisciplinary Research and Graduate Education, a joint venture of ORNL and the University of Tennessee, Knoxville. The team also discovered that altering the duration of synthesis directly affected the size and shape of the spheres. To further explore the discrepancies between solid and hollow carbon structures, the team ran synthesis simulations on the Cray XK7 Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at ORNL. They also used transmission electron microscopy (TEM) and small-angle X-ray scattering (SAXS) tools at the Center for Nanophase Materials Sciences (CNMS), another DOE Office of Science User Facility, to characterize the capabilities and structure of the carbon samples. "We wanted to determine what kind of surface area is good for energy storage applications, and we learned that the hollow spheres are more suitable," said ORNL researcher Monojoy Goswami of CNMS and the Computer Science and Engineering Division. "Without these simulations and resources, we wouldn't have been able to reach this fundamental understanding." With these data, the team tested a supercapacitor with electrodes made from hollow carbon spheres, which retained about 90 percent capacitance (the ability to store an electric charge) after 5,000 charge cycles. Although supercapacitors cannot store as much energy as batteries, they have many advantages over batteries, such as faster charging and exceptionally long lifetimes. Some technologies contain both batteries to provide everyday energy and supercapacitors to provide additional support during peak power demands.
"Batteries often support smartphones and other electronic devices alone, but supercapacitors can be useful for many high-power applications," Ho said. "For example, if a vehicle is driving up a steep hill with many passengers, the extra strain may cause the supercapacitor to kick in." The pathway from waste sugar to hollow carbon spheres to supercapacitors demonstrates new potential for previously untapped byproducts from biorefineries. The researchers are planning projects to find and test other applications for carbon materials derived from waste sugar such as reinforcing polymer composites with carbon fibers. "Carbon can serve many useful purposes in addition to improving supercapacitors," Ho said. "There is more work to be done to fully understand the structural evolution of carbon materials." Making use of waste streams could also help scientists pursue forms of sustainable energy on a broader scale. According to the ORNL team, biorefineries can produce beneficial combinations of renewable energy and chemicals but are not yet profitable enough to compete with traditional energy sources. However, the researchers anticipate that developing useful materials from waste could help improve efficiency and reduce costs, making outputs from these facilities viable alternatives to oil and other fossil fuels. "Our goal is to use waste energy for green applications," Goswami said. "That's good for the environment, for the biorefinery industry, and for commerce." | A research team at Oak Ridge National Laboratory has discovered a way to convert waste sugars produced in biorefining processes into functional materials, specifically spherical carbon spheres. These carbon spheres can be used to form improved supercapacitors, which are energy storage devices that can power technologies such as smartphones and hybrid vehicles. The team used hydrothermal carbonization to transform the waste sugar into carbon under high temperature and pressure conditions, and found that modifying the synthesis process can control the shape of the carbon, which could have huge implications for supercapacitor performance. The researchers also discovered that altering the duration of synthesis affects the size and shape of the spheres, and that hollow spheres are more suitable for energy storage applications. The team's findings have the potential to create new pathways for previously untapped byproducts from biorefineries, and could help improve efficiency and reduce costs, making outputs from these facilities viable alternatives to oil and other fossil fuels. | None | Abstract Biorefineries produce impure sugar waste streams that are being underutilized. By converting this waste to a profitable by-product, biorefineries could be safeguarded against low oil prices. We demonstrate controlled production of useful carbon materials from the waste concentrate via hydrothermal synthesis and carbonization. We devise a pathway to producing tunable, porous spherical carbon materials by modeling the gross structure formation and developing an understanding of the pore formation mechanism utilizing simple reaction principles. Compared to a simple hydrothermal synthesis from sugar concentrate, emulsion-based synthesis results in hollow spheres with abundant microporosity. In contrast, conventional hydrothermal synthesis produces solid beads with micro and mesoporosity. All the carbonaceous materials show promise in energy storage application. 
Using our reaction pathway, perfectly hollow activated carbon spheres can be produced from the waste sugar in the liquid effluence of biomass steam pretreatment units. The renewable carbon product demonstrated a desirable surface area of 872 m2/g and a capacitance of up to 109 F/g when made into an electric double layer supercapacitor. The capacitor exhibited nearly ideal capacitive behavior, with 90.5% capacitance retention after 5,000 cycles. Introduction In the pursuit of a sustainable economy, both renewable energy and renewable chemical practices must be adopted. While the former can be produced from many sources, one feasible option for the combination of renewable energy and chemicals so far emanates from biorefineries 1. However, with the current low oil price, biorefineries need improved profitability to compete with fossil fuels. This would require manufacturing of diversified products and effective utilization of byproducts for materials applications 1, 2. While lignin has been the center of attention for years as a co-product, the most overlooked byproduct is the impure sugar stream in liquid effluence from biorefinery pretreatment plants 3. There exists a state-of-the-art technology that utilizes biomass, pretreated by acids or alkali, to break down amorphous carbohydrates to sugars for better cellulose accessibility 4. The sugar content in the biomass pretreatment liquid effluence can reach a maximum of 50% of the initial hydrolysable carbohydrate from the biomass 5, 6, 7. Therefore, the efficiency of biorefineries can be improved significantly if this waste-stream sugar can be captured in a simple, cost-effective way, without a need for extensive purification, and applied to materials design. However, a challenge for biorefinery co-product generation from the waste stream is the low concentration of soluble carbohydrates 1. Concentrating this liquid effluence using waste heat, which is widely available in biorefineries, is achievable and already a common practice in Kraft pulping mills 1, 6. Utilization of this untapped biomass sugar could be prioritized, and one of the potential applications is its conversion to carbon particles with tunable morphologies as a medium for renewable energy storage, such as electric double layer (EDL) supercapacitors. Over the last decade, there has been growing interest in tailoring carbon sphere structures for different applications in renewable energy sectors. For EDL supercapacitor electrode applications, spherical carbon with a tunable porosity and controllable particle size distribution is of great interest 8, 9, 10, 11, 12, 13. The variety of structures can provide excellent performance for catalysis, adsorption, and energy storage 8, 9, 10, 11, 12, 14, 15. Carbon spheres can be made via several methods 8, 9, 10, 16, 17, 18, 19, 20, 21. One of the most inexpensive methods to date is hydrothermal carbonization (HTC). HTC is a relatively green technology and scalable to industrial production levels 9. The HTC method is applicable to precursors with high moisture content, much like the carbohydrates in pretreatment liquid effluence 22. To better control the porosity, size, and shape of the carbon spheres, different strategies including templating and self-assembly have been employed together with HTC 16. Hard templating, which often uses silica as the template, can be one of the most straightforward ways to synthesize carbon spheres with a controllable morphology 14, 23.
However, for silica hard templating, the most critical step is to obtain a template having a strong interaction with the carbon precursor. The process is very tedious, and the removal of the template requires corrosive chemicals like sodium hydroxide or even hydrofluoric acid 13, which is undesirable for green chemistry applications. On the contrary, soft-template synthesis does not require significant preparation or removal of the template 20, 21. We propose the synthesis of carbonaceous matter in a controllable manner using soft templating, followed by HTC and subsequent high temperature carbonization of the solid HTC derivatives. Emulsion (made from oil, water, and surfactant) and water-based HTC were carried out at different time-scales to study the evolution of spherical carbon products. The two synthesis routes were then correlated with the resulting carbon morphology, porosity, and surface characteristics. Furthermore, the carbon products derived from renewable sugar were investigated as EDL electrodes for supercapacitor application. Supercapacitors store energy based on two different principles: (1) EDL capacitance, from the pure electrostatic charge accumulation at the electrode interface, and (2) pseudo-capacitance, based on fast and reversible redox processes at characteristic potentials 17. Of these two mechanisms, we synthesized and characterized EDL supercapacitors, and hence we discuss only EDL supercapacitors in this article. Surface activation of the carbon products was conducted using KOH. We performed large-scale molecular dynamics (MD) simulations to understand the evolution and characteristics of the pore structures in an emulsion-based system. While previous studies have shown the possibility of producing carbon spheres using HTC from carbohydrates, and even from the hydrolyzed hemicellulose of acid- or alkali-pretreated biomass, the structural evolution with respect to the hydrothermal reaction media is not fully understood 3, 7, 11, 24. In this study, we used sugarcane-derived table sugar as a model molecule to establish the physics and the carbon formation mechanism. We then corroborated our findings using the results from laboratory-made steam-pretreated liquid effluence from woodchips. After establishing that perfectly hollow carbon spheres can be made from pretreatment liquid effluence, we explored the potential application of our model material as supercapacitor electrodes. This study exhibits a pathway to design sustainable energy storage materials from the waste stream of a future biorefinery. Results and Discussion Structure of the carbonaceous materials The HTC of a carbohydrate precursor involves a four-step process – dehydration, condensation, polymerization, and aromatization – as shown schematically in Fig. 1(a) 23, 25. The process is as follows: sugar molecules dehydrate, forming mainly a furfural derivative 26 that decomposes into organic acids and/or other species 27. As the reaction continues, furfural and the excess dehydrated sugar condense and polymerize. The growing “heads” in the polymer chain consist of reactive hydrophilic hydroxyl groups, while the center of the chain becomes relatively dehydrated and hydrophobic. The center of the polymer chain then aromatizes with other chain centers to form a larger hydrophobic core. The aggregated chains, therefore, form spherical, micelle-like structures with a hydrophobic core and a hydrophilic corona 28.
This evolution mechanism of the spherical carbonaceous aggregates was perfectly captured by scanning electron microscopy (SEM) [Fig. 1(b–d)] for our water-based HTC synthesis (abbreviated as the N system, representing ‘No’ surfactants). In Fig. 1, hydrothermal carbonization for 45 minutes (N45) and for 165 minutes (N165) gives rise to micelle-like structures and consequently spherical carbon structures [Fig. 1(c–d)]. However, the 20-minute sample (N20), with insufficient polymerization time, exhibits out-of-equilibrium, amorphous, irregular-shaped structures with no carbon spheres [Fig. 1(b)]. Interestingly, this shows that the micellar morphologies evolve from an irregular-shaped amorphous carbonaceous material to a perfectly shaped spherical particulate carbon as the HTC time is increased. As the dehydrated sugar polymers aromatize during HTC, the polymer cores continuously give off volatiles, lose functional groups, and carbonize further. Thus, as the HTC duration increased, the HTC samples became more carbonized and thermally stable, as seen in the thermogravimetric analysis (TGA) [Figure S1] of the HTC products. The TGA results also show that the carbon yield during high temperature carbonization increases as the HTC duration increases. Therefore, a longer HTC time produces compact carbon spheres. For example, N45 samples show a sphere diameter of 6.3 ± 1.5 μm [Fig. 1(c)], while N165 samples have spheres of diameter 3.3 ± 1.6 μm [Fig. 1(d)]. The N45 carbon, under the transmission electron microscope (TEM), exhibits a perfectly spherical solid structure, as shown in Fig. 1(e). This solid spherical structure has been corroborated by the cross-sectional thickness profile shown in Fig. 1(f). Figure 1 Carbon spheres from the simple HTC synthesis (N carbon samples, made without the use of any surfactant). (a) A schematic representation of the evolution of carbon spheres during simple HTC. SEM images of N samples with (b) 20, (c) 45, and (d) 165 minutes HTC durations, showing the carbon morphology evolving from amorphous irregular-shaped carbon to spherical particulate carbon with increasing HTC time. (e) TEM image of a single N carbon sphere with 45 minutes HTC duration (N45). (f) Thickness profile of the single N45 sphere showing a solid spherical structure. Ferric chloride, primarily used as a catalyst during HTC synthesis, plays a critical role in carbonization and aromatization 29, 30, 31. When hydrolyzed, ferric chloride forms ferric hydroxide or oxide and hydrochloric acid (HCl) in water 31. As such, the resulting acid catalyzes dehydration of the sugar; reducing-sugar intermediates in this system could partially reduce ferric ions to ferrous ions, which are then subsequently oxidized into various iron oxide species. Therefore, it is possible that some spheres may have traces of iron oxide at the micelle core 29, 31. However, most iron was likely removed during the final acid wash of the resulting carbon, except for any iron oxide protected within carbon shells. Inductively coupled plasma optical emission spectrometry (ICP-OES) confirms that all samples obtained after carbonization and acid washing contain <1.2% iron, with the lowest being 0.28% for Y20 [Table S1].
The low iron contents, together with the minimal traces of iron redox peaks in the cyclic voltammetry experiments on the supercapacitor electrodes prepared from these carbonaceous materials, indicate that our energy storage device is primarily an EDL capacitor, and hence pseudo-capacitance plays a minimal role in our results. While carbon sphere formation in HTC is an established mechanism, the detailed procedure for emulsion-medium HTC is far from understood. We denote emulsion-synthesized carbon samples as Y samples, indicating the presence of surfactant and oil in the reaction medium. The mechanism is schematically shown in Fig. 2(a). In the emulsion formed by sodium dodecyl sulfate (SDS) surfactant (1 g/100 ml), water and paraffin oil (4:1 v/v), surfactant molecules form surfactant micelles. First, sugar naturally dissolves in the water phase. As HTC progresses, the sugar molecules in water behave much like those in the N samples and consequently dehydrate, condense, polymerize, and aromatize. The hydrophobicity of the dehydrated and polymerized condensed sugar molecules gradually increases. The hydrophobic polymerized sugar molecules are entropically attracted towards the hydrophobic cores of the surfactant micelles in the emulsion. Note that the hydrophobic tail (dodecyl) and the hydrophilic head (sulfate) of the SDS are denoted by the yellow and red colors, respectively [Fig. 2(a)]. As HTC continues, a layer-by-layer self-assembly of the sugar molecules in the surfactant micelle gives rise to the hollow carbon structures. The spherical carbon samples from emulsion-based HTC after 45 and 165 minutes (Y45 and Y165 carbons, respectively) can be seen in Fig. 2(b,c). The hollow nature of Y45 is revealed by the crumbled sphere in Fig. 2(b,d–f). The TEM image in Fig. 2(e) reveals a sphere having a bright core and dark edges, indicating a hollow structure. In contrast to Fig. 1(f), where the cross-section thickness profile of N45 shows that the center of the N45 sphere is the thickest part, Fig. 2(f) shows the Y45 bead having a hollow structure with a shell thickness of ca. 0.2 µm. Unfortunately, these broken spherical particles were not observed in the Y165 sample due to the longer HTC reaction. For a long enough HTC reaction duration, the carbon shells can grow thicker and thus prevent the spheres from breaking. This mechanism also explains the smaller sizes of the Y45 samples (2.5 ± 0.5 μm) as compared to 3.9 ± 1.2 μm for the Y165 samples, as the longer HTC duration in Y165 allows sugar molecules to be part of a single micelle in a closely packed form. KOH activation of Y45 and Y165 [denoted as aY45 and aY165, respectively, Fig. 2(g,h)] retained their morphologies with a slight increase in size, to 4.0 ± 1.7 μm and 4.14 ± 1.62 μm respectively, compared to their precursors [Fig. 2(b,c)]. The slight increase in sphere sizes after activation is due to the addition of oxygen-containing functional groups on the carbon during activation, thereby expanding the carbon structure. As with N20, the emulsion-based HTC was stopped prematurely after 20 minutes (Y20 sample), before the sugar molecules had a chance to form these hollow spherical structures, giving rise to out-of-equilibrium structures without carbon spheres [Fig. 2(i,j)]. The emulsion-based carbon bead formation will be elaborated further using molecular dynamics simulations in a later section. Figure 2 Carbon spheres from emulsion-based HTC synthesis (Y samples). (a) A schematic representation of the evolution of carbon spheres during emulsion-based HTC.
SEM images of Y samples with (b) 45, (c) 165, and (d) 45 minutes HTC, showing perfectly spherical structures at the longer HTC durations and a revelation of their hollow nature from broken spheres. (e) TEM image of a single Y sphere with 45 minutes HTC (Y45) showing its hollowness. (f) Thickness profile of the single Y45 sphere showing its thin shell, ca. 0.2 µm. SEM images of activated Y samples with (g) 45 minutes and (h) 165 minutes HTC, showing the retention of carbon morphology after activation. The insert of (g) reveals the retention of the hollow nature of the spheres. (i) Y sample with 20 minutes HTC showing the out-of-equilibrium structures, and (j) activated Y sample with 20 minutes HTC, which retained the out-of-equilibrium structures after activation. So far, we have discussed the pathway to produce solid and hollow carbon spheres from simple sugar molecules using water- and emulsion-based HTC techniques. Subsequently, we will discuss the self-assembly of these sugar-derived Y and N samples and their energy storage properties. Prior to that, we wanted to apply the same technique to synthesize carbon from biomass pretreatment liquid effluence, as our long-term goal is to utilize carbon precursors from industrial effluence to produce sustainable energy storage materials. Liquid effluence from steam-pretreated woodchips was prepared and subsequently carbonized following the same emulsion medium and carbonization parameters as the Y165 sample. The resulting carbons show perfectly hollow spherical structures in the SEM and TEM images, as shown in Fig. 3(a–c). These results from the biomass pretreatment liquid effluence prove that hollow carbon spheres can be produced as a co-product from biorefinery wastes, and these carbonaceous materials exhibited the same functionalities as the Y samples. The carbon spheres produced herein are smaller and have thinner shells. It is known in the literature that hydrothermally produced carbon sphere sizes are affected by the type of carbon precursor 32, which can explain our smaller sphere sizes when comparing to the Y samples. For simpler processing, the woodchip pretreatment liquid effluence was emulsified and fed directly into the hydrothermal synthesis reactor without being further concentrated. The sugar extracted in the liquid was estimated to be ~30% of the initial woodchip mass. As a result, the sugar content in the HTC used for the woodchip pretreatment effluent hydrothermal synthesis was lower than that of the Y samples, leading to the thinner carbon shells produced 11. To the best of our knowledge, this is the first controllable synthesis of hollow carbon spheres from the pretreatment liquid effluence of steam-pretreated biomass using emulsion-based HTC. The success of this method demonstrates that our approach has potential for carbonaceous materials synthesis from a wide range of biorefinery waste materials. Figure 3 Spherical hollow carbon from steam-pretreated woodchip liquid effluence. (a) SEM image. (b) and (c) TEM images. Molecular dynamics simulation We believe that there is a mixture of hollow and solid spheres obtained during emulsion-based HTC, as not all Y45 and Y165 spherical carbons are formed with the assistance of surfactant micelles and therefore not all of them are hollow. To further examine the coexistence of hollow and solid spherical beads in emulsion HTC, we performed coarse-grained molecular dynamics (CGMD) simulations.
The underlying principle behind the structure formation conjectured in the previous section can also be verified using computational modeling. The simulations were carried out using the LAMMPS package 33 in a canonical ensemble (see the Methods section for computational details). In an emulsion system where the surfactant number density is at or above the critical micelle number, the surfactants start forming micelles at an early simulation stage. Figure 4(a) shows the formation of such a micelle as early as 3 million simulation time steps. Figure 4(b–d) shows the evolution of the carbon structures at 4 million, 5 million, and a fully equilibrated 8 million simulation time steps. At the beginning of the simulations (the beginning of HTC in experiments), between 3 and 4 million time steps, it can be assumed that the charges on the polymer chains (sugar) are not fully stripped off. Therefore, when polymer (sugar) molecules are near the surfactant micelle, the micelle corona (red beads) attracts the charges on the polymer chains. As HTC progresses, the polymer chains get further dehydrated and gradually become hydrophobic. These hydrophobic backbones are then absorbed by the surfactant micelle cores due to strong interactions between the many surfactant tails (which are hydrophobic) and the hydrophobic polymer chains. The evolution from Fig. 4(b) to Fig. 4(c) shows a gradual change towards fully spherical beads consisting of surfactant and polymer molecules. The equilibrium structure in Fig. 4(d) consists of spherical beads formed by surfactant heads and surfactant tails (red and yellow) along with the polymer chains (grey). Concurrently, in Fig. 4(a–d), we also observe the progress of a separate bead formation consisting of polymer molecules (grey) only. As these polymer chains are far away from the surfactant micelle, the interaction between those polymer molecules and the surfactants is unfavorable. As a result, these polymer chains form individual beads with other polymer molecules only. The computer simulations thus show the presence of both hollow and solid spheres in an emulsion system. Figure 4 CGMD simulations of surfactant-polymer mixtures replicating the surfactant-sugar emulsion HTC experiment. The top panel shows the evolution of the bead formation inside the central simulation cell at (a) 3 million, (b) 4 million, (c) 5 million and (d) 8 million simulation time steps. The red and yellow spheres represent surfactant head and tail segments. The grey spheres represent the polymer molecules (the sugar derivative in the experiment). The images in (e) and (f) exhibit a single bead formed within a surfactant micelle, as shown in the blue circle in (d), and the same bead with the surfactants stripped off, respectively. The intermolecular structure factor S(Q) for the bulk and near-surfactant polymer molecules is plotted in (g), in red and blue, for the high-Q range. The bulk and near-surfactant polymer molecules are also circled in (d), in red and blue, respectively. The low-Q profiles of S(Q) for the bulk and near-surfactant polymers are shown in green and magenta, respectively. The inset in (g) shows snapshots for a completely stripped-off surfactant system. The inset of Fig. 4(g) shows only the carbonaceous spherical structure after the surfactant molecules are pyrolyzed off at high temperature.
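To make the structure-factor analysis discussed around Fig. 4(g) concrete, a minimal NumPy sketch for estimating an isotropically averaged S(Q) from bead coordinates is given below; it is an illustration of the standard calculation, not the analysis code used in this work.

```python
import numpy as np

def structure_factor(positions, q_values, n_dirs=64, seed=0):
    """Isotropically averaged S(Q) = (1/N) sum_ij exp(-i Q . r_ij),
    computed as |sum_j exp(-i Q . r_j)|^2 / N averaged over random
    Q directions. `positions` is an (N, 3) array of coordinates."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(3, n_dirs))
    dirs /= np.linalg.norm(dirs, axis=0)          # unit Q directions
    s_q = np.empty(len(q_values))
    for k, q in enumerate(q_values):
        phases = positions @ (q * dirs)           # (N, n_dirs) of Q . r_j
        amplitudes = np.exp(-1j * phases).sum(axis=0)
        s_q[k] = np.mean(np.abs(amplitudes) ** 2) / len(positions)
    return s_q
```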
Two types of spheres are observed in the emulsion-based technique: solid spheres (shown in the red circle), formed only by sugar-aggregated (hydrogen-bonded) beads, and hollow spheres (shown in the blue circle), formed by sugar polymers absorbed by surfactant micelles. Because of the difference in their formation mechanisms, we expect a difference in their morphology too. The solid spheres, away from the surfactant micelles, show smooth surfaces, whereas the hollow spheres exist in the surfactant micelle environment. A closer look at a single bead formed by a sugar-absorbing surfactant micelle reveals that the surfactants serve as templates [as in Fig. 4(e)], resulting in rough surfaces, as shown in Fig. 4(f), once the surfactants are completely burnt off. The difference in surface roughness and porosity is reiterated in detail in the structural investigation in Fig. 4(g). We show the inter-particle structure factor, defined as \(S(Q)=\frac{1}{N}\sum _{ij}{e}^{-i{\bf{Q}}\cdot {{\bf{r}}}_{ij}}\), between the polymer molecules only. Here, Q is the wave vector, r_ij is the separation vector between two particles and N is the total number of particles. The near-surfactant S(Q) (blue lines) is calculated from polymer molecules within 2σ of the surfactant molecules, while the bulk S(Q) (red lines) is obtained from molecules more than 2σ away, where σ is the Lennard-Jones diameter of each monomer. The near-surfactant S(Q) represents polymer molecules from only those polymer spherical beads that are agglomerated within surfactant micelles. The bulk S(Q) represents polymer molecules that are not associated with the surfactant micelles. As our focus is to understand the molecular-level self-assembly in the bulk and near the surfactant, we concentrate on S(Q) at high Q, i.e., shorter length-scale properties. The polymer molecules in the bulk show wide and weaker peaks (red), representing essentially agglomerated structures with a broad distribution. The bulk molecules are parts of the spheres formed solely by polymer chains, and their wider peaks represent swollen structures with a relatively smoother surface. For polymers near the surfactant molecules (blue lines), well-defined structures are observed, with peaks at 0.33, 0.44, 0.55, 0.66 σ−1, and so on. The structures show equally spaced molecules, representing a layering of the polymer molecules within the micelle. This suggests that when the polymers enter the micelle core, they position themselves between the surfactant molecules, thereby giving rise to the layering structure. As the surfactants are pyrolyzed off, the empty spaces left behind result in molecular-level porous structures. This molecular-level investigation corroborates our hypothesis that the carbon morphology can be controlled using different formation mechanisms. The simulations strongly support the collected electron micrographs and the proposed mechanism discussed in the previous sections. Gas adsorption-desorption and surface characteristics of the carbon samples Based on the MD simulations, we expect the rougher surfaces of the emulsion-based Y samples to generate higher surface areas due to (1) the hollow nature of the carbon spheres and (2) the templating effect of the surfactants. The surface areas and pore volumes obtained from gas adsorption-desorption experiments are given in Table 1. The surface areas of Y45 and Y165 are approximately double those of N45 and N165.
While no bead formation was observed with the 20-minute HTC samples, the surfactant templating effect alone exfoliates the Y20 carbons, resulting in a notably higher surface area than that of the N20 carbons. In terms of HTC duration, surface area and pore volume both decrease as the HTC duration increases, for both Y and aY samples (Table 1), owing to the collapse and consolidation of the layered pore structure. The trend was not as obvious with the N samples, as all three samples have surface areas of around 300 m2 g−1. To further increase the surface area of the carbon samples, and eventually their capacitance performance, we activated the Y samples with KOH. KOH acts as both the activating and templating agent, creating new pores and enlarging existing pores. As it heats up, KOH melts at 360 °C, infiltrating the macropores of the carbon. As an activating agent, KOH etches new micropores and mesopores on the carbon surfaces. After washing, these emptied pores are exposed, changing the surface area and pore size distribution considerably 34. The mechanism of KOH activation can be complex. Generally speaking, it can be represented as 6KOH + 2C = 2K + 3H2 + 2K2CO3 35, 36. The highest surface areas and pore volumes were achieved by KOH activation. aY20, aY45, and aY165 give rise to surface areas of 1,495, 1,384, and 1,037 m2 g−1 with pore volumes of 0.627, 0.577, and 0.418 cc g−1, respectively. Table 1 Surface area and pore volume measured from the nitrogen adsorption isotherm. The molecular-scale templating effect of the surfactants can also be seen in the increase in microporosity observed in the isotherms in Fig. 5(a–c). The steep initial rise at low relative pressure suggests micropore filling 37. Higher amounts of micropore filling can be seen for the Y samples (Fig. 5(a)). The Y and aY isotherms have the shape of Type I isotherms, while the N isotherms resemble Type II isotherms 38. The micropore and mesopore percentages are also quantified in Table 1. It should be noted that the MD simulation fails to predict the mesoporosity of the solid beads of the N samples; the MD simulations were performed on an emulsion-based system and hence cannot predict the N sample morphology accurately. A separate MD simulation of the N system was not performed to quantify its mesoporosity. For the Y and aY isotherms, the relatively flat plateau region suggests a limited amount of multilayer filling, that is, a limited presence of meso- or macropores. The N isotherms, on the contrary, have noticeable hysteresis, indicating capillary condensation in mesopores 39. These features are also observed in the pore-size distribution analysis in Fig. 5(d) and its magnified version (from 2 nm onward) in Fig. 5(e). The mesoporous characteristics of the N samples can be explained as follows: the N sugar polymers were polymerized into their final shape in bulk, without being templated by the hydrophobic segments of a surfactant. Thus, the polymerized sugars in the N samples self-assembled randomly without being fully dehydrated. During the high temperature carbonization step, the retained functional groups evolved as volatiles, as shown in the TGA plots in Figure S1 between ca. 200 °C and ca. 700 °C, activating and creating channels within the structure. This gives rise to the mesoporous structures in the final carbonized products. In contrast, the Y samples did not have mesopores because micelle formation with the surfactant caused the sugar polymers to dehydrate before being incorporated into the hydrophobic core.
This is evident from the smaller weight loss in the TGA plots in Figure S1, compared with the N counterparts, between ca. 400 °C and ca. 700 °C. Hence, this mechanism suppresses the amount of mesopores that can form in emulsion-based HTC, as observed with the Y samples. To summarize, the layering between surfactants and sugar-derived carbonaceous polymers gave rise to higher surface areas with smaller micropores for the Y samples, while the evolution of available volatiles from the activated N samples created larger mesopore channels during the high-temperature carbonization step.

Figure 5 Carbon surface characteristics based on gas adsorption-desorption isotherms and porosity. (a), (b) and (c) Adsorption-desorption isotherms for the aY, Y and N series samples, respectively. (d) Pore size distributions for the different samples determined using the Quenched Solid Density Functional Theory (QSDFT). (e) Pore size distributions at longer length scales, from 2 nm to 200 nm. The color schemes are shown in the legends.

Small-angle X-ray scattering (SAXS) characterization of two selected samples was carried out to investigate the porous structure within the samples. Figure 6 depicts the SAXS curves for the carbonized N45 and Y45 samples. One of the most notable differences is that Y45 exhibits a scattering shoulder in the high-Q region, i.e., 0.1 < Q < 0.6 nm⁻¹, while N45 merely shows an asymptotic decay in scattering intensity. Here, Q is the magnitude of the scattering vector, defined as Q = |Q| = 4πλ⁻¹ sin θ, with λ and θ being the wavelength of the incident X-ray beam and half of the scattering angle, respectively. The high-Q scattering feature of Y45 indicates the existence of nanometre-scale structures created during carbon synthesis with the assistance of the surfactant, where hydrophobic carbon precursors accumulated inside the micelles and these segments were separated by the hydrophobic surfactant tails and/or the oil molecules. Because the high-Q scattering shoulder follows Intensity ∝ Q⁻¹, we employed the Guinier-Porod model for cylindrical objects to fit it 40 (a fitting sketch is given below). The fit indicated the existence of cylindrical pores with an average diameter of 8.6 Å (0.86 nm). Note that both N45 and Y45 exhibit a power law with a fractal dimension of approximately 3 in the low-Q region, revealing the presence of a three-dimensional (3D) network structure, which we anticipate to be 3D bridged pores of the samples. The N45 sample shows a steeper slope than the Y45 sample in the low-Q region, indicating larger porous structures within the N45 samples (see Fig. 6). These measurements corroborate the measured pore volumes and surface areas shown in Table 1; specifically, the pore volumes and surface areas of N45 and Y45 are ca. 0.416 vs. 0.347 cc g⁻¹ and ca. 273 vs. 743 m² g⁻¹, respectively.

Figure 6 Small-angle X-ray scattering (SAXS) data for the N45 and Y45 carbon samples.
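As referenced above, the pore size implied by the rod-like shoulder can be sketched numerically. Rather than the full Guinier-Porod fit used here, the snippet below applies the closely related modified-Guinier (cross-sectional) analysis for cylinders, in which ln(Q·I) versus Q² is linear with slope −Rc²/2 and a solid cylinder of radius R has Rc = R/√2; the input data are synthetic placeholders constructed for a 0.86 nm pore, not the measured SAXS curve.

    import numpy as np

    def cylinder_diameter(q, intensity):
        """Modified-Guinier analysis for rod-like scatterers:
        fit ln(Q*I) vs Q^2; slope = -Rc^2/2, R = sqrt(2)*Rc."""
        slope, _ = np.polyfit(q ** 2, np.log(q * intensity), 1)
        rc = np.sqrt(-2.0 * slope)       # cross-sectional radius of gyration
        return 2.0 * np.sqrt(2.0) * rc   # pore diameter

    q = np.linspace(0.1, 0.6, 30)                       # shoulder region, nm^-1
    i_q = (1.0 / q) * np.exp(-(q * 0.304) ** 2 / 2.0)   # placeholder I(Q) ~ Q^-1 * Guinier
    print(cylinder_diameter(q, i_q))                    # ~0.86 nm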
Supercapacitor application

To this end, we demonstrate the application of these renewable carbonaceous materials in renewable energy storage systems. We used the synthesized porous carbonaceous particles to prepare electrodes for electric double-layer (EDL) supercapacitors. EDL supercapacitors are energy storage devices whose power performance falls between dielectric capacitors and batteries on the Ragone plot, which exemplifies the energy and power relationships of different energy storage devices 41. Unlike batteries and pseudo-capacitors, EDL supercapacitors do not rely on faradaic reactions 42; thus, these supercapacitors offer higher charge-discharge rates and stabilities 43. EDL supercapacitors rely solely on the electrostatic separation between ions in the electrolyte and electrons in the electrodes 44. Using porous carbonaceous materials as electrodes has gathered significant interest 45, 46, especially when they are derived from renewable materials 47, 48, 49.

Figure 7(a,b) displays typical cyclic voltammetry current-voltage (CV) curves and charge-discharge profiles, respectively, for the Y45 sample as an example; the remaining CV and charge-discharge curves can be found in the Supporting Information. The CV curves show symmetrical rectangular shapes, and the charge-discharge profiles show nearly symmetric triangular shapes at all scan rates and current densities, representing good to excellent capacitive performance. Capacitance generally follows the same trend as surface area and pore volume, based on the governing formula for capacitors, C = ε·A/d, where C is the capacitance of the supercapacitor, ε is the product of the electrolyte dielectric constant and the permittivity of free space, A is the surface area between the electrode and electrolyte, and d is the separation distance between the ions in the electrolyte and the electrons in the electrodes. As a result, the amount of charge that can be stored, i.e. the capacitance, increases with increasing accessible surface area 44. Therefore, aY20 and aY45, with the highest surface areas of all samples, give the highest capacitances of up to 113 F g⁻¹. The direct correlation between surface area and capacitance can be observed in Fig. 7(c,d), where a summary of the capacitance values is shown.

The notable exceptions to this trend are the N165 and Y165 samples: Y165 has a higher surface area than N165, but not a higher capacitance. Although many factors could cause this discrepancy, at least one major factor is the pore characteristics of the two samples. The gas adsorption-desorption experiments showed that the N samples have larger pores than the Y samples. The Quenched Solid Density Functional Theory (QSDFT) model showed that Y165 has 56% of its pores smaller than 0.64 nm, the lower limit of our pore size measurements, whereas N165 has only 46% of such small pores. Although the solvated potassium ion (0.31 nm) and the solvated hydroxide ion (0.35 nm) 50 are much smaller than our 0.64 nm limit, we may still speculate that the larger pores in N165 partly compensated its low surface area for capacitance. Many micropores of Y165 could have blocked electrolyte ions from reaching the carbon electrode surface, reducing the effective surface area available for capacitance 51, 52. Mesoporosity, on the other hand, could ease the diffusion of electrolyte ions onto the carbon electrode surfaces, contributing to high capacitance, especially when fast ion diffusion is required, as at high scan rates 50, 53. As a result, N165 and Y165 have similar capacitances at a slow scan rate, but as the scan rate increases, N165 gradually outperforms Y165. Similarly, the other Y samples generally have poorer rate-handling capability than the N samples.
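A minimal sketch of how gravimetric capacitance is extracted from galvanostatic discharge profiles such as those in Fig. 7(b), consistent with the C = 2q/mE formula given later in the Methodology (the factor of 2 reflecting the symmetric two-electrode cell); the numbers are illustrative placeholders rather than measured values.

    def specific_capacitance(current_a, discharge_time_s, mass_g, window_v):
        """C = 2*q/(m*E) for a symmetric two-electrode cell, with q = I*t."""
        charge_c = current_a * discharge_time_s       # accumulated charge, coulombs
        return 2.0 * charge_c / (mass_g * window_v)   # F per gram of carbon

    # e.g. a 2 mA discharge lasting 180 s, 8 mg of carbon, 0.8 V window
    print(specific_capacitance(2e-3, 180.0, 8e-3, 0.8))   # ~112.5 F/g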
Noticeably, aY165 had the worst rate-handling capability, followed by aY45. As with Y165, the first suspect for the poor rate handling was a kinetic limitation arising from the pore size distribution. However, the DFT pore-size results of aY165 and aY45 did not differ much from those of aY20, the only other activated sample, which nevertheless showed good rate-handling capability.

Figure 7 Capacitance measurements using the carbonaceous materials as electrodes of EDL supercapacitors. (a) Cyclic voltammetry IV curves for Y45 at 10, 20, 50, 100 and 200 mV s⁻¹ scan rates; the legends are in mV s⁻¹. (b) Charge-discharge experiments for the Y45 sample at 200, 500, 1000 and 2000 mA g⁻¹ current densities; legends are in mA g⁻¹. (c) Capacitances for all the samples, as shown in the legend color code. (d) Table showing the capacitance values at two different scan rates. (e) Electrochemical impedance spectroscopy (EIS) results and (f) cycle stability of aY20, aY45 and aY165.

The activated samples were further analyzed using electrochemical impedance spectroscopy. The Nyquist plots in Fig. 7(e) exhibit almost vertical lines in the low-frequency region, representing behavior closer to that of an ideal capacitor 17, 54. A shallower slope, closer to 45°, in the mid-to-low-frequency range represents the Warburg resistance, which indicates slow diffusion of ions at the surface of the electrodes 55. One can also obtain a good indication of the equivalent series resistance by extrapolating the vertical portion of the curve to the x-axis of the Nyquist plot 56, 57 (a scripted version of this read-off is sketched below). Notably, the aY165 curve is shifted to the right relative to aY45, with aY20 the furthest to the left. This indicates that the impedance of aY165 is the highest, followed by aY45 and then, closely, aY20. It explains the trend noted earlier in the capacitance versus scan rate curves in Fig. 7(c): the drop is steepest for aY165, followed by aY45, with aY20 showing the shallowest drop. We believe the main reason for these conductivity differences, and thus for the differences in capacitance rate handling, is the differing amounts of metal species in the activated samples. Indeed, the earlier ICP-OES results in Table S1 showed that aY45 and aY165 have the highest amounts of metal species from the iron catalyst and the KOH activation process: aY165 contains the most, 0.897% iron and 0.535% potassium, followed by aY45, and then aY20 with the least, 0.313% iron and 0.211% potassium.

Because of the promising capacitances, 5000-cycle long-term stability was evaluated for all activated samples, as shown in Fig. 7(f). Capacitance retentions of 98.3%, 97.2% and 99.8% were measured for aY20, aY45 and aY165, respectively, revealing high cycle stability for practical applications.

So far, we have shown the supercapacitor properties of samples made from sugar-derived carbonaceous materials. To connect the functionality of sugar-derived activated carbon with real-world waste management, the hollow carbon spheres made from the woodchip pretreatment liquid effluent were activated and characterized. A surface area of 872 m²/g and a pore volume of 0.511 cc/g were obtained. When made into supercapacitor electrodes, a capacitance of 109 F/g was measured at a scan rate of 5 mV/s, with 90.5% capacitance retention after 5000 cycles (Figure S4). This desirable result and the nearly ideal capacitive behavior reinforce the potential for biorefineries to convert their biomass waste, via simple emulsion-based hydrothermal synthesis, into EDL supercapacitor electrodes as a value-added product.
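As a scripted footnote to the impedance analysis above, the equivalent series resistance can be read off by extrapolating the near-vertical low-frequency branch of the Nyquist plot back to the real axis. The sketch below does this with a linear fit; the impedance points are synthetic placeholders, not the measured spectra.

    import numpy as np

    def esr_from_nyquist(z_real, z_imag_neg, n_tail=10):
        """Fit Z' as a linear function of -Z'' over the steep low-frequency
        tail; the intercept at -Z'' = 0 approximates the ESR."""
        idx = np.argsort(z_imag_neg)[-n_tail:]
        slope, intercept = np.polyfit(z_imag_neg[idx], z_real[idx], 1)
        return intercept

    z_imag_neg = np.linspace(1.0, 30.0, 40)        # -Z'' (ohm), placeholder
    z_real = 0.9 + 0.02 * z_imag_neg               # nearly vertical branch
    print(esr_from_nyquist(z_real, z_imag_neg))    # ~0.9 ohm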
Conclusions

We have analyzed the evolution of spherical carbon particles and the pore formation mechanisms of sugars via hydrothermal synthesis. The morphologies of these products can be controlled by modifying the composition of the media and thereby altering the carbonization mechanisms. Both computational and experimental results show an intriguing effect in which the carbon morphology evolves from a poorly defined charcoal material to well-shaped spherical carbon as HTC time increases; the size of the spheres can also be controlled by the HTC duration. Because of the differences in reaction mechanisms, surfactant-loaded precursors in the emulsion medium self-assembled into hollow spheres (Y samples), whereas in the absence of surfactant, in the simple hydrothermal reaction medium, the precursors yielded only solid spheres (N samples). In terms of porosity, the Y samples have higher surface areas and microporosity due to their hollow nature and the layered templating effect of the surfactant molecules. In contrast, the N samples are both microporous and mesoporous, mainly due to the evolution and activating effect of volatiles during carbonization. The carbon surface area analysis, molecular dynamics simulations and measured small-angle X-ray scattering data together reveal the templating effect of surfactants in the emulsion-based hydrothermal synthesis of carbons.

When these carbons were made into EDL supercapacitor electrodes, the observed capacitances correlated very well with the measured surface areas and exhibited excellent capacitive behavior. In particular, the Y samples synthesized for short durations (20–45 minutes) show very high capacitance (ca. 50–80 F/g) even at high scan rates. The best-performing Y samples were then activated with KOH, and the surface areas and capacitances further improved, up to 1495 m² g⁻¹ and 113 F g⁻¹, respectively, with a 98.3% capacitance retention for aY20 after 5000 cycles. Finally, using our reaction pathway, we produced supercapacitor electrodes from the liquid effluent of steam-pretreated woodchips, a typical byproduct of biorefineries. Well-formed hollow spheres can be synthesized from the woodchip effluent, and the resulting activated hollow carbon spheres exhibited a desirable surface area of 872 m²/g and a capacitance of up to 109 F/g, with almost ideal capacitive behavior and 90.5% capacitance retention after 5000 cycles. Thus, we provide a potential route for deriving energy storage materials from a renewable resource.

While the investigation was performed at the laboratory scale, the simplicity of the overall synthesis technique means it can readily be scaled up to industrial standards. The advantage of this method is threefold: (1) the synthesis technique is simple; (2) the method can be used in parallel with biorefinery unit operations; and (3) the cost of biorefineries will eventually decrease, since a waste product can be converted into an energy storage material. We believe this work will influence a change in the practices of biorefineries towards achieving their goal of competing with fossil fuels.

Methodology

Hydrothermal synthesis and carbonization

HTC of carbon was conducted with 120 ml of 0.5 M sucrose (Diamond Crystal, Savannah, GA) and 0.5 M FeCl₃ solution. In the emulsion case, the 0.5 M sucrose and 0.5 M FeCl₃ solution was prepared in an emulsion made by ultrasonicating 1.2 g sodium dodecyl sulfate (SDS), 24 ml paraffin oil (Merck KGaA, Darmstadt, Germany) and 96 ml DI water. The hydrothermal reactions were carried out for 20, 45 or 165 minutes.
Additionally, the liquid effluent of woodchips steam-pretreated at 180 °C for 12 hours (obtained from carpentry waste of eastern Tennessee mixed-hardwood biomass) was made into an emulsion with SDS and paraffin oil, as in the previous case, and then subjected to HTC for 165 minutes with 1.2 g FeCl₃. The amount of carbohydrate in the liquid effluent was estimated at around 2.5 g. All chemicals were purchased from Sigma-Aldrich unless noted otherwise. Synthesis was performed in a 200 mL PPL-lined stainless steel autoclave (Columbia International), which was placed in an oven at 200 °C for the specified synthesis time. The hydrothermally synthesized solid product was then placed in a quartz tube inside a tube furnace and carbonized at 1000 °C for 20 minutes in a nitrogen atmosphere. The carbonized material was then washed with 0.5 M HCl and water.

Activation

KOH was ground with the dried samples in a 2:1 (KOH:sample) ratio. The ground samples were then ramped in a tube furnace under a nitrogen atmosphere at 8 °C per minute to 800 °C and held for 30 minutes. The activated samples were then cooled, washed with water and dried. Activated carbon materials were designated with the prefix "a" (i.e., activated Y20 is named aY20).

Characterization

Scanning electron microscope (SEM) images were collected with a Hitachi S4800. Carbon sphere diameters were measured with the ImageJ software and characterized by the mean and standard deviation of 10 randomly chosen spheres within a sample. Transmission electron microscope (TEM) images were taken using a Zeiss Libra 120 transmission electron microscope operating at 100 kV; samples were dispersed onto a carbon-film-coated copper grid before analysis. Thermogravimetric analysis was performed on a TA Instruments Q500; for each sample, ca. 15 mg was measured on a platinum pan. The temperature was ramped to 105 °C at 10 °C/min, held for 30 min, and then ramped to 1000 °C at 7.5 °C/min in a nitrogen atmosphere. Inductively coupled plasma optical emission spectrometry was outsourced to Galbraith Laboratories Inc., a commercial analytical chemistry laboratory in Knoxville, TN. Nitrogen adsorption-desorption experiments were carried out with a Quantachrome Autosorb iQ at 77 K. Surface area, pore size distribution and pore volume were determined using the Quenched Solid Density Functional Theory (QSDFT) 58.

For capacitance measurements, the carbonaceous material was first mixed with a conductive carbon black (Timcal Super C45) at an 8:1 ratio and then with 10 wt.% aqueous polytetrafluoroethylene (60% dispersion in water). The mixture was then combined with ethanol to form a paste, which was coated on two 7/16-in.-diameter Ni-foam discs and dried overnight. These current collectors were then pressed and used as electrodes in symmetrical two-electrode cells with 6 M KOH as the electrolyte and filter paper as the separator. Two stainless steel rods were used to clamp the electrode-separator-electrode stack, which was housed inside a Teflon Swagelok cell. A VersaSTAT 4 (Princeton Applied Research) was used to perform the cyclic voltammetry, charge-discharge and electrochemical impedance spectroscopy (EIS) experiments, using a 0 to 0.8 V voltage window. Scan rates varied from 10 mV s⁻¹ to 200 mV s⁻¹ and current densities from 200 mA g⁻¹ to 2000 mA g⁻¹. EIS was conducted over a frequency range of 500 kHz to 50 mHz with an amplitude of 10 mV.
The specific capacitances were calculated from C = 2q/mE, where E is the 0.8 V voltage window, m is the mass of the carbon sample used, and q is the accumulated charge calculated with the VersaStudio software. Cycle stability was measured on an Arbin battery cycler (Arbin Instruments) at 500 mA g⁻¹.

Small-angle X-ray scattering (SAXS) data were acquired at the Center for Nanophase Materials Sciences (CNMS) at Oak Ridge National Laboratory on an Anton Paar SAXSess mc². The scattered beam was recorded on a CCD detector (PI-SCX, Roper) with a resolution of 2084 × 2084 pixels and pixel dimensions of 24 × 24 µm²; the data collection time was 20 minutes. For the measurements, the X-rays were generated at 40 kV/50 mA at a wavelength of λ = 1.541 Å (Cu Kα radiation). The X-ray beam was slit-collimated using a Kratky camera, giving a beam size of 18 mm (length) × 0.4 mm (width). The collected SAXS data were desmeared and expressed as intensity versus Q, where Q = (4π sin θ)/λ, after subtraction of the detector dark current and background scattering.

Computational Model

Coarse-grained molecular dynamics (CGMD) simulations were performed on a dilute mixture of sugar and SDS surfactant. The sugar molecules were modeled as short polymer chains of 15 monomeric units, 4 of which are hydrophilic monomers carrying charges on the polymer backbone. The charges make the chain slightly polar, mimicking the polar hydroxyl groups of the sugar molecules; the charge sites are converted to neutral once aggregates are formed, to mimic the fully hydrophobic carbonaceous sugar molecules after HTC. The SDS surfactants were modeled as 12-mer chains with a 1-mer hydrophilic polar head and an 11-mer hydrophobic tail, as has been done in previous CGMD studies 59, 60. While the experiments were performed both in water and in emulsion (with SDS), we performed only one set of simulations, in SDS (the emulsion case in the experiments): because the purpose of the simulation was to understand the self-assembly underlying carbonaceous bead formation, which proceeds irrespective of the presence or absence of SDS, we chose the emulsion system.

The interactions between the neutral monomers were modeled using a Lennard-Jones (LJ) force field (FF), while the charge interactions were modeled using explicit Coulomb interactions. Each monomer bead was represented by a mass, m, and a Lennard-Jones bead diameter, σ; for the simulations we set the mass and LJ diameter to 1 and 0.97, respectively, the same for all monomers. As the variation of m and σ between sugar and surfactant monomers is relatively small, this choice provides the critical self-assembly information without drastically altering the fundamental physics. The model system consisted of 2000 polymer chains and 2000 surfactant molecules in a periodic box of 100σ × 100σ × 100σ at a density of 0.064/σ³. In experiments, the hydrothermal carbonization process strips the charges off the sugar and SDS molecules; therefore, the most realistic way to model hydrothermal carbonization computationally is to strip off the charges after a certain simulation time. Hence, we modeled the carbonization process by stripping the charges from both the polymers and the surfactants after equilibrating for 5 million LJ time steps. By deleting all the charges from the system, the interaction between the monomers becomes solely hydrophobic, representing a purely carbonaceous material (illustrated in the sketch below).
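A minimal sketch of the interaction bookkeeping described above, in reduced LJ units with ε = 1 and σ = 0.97; zeroing the charges (the "carbonization" step) leaves only the hydrophobic LJ term. The charge values and the Coulomb coupling constant are illustrative placeholders, not the parameters of the actual model.

    import numpy as np

    def lj(r, eps=1.0, sigma=0.97):
        """12-6 Lennard-Jones pair energy in reduced units."""
        sr6 = (sigma / r) ** 6
        return 4.0 * eps * (sr6 ** 2 - sr6)

    def coulomb(r, qi, qj, coupling=1.0):
        """Explicit Coulomb pair energy; 'coupling' sets the strength."""
        return coupling * qi * qj / r

    r = np.linspace(0.9, 2.5, 5)       # pair separations, in units of sigma
    q_head, q_monomer = -1.0, 1.0      # illustrative (placeholder) charges
    print(lj(r) + coulomb(r, q_head, q_monomer))   # charged, pre-HTC system
    print(lj(r))                                   # charges stripped: hydrophobic only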
The polymer chains and surfactant molecules undergo self-assembly during equilibration; to reach equilibrium for the fully hydrophobic (no-charge) system, we ran for 3 million additional time steps. All simulation parameters are in reduced units: the temperature is fixed at \(T^{*}={k}_{B}T/\varepsilon =1.0\) and the time step at Δt = 0.01 in LJ time units. Visualizations of the MD trajectories were generated with the VMD code 61, and the structural analysis was performed using our in-house code.
Biorefinery facilities are critical to fueling the economy, converting wood chips, grass clippings and other biological materials into fuels, heat, power and chemicals. A research team at the US Department of Energy's (DOE's) Oak Ridge National Laboratory has now discovered a way to create functional materials from the impure waste sugars produced in biorefining processes.

Using hydrothermal carbonization, a synthesis technique that converts biomass into carbon under high-temperature and high-pressure conditions, the team transformed waste sugar into spherical carbon materials. These carbon spheres could be used to form improved supercapacitors, energy storage devices that help power technologies including smartphones, hybrid vehicles and security alarm systems. The team's results are published in Scientific Reports, a Nature research journal.

"The significant finding is that we found a way to take sugar from plants and other organic matter and use it to make different structures," said Amit Naskar, a senior researcher in ORNL's Materials Science and Technology Division. "Knowing the physics behind how those structures form can help us improve components of energy storage."

By modifying the synthesis process, the researchers created two varieties of the novel carbon spheres. Combining sugar and water under pressure resulted in solid spheres, whereas replacing water with an emulsion substance (a liquid that uses chemicals to combine oil and water) typically produced hollow spheres instead.

"Just by substituting water for this other liquid, we can control the shape of the carbon, which could have huge implications for supercapacitor performance," said Hoi Chun Ho, a Ph.D. candidate working with Naskar at the Bredesen Center for Interdisciplinary Research and Graduate Education, a joint venture of ORNL and the University of Tennessee, Knoxville. The team also discovered that altering the duration of synthesis directly affected the size and shape of the spheres.

To further explore the differences between solid and hollow carbon structures, the team ran synthesis simulations on the Cray XK7 Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at ORNL. They also used transmission electron microscopy (TEM) and small-angle X-ray scattering (SAXS) tools at the Center for Nanophase Materials Sciences (CNMS), another DOE Office of Science User Facility, to characterize the capabilities and structure of the carbon samples.

"We wanted to determine what kind of surface area is good for energy storage applications, and we learned that the hollow spheres are more suitable," said ORNL researcher Monojoy Goswami of CNMS and the Computer Science and Engineering Division. "Without these simulations and resources, we wouldn't have been able to reach this fundamental understanding."

With these data, the team tested a supercapacitor with electrodes made from hollow carbon spheres, which retained about 90 percent of its capacitance (the ability to store an electric charge) after 5,000 charge cycles. Although supercapacitors cannot store as much energy as batteries, they have many advantages over batteries, such as faster charging and exceptionally long lifetimes. Some technologies contain both batteries, to provide everyday energy, and supercapacitors, to provide additional support during peak power demands.
"Batteries often support smartphones and other electronic devices alone, but supercapacitors can be useful for many high-power applications," Ho said. "For example, if a vehicle is driving up a steep hill with many passengers, the extra strain may cause the supercapacitor to kick in." The pathway from waste sugar to hollow carbon spheres to supercapacitors demonstrates new potential for previously untapped byproducts from biorefineries. The researchers are planning projects to find and test other applications for carbon materials derived from waste sugar such as reinforcing polymer composites with carbon fibers. "Carbon can serve many useful purposes in addition to improving supercapacitors," Ho said. "There is more work to be done to fully understand the structural evolution of carbon materials." Making use of waste streams could also help scientists pursue forms of sustainable energy on a broader scale. According to the ORNL team, biorefineries can produce beneficial combinations of renewable energy and chemicals but are not yet profitable enough to compete with traditional energy sources. However, the researchers anticipate that developing useful materials from waste could help improve efficiency and reduce costs, making outputs from these facilities viable alternatives to oil and other fossil fuels. "Our goal is to use waste energy for green applications," Goswami said. "That's good for the environment, for the biorefinery industry, and for commerce." |
Study finds relationships among herbicide-resistant weeds, tillage practices and agricultural greenhouse gas emissions

A new study that combines survey data and cutting-edge computer modeling found that a growing trend in tillage intensity in U.S. corn and soybean production in recent years has led to an increase in greenhouse gas emissions from agricultural fields. The study, published recently in the academic journal Nature Food, drew on years of survey data that asked thousands of U.S. farmers about their tillage practices. The researchers then plugged the relevant data into sophisticated ecosystem models to see how tillage decisions affect soil emissions of greenhouse gases, including carbon dioxide and nitrous oxide.

The survey data indicate farmers relied less on tillage during the period between 1998 and 2008, but that trend began to reverse around 2009, when tillage intensity started to rise. Chaoqun Lu, an Iowa State University associate professor of ecology, evolution and organismal biology and lead author of the study, said the growing resistance of weeds to the common herbicide glyphosate likely contributed to increased tillage. Genetically engineered herbicide-tolerant crops hit the agricultural scene in the late 1990s, and their adoption freed farmers from some of their reliance on tillage as a method of weed control. But growing numbers of weed species with resistance to the herbicide have emerged over the decades, reducing the effectiveness of the herbicide and making tillage a more attractive weed-control option once again. And as tillage intensity grows, more of the carbon and nitrogen stored in the soil is released into the atmosphere in the form of greenhouse gases, Lu said.

"One of the interesting pieces that we found in this study is tillage intensity has shifted from a declining trend to an increasing trend since 2008," Lu said. "Our regression analysis suggests this trend is correlated to the wide adoption of herbicide-tolerant crops before 2008 and emerging weed resistance after 2008. We can't assert a strict causal relationship, but regression analysis reveals a strong relationship between them."

The survey asked questions about farmers' decisions on seed varieties and cultivation practice intensity. Survey topics included no-till, conservation tillage (e.g., ridge till, mulch till) and conventional tillage (e.g., moldboard plow, chisel plow, disk harrow). The data show no-till grew by roughly 12 million acres for corn production and nearly 17 million acres for soybeans between 1998 and 2008. But between 2009 and 2016, no-till declined by nearly half a million corn acres and nearly 6 million soybean acres, according to the survey. Corn acreage under conservation tillage and soybean acreage under conservation and conventional tillage showed similar trends, first declining between 1998 and 2008 before climbing back to previous levels by 2016.

Feeding the data into the land ecosystem models shows that gains in tillage intensity since 2009 have offset the greenhouse gas mitigation benefits achieved during the tillage declines from 1998 to 2008. Lu said the study uncovers a relationship between weed resistance, seed technology and greenhouse gas emissions that could lead to a better understanding of how farm practices can mitigate climate change. Her team's previous research showed that nitrous oxide emissions from farmland in the U.S.
Corn Belt have increased in recent years, largely due to the widespread application of nitrogen fertilizers to agricultural land. The added nitrogen is partially used by crops, but the remainder either stays in soils or is lost to the environment. During this process, microorganisms living in the soil consume nitrogen-containing compounds and give off nitrous oxide as a byproduct. Meanwhile, soil organic matter decomposes and partially converts into carbon dioxide. Both are powerful greenhouse gases with the potential to warm the climate. Intensive tillage practices disturb the soil, alter soil moisture and aeration status, and stir heavy crop residue into the soil, which together change the production rates of soil greenhouse gases and allow more of them to escape, Lu said.

Lu pointed to the use of alternative herbicides to combat glyphosate-resistant weeds, or using glyphosate in fewer consecutive years, as well as the diversification of crops beyond corn and soybeans, as options to control weeds without increasing greenhouse gas emissions.

"Without an effective strategy to control weeds, tillage intensity could continue to grow in the future and could undermine greenhouse gas mitigation achievements from other agricultural activities," Lu said.

Abstract

Tillage is a common agricultural practice that helps prepare the soil and remove weeds. However, it remains unknown how tillage intensity has evolved and what its effect is on net greenhouse gas (GHG) emissions. Here, using a process-based modelling approach with a multi-source database, we examined the change in tillage intensity across the US corn–soybean cropping systems during 1998–2016 and the impact of tillage intensity on soil GHG emissions. We found that tillage intensity first decreased and then, after 2008, increased, a trend that is strongly correlated with the adoption of herbicide-tolerant crops and emerging weed resistance. The GHG mitigation benefit (−5.5 ± 4.8 TgCO₂e yr⁻¹) of decreasing tillage intensity before 2008 has been more than offset by increased GHG emissions (13.8 ± 5.6 TgCO₂e yr⁻¹) due to tillage reintensification under the growing pressure of weed resistance. As weed resistance persists or grows, tillage intensity is anticipated to continue rising, probably increasing GHG emissions. Our results imply that farmers' choices in managing herbicide resistance may help mitigate agricultural GHG emissions, underscoring the importance of an alternative strategy to control weeds.
Main

Emissions of greenhouse gases (GHGs), such as carbon dioxide (CO₂), methane (CH₄) and nitrous oxide (N₂O), from agriculture (cultivation of crops and livestock) and deforestation account for about a quarter of global total GHG emissions 1. In the United States, agriculture contributed ∼10% of total GHG emissions in 2018, a share that has increased by 10% since 1990, a substantial rise compared with the 3.7% increase in national total GHG emissions over the same period 2. The agriculture sector offers a notable GHG mitigation potential 3, but realizing it requires a deep understanding of the sector's GHG flux dynamics and their key environmental drivers, including human management practices 4.

Tillage is an important cropping practice that helps prepare the soil and remove weeds. Although various definitions of tillage types exist in the literature, for our purposes tillage practices can be grouped into three types, namely conventional tillage, conservation tillage and no-till, which differ in their degrees of soil disturbance and residue retention. Conventional tillage leaves less than 15% residue on the soil surface, while conservation tillage leaves at least 30% residue and no-till keeps the soil covered 100% of the time 5, 6. The various tillage practices have different impacts on the physical, hydrological and biogeochemical processes in the soil. For example, conventional tillage practices (such as disc ploughing) not only promote soil organic carbon oxidation and decomposition but also accelerate soil erosion by increasing soil exposure to wind and rain 7. On the other hand, no-till and conservation tillage (such as strip-till and mulch-till) have been widely adopted by farmers to conserve soil and water 8. However, the no-till system contributes less than is often assumed to agricultural sustainability, because it may retard springtime soil warming, increase weed, pest and disease pressures, and lead to crop yield loss 9, 10, 11, 12.

There are many reasons why tillage intensity has mostly declined on US cropped acres in the past decades. Reduced tillage has been widely adopted to suppress soil erosion, preserve moisture and reduce crop production costs for fuel, labour and machinery 8, 13. The advent of herbicide-tolerant (HT) crops, commencing in the late 1990s, made it possible to spray herbicide over the growing crops, further reducing reliance on tillage 14. But the benefit of HT crop adoption in reducing tillage might not be sustainable in the long run, as weed resistance has emerged to the main chemical used, glyphosate 15. Evidence to date suggests that a partial reversion to conventional tillage has resulted 16, 17. For example, a recent study 17 reveals that the shares of conservation tillage and no-till in soybean fields declined by 3.9% and 7.6%, respectively, once eight glyphosate-resistant weed species had been identified, despite little initial effect on tillage practices upon the first emergence of weed resistance. However, the consequences of this changing tillage intensity for soil GHG fluxes remain unclear.

In the United States, a wide variety of studies have been conducted to quantify the GHG mitigation potential of the agriculture sector 18, 19, 20. More recent efforts have involved seeking policy and market solutions that promote additional mitigation practices 21, 22, 23, 24.
Nonetheless, most existing tillage-related assessment and prediction activities either lack data to characterize the spatiotemporal patterns of tillage practices and their intensity changes or focus on the resultant fluxes of single GHGs. This limits the explicit characterization of system responses and hinders the identification and adoption of sustainable management practices. Although the US Geological Survey developed tillage intensity maps for 1989–2004 by aggregating county-level surveys into eight-digit hydrologic unit watersheds 25, little is known about how tillage practices in the United States have changed in more recent years, especially given increasing concerns about herbicide-resistant weeds 13, 16, 26. In addition, there is still limited understanding of how tillage decisions are driven by environmental stressors such as herbicides and herbicide-resistant weeds, and of how these factors together have affected GHG mitigation outcomes during recent decades. There is substantial evidence that more intensive tillage is a coping strategy for many farmers faced with herbicide-resistant weeds, and this has raised concerns about negative environmental impacts 16, 17.

Here we use a process-based land ecosystem model, a long-term farmers' survey and time-series gridded data of environmental changes to examine the relationships between genetically engineered HT crop adoption, the emergence of weed resistance to herbicide and farmers' tillage decisions, and how historical tillage practices altered net GHG fluxes in agricultural land (Fig. 1). Our study of the United States could provide insightful information for other agricultural regions of the world that are affected by growing weed pressure, herbicide resistance, intensifying tillage and diminished GHG mitigation potential.

Fig. 1: Conceptual depiction of the hypothetical GHG fluxes in response to tillage intensity changes affected by HT crop adoption and the emergence of herbicide-resistant weeds. Arrow thicknesses and box sizes indicate the intensity of tillage practice, herbicide use and seed varieties in controlling weeds. The conditions in phases I and II represent the tillage intensity shift before and after 2008 in the United States, respectively, and the tillage-affected GHG fluxes examined in this study. The arrow direction of the y axis indicates higher tillage intensity or more GHG emissions. (Background photograph © USDA.)

A key data source for our study is a commercial survey of farmer choices regarding corn and soybean seed varieties, pesticide choices (including herbicide, insecticide and fungicide) and the intensity of tillage practices 17. These data allow us to develop time-series gridded maps that characterize the location and intensity of tillage practices in the US corn–soybean cropping systems during 1998–2016. We explored the reasons likely to have shaped the changes in tillage intensity across the country by examining the relationships between the state-level adoption rate of genetically engineered HT crops, the number of herbicide-resistant weed species, and corn–soybean acreage under different tillage practices. Furthermore, we used the annual tillage intensity maps to drive a land ecosystem model, the Dynamic Land Ecosystem Model (DLEM), to distinguish and quantify how historical tillage practice changes in the US corn–soybean cropping system have affected net fluxes of CO₂ and N₂O.
Results

Tillage intensity change and potential contributing factors

We adopted a unitless index to represent national and state-level tillage intensity by standardizing the acreage ratio of intensive to less-intensive tillage practices (for example, the acreage ratio of conventional to conservation tillage, conservation to no-till, and conventional to no-till) into a value between 0% and 100% (Methods; a standardization sketch is given after the Fig. 2 caption below). Our analysis indicates that tillage intensity in the US corn–soybean cropping systems declined substantially during 1998–2008 but shifted to an increasing trend after 2008 (Fig. 2c,d, blue line). The farmers' survey data indicate that corn and soybean acreage under no-till increased by 5.2 Mha and 6.8 Mha, respectively, during 1998–2008. However, this increase was followed by a decline in no-till cropland acreage, comprising 0.2 Mha for corn and 2.4 Mha for soybean, in the period 2009–2016 (Fig. 2a,b). No-till was used on 20–33% (minimum–maximum) of national corn acreage, but had a much larger share (34–55%) of soybean-planted lands over the study period (Supplementary Fig. 1). This supports the statement in Livingston et al. 27 that soybean production is more reliant on herbicide, and less on tillage, than is corn production. However, the no-till shares estimated in our study only reflect the annual percentage of corn- and soybean-planted areas under no-till according to the surveyed farm operators, without separating continuous from intermittent no-till. Using the same definition of no-till (defined as the absence of any tillage operation from the harvest of the previous crop to the harvest of the current crop), we found that the no-till acreage shares reported by our data were close to the three most recent field-level, crop-specific production practice surveys conducted by the US Department of Agriculture (USDA) Agricultural Resource Management Survey (ARMS); that is, 23–27% of total corn acreage was under no-till in 2005, 2010 and 2016, and 35–45% of total soybean acreage in 2002, 2006 and 2012 6.

Fig. 2: Annual changes of crop acreage under each tillage practice since 1998 and possible factors regulating the tillage intensity shift. a–d, Annual acreage changes of corn (a) and soybean (b) planting area in the lower 48 states of the United States; and the relationships between the tillage intensity index (tillage index), the accumulated species number (AccSpecies) of weeds resistant to herbicides, and the planting percentage of HT crop varieties (HT%) for corn (c) and soybean (d). Dotted and solid lines in c and d indicate the periods before and after 2008, respectively. Correlation coefficients indicate the relationship between the two lines specified.
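The exact standardization is given in the paper's Methods; as a hedged sketch, min-max scaling of an acreage ratio over the study years is one simple way to produce the 0–100% index described above. The acreages below are illustrative placeholders, not the survey data.

    import numpy as np

    def tillage_index(intensive_acres, less_intensive_acres):
        """Min-max standardization of the acreage ratio of a more-intensive
        to a less-intensive tillage practice, scaled to 0-100%."""
        ratio = intensive_acres / less_intensive_acres
        return 100.0 * (ratio - ratio.min()) / (ratio.max() - ratio.min())

    conv = np.array([12.0, 11.0, 10.5, 10.8, 11.6])    # Mha, conventional tillage
    notill = np.array([8.0, 9.2, 10.1, 9.8, 9.0])      # Mha, no-till
    print(tillage_index(conv, notill))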
Corn acreage changes under conventional tillage fluctuated from year to year around the zero line (no change), whereas the area receiving conservation tillage first declined and then increased over 1998–2016. Soybean acreage under conservation and conventional tillage changed with a similar pattern, first declining until 2007 and then increasing, with the 2016 level close to or above the initial level of 1998. The increases in no-till crop acreage before 2008 predominantly occurred in the Mississippi River basin, in the Corn Belt and the lower Mississippi alluvial valley in particular, and in small areas along the US east coast (Supplementary Fig. 2). Accordingly, the acreage under conservation and conventional tillage decreased in these areas. However, the post-2008 declines in no-till area were mainly found in the southern United States and the southern part of the Midwest, while intensifying tillage centred on the central and northern Midwest, where large amounts of fertilizer have been applied to boost crop growth 28. The spatial heterogeneity of tillage practice changes explains the faster GHG responses in phase II shown in the conceptual diagram (Fig. 1) and in our model estimations. Among states, the central (for example, Illinois, Indiana and Iowa) and western Corn Belt states (for example, Nebraska, North Dakota and Minnesota) showed large variations in tillage practices on corn and soybean acreage over the years (Supplementary Figs. 3 and 4).

We examined the relationships among tillage intensity, crop seed varieties and weed resistance since 1998. Our analysis demonstrated that the early-stage (1998–2008) reduction in tillage intensity was strongly correlated with the increases in the national adoption rate of genetically engineered HT corn and soybean varieties. The latter included seeds with HT genes only, as well as stacked genes (that is, with both HT and insect-tolerant traits). Nationally, the adoption rate of HT crops increased substantially from the beginning of the study period, reaching a level above 90%, or close to 100% in some states, during the 2000s (Supplementary Figs. 5 and 6). The same survey data reveal that the national average share of HT varieties in all planted corn grew from 11% in 1998 to ∼90% after 2008, while HT soybean varieties increased from 61% in 1998 to ∼95% since 2004. The percentage of HT crops in soybean production started increasing earlier than in corn production and also peaked a few years earlier. The share of HT varieties in both crops levelled off in the early-to-mid-2000s and thus had no significant relationship (P > 0.1) with the post-2008 increases in tillage intensity. Nonetheless, we find that the increasing number of herbicide-resistant weed species was closely correlated with the rising tillage intensity after 2008 (with correlation coefficients of 0.81 for corn and 0.87 for soybean, P < 0.01) (Fig. 2c,d). Likewise, weed resistance to herbicide was more prevalent in US soybean production than in corn production, with cumulative species numbers of up to 220 and 166, respectively, in 2016. This may be caused by greater herbicide use and less reliance on tillage in soybean production than in corn production. Similar results were reported by Livingston et al. 27, in which 5.6% of corn acres in 2010 and 40% of soybean acres in 2012 were identified as infested by glyphosate-resistant weeds or as showing declines in glyphosate effectiveness. Our results demonstrate the historical role of HT crop adoption and growing weed pressure in changing tillage intensity across the United States. They also imply that, in the future, farmers are likely to adopt tillage on more cropped acres to control weeds, given that tillage does not promote herbicide resistance.

Tillage impacts on GHG fluxes

We set up a series of simulation experiments using DLEM to distinguish and quantify the impacts of historical tillage practices and of tillage intensity change (TIC) on soil GHG emissions. The former is estimated as the difference between the experiment driven by historical tillage practices and the no-till scenario, while the latter reflects the differences between experiments under varied versus fixed tillage practices (Methods; a bookkeeping sketch follows).
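A minimal sketch of the factorial attribution just described: the tillage impact is the difference between the historical-tillage run and an all-no-till counterfactual, while the TIC effect is the difference between runs with time-varying and benchmark-year tillage maps. The arrays are illustrative stand-ins for DLEM output, in TgCO2e per year.

    import numpy as np

    ghg_hist = np.array([70.2, 65.9, 61.3])    # run driven by historical tillage
    ghg_notill = np.array([8.1, 7.6, 7.9])     # counterfactual: no-till everywhere
    ghg_fixed = np.array([68.0, 66.5, 66.1])   # tillage fixed at the benchmark year

    tillage_impact = ghg_hist - ghg_notill     # impact of historical tillage practices
    tic_effect = ghg_hist - ghg_fixed          # impact of tillage intensity change
    print(tillage_impact, tic_effect)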
Model simulation results show that historical tillage practices during 1998–2016 resulted in net GHG emissions at a rate of 64.3 ± 20.0 TgCO₂e yr⁻¹ (mean ± SD, with the SD indicating the interannual variability of the GHG estimates). This is close to the direct N₂O emissions from national synthetic fertilizer use (63.6 TgCO₂e yr⁻¹ in 2016, as reported by the Environmental Protection Agency 29), but with larger interannual variation (SD of 31% of the mean). Nearly half of the tillage impact was attributed to tillage-induced CO₂ emissions from soils, and the rest to direct soil N₂O emissions. In the context of multifactor environmental changes, tillage impacts were estimated to range from 32.9 TgCO₂e yr⁻¹ in 2009 to 102.8 TgCO₂e yr⁻¹ in 2012 (Fig. 3). The highest tillage-induced GHG emissions, found in 2012, were likely caused by crop mortality during the summer drought, which limited crop nitrogen demand and provided more substrate for decomposition and denitrification 30. The maximum–minimum difference in GHG emissions resulting from tillage practices in the US corn–soybean system was equivalent to 5–23% of the global GHG mitigation potential of crop management (0.3–1.5 PgCO₂e yr⁻¹ (ref. 31)). This difference suggests a sizeable GHG mitigation potential in tillage management, if tillage decisions can be made with consideration of the crop nitrogen demand/supply balance and the impacts of climate extremes.

Fig. 3: Model-estimated impacts of tillage practices on GHG emissions in the US corn–soybean cropping system during 1998–2016. Error bars denote modelled uncertainty, that is, the standard deviation calculated from multiple model runs with various values of key parameter sets (details of the uncertainty estimation and simulation experiments can be found in the Supplementary Information). Note that the lowest tillage-induced GHG emission does not occur in 2008, when national tillage intensity was lowest, because the model estimates also include the complex interactions between tillage practices and other environmental changes within managed croplands, as well as the legacy effects of residue removal.

Our estimation shows that tillage-induced GHG emissions declined at an annual rate of 4.6 TgCO₂e yr⁻¹ during 1998–2008 and then increased, more gradually, by 2.7 TgCO₂e yr⁻¹ during 2009–2016 (Fig. 3; a trend-fit sketch is given below). Regardless of the direction of change, these trends were similar to or larger than the reported trend in annual GHG emissions from the US agriculture sector during 1990–2016 (2.3 TgCO₂e yr⁻¹, including CO₂, CH₄ and N₂O emissions from agricultural soil management, rice cultivation, livestock and manure management, liming and field burning of agricultural residues 29).

Impacts of tillage intensity change on net GHG fluxes

The tillage impact on GHG emissions first declined during 1998–2008 and then increased, a pattern predominantly determined by tillage intensity change across the United States. The GHG mitigation rate from tillage reduction alone in the period 1998–2008 (−5.5 ± 4.8 TgCO₂e yr⁻¹; Fig. 4a) could have offset the annual GHG emission increase from the entire US agriculture sector 29 over the same period, but this mitigation benefit disappeared after 2008, and the tillage impact shifted to accelerating GHG emissions.
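The piecewise trends quoted above can be sketched as two least-squares fits over the sub-periods. The emission series below is an illustrative stand-in constructed to echo the reported slopes; it is not the model output.

    import numpy as np

    years = np.arange(1998, 2017)
    ghg = np.where(years <= 2008,
                   85.0 - 4.6 * (years - 1998),    # declining phase
                   39.0 + 2.7 * (years - 2008))    # re-intensifying phase

    early = years <= 2008
    print(np.polyfit(years[early], ghg[early], 1)[0])    # ~ -4.6 TgCO2e/yr
    print(np.polyfit(years[~early], ghg[~early], 1)[0])  # ~ +2.7 TgCO2e/yr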
We estimated that the tillage intensity increase during 2009–2016 resulted in a net GHG source of 13.8 ± 5.6 TgCO₂e yr⁻¹, more than double the GHG mitigation rate achieved by reduced tillage intensity in the preceding decade.

Fig. 4: Tillage intensity change-induced soil GHG emissions. a,b, Annual average (a) and accumulated (b) CO₂ and N₂O fluxes (in CO₂e) resulting from tillage intensity change relative to the year 1998 (for the period 1998–2008) and the year 2008 (for the period 2009–2016). We selected 2008 as the benchmark year for the post-2008 assessment because national tillage intensity started to increase in that year. The cumulative GHG fluxes in b reflect how soon the GHG mitigation during 1998–2008 was offset by the increased GHG emissions after 2008. The shaded area in a represents the 95% confidence interval (CI), calculated from multiple simulation experiments with prescribed parameter values.

We find that the declining tillage intensity cumulatively reduced GHG emissions by 61.0 TgCO₂e during 1998–2008, while intensifying tillage caused increased GHG emissions of 110.0 TgCO₂e in the post-2008 period (Fig. 4b; see the bookkeeping sketch below). The cumulative impact of tillage practice change shifted from a net GHG sink to a net source in 2013. Over the past approximately two decades (1998–2016), the cumulative GHG emissions due to tillage intensity change in the US corn–soybean cropping system were estimated at 49.1 TgCO₂e. This change was equivalent to 1.2 times the net GHG emission increase from the whole US agriculture sector over the same period (an annual increase of 2.18 TgCO₂e yr⁻¹ for 19 yr, or 41.4 TgCO₂e in total 29). The model estimates for the US corn–soybean system reveal that the tillage intensity changes of the past two decades are large enough to shape the dynamics of national agricultural soil GHG fluxes. Our work implies that the benefit of HT crop adoption in reducing tillage has reached its peak, while emerging weed resistance is found to be contributing to intensifying tillage practices. As weed resistance persists and grows, tillage intensity is anticipated to continue rising, which would further increase GHG emissions and contribute to global warming.

Spatial patterns of tillage impacts

Our model estimations indicate a substantial impact of historical tillage adoption on GHG fluxes, equivalent to the role of agricultural fertilizer input. Nevertheless, there are large interannual variations due to tillage intensity changes and their interactions with other factors such as climate variability and crop resource-use efficiencies. Spatially, the highest tillage-induced GHG emissions in 1998 were centred on the Corn Belt, in particular the Prairie Pothole Region, including northern Iowa and southwestern Minnesota (Fig. 5). Owing to the decline in tillage intensity, GHG emissions in 2008 were reduced, consistent with the spatial shift of tillage intensity across the region (Supplementary Figs. 2 and 9). However, GHG emissions due to tillage rebounded in 2016, resulting in a pattern similar to that of 1998 but with wider source areas (Fig. 5). The reduced GHG emissions under tillage practices (shown by shades of green in Fig. 5) reflect a greater weight of the role of tillage in reducing denitrification rates and residue retention in these areas.
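Returning to the cumulative budget of Fig. 4b, the offset timing can be checked with simple bookkeeping, using the period-average fluxes quoted above as stand-ins for the year-by-year model output.

    import numpy as np

    years = np.arange(1998, 2017)
    flux = np.array([-5.5] * 11 + [13.8] * 8)   # TgCO2e/yr: mitigation, then source
    cum = np.cumsum(flux)
    print(cum[10], cum[-1])       # ~ -61 TgCO2e by 2008; ~ +50 by 2016 (reported: 49.1)
    print(years[np.argmax(cum > 0)])            # running total turns positive in 2013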
Fig. 5: Spatial patterns of the model-estimated GHG fluxes due to the historical use of tillage across the US corn–soybean cropping system. a–c, The impacts of the historical use of tillage on GHG fluxes for the years 1998 (a), 2008 (b) and 2016 (c).

In terms of the impacts of tillage intensity changes, we find that the spatial distribution and magnitude of the accumulated CO₂ emissions are similar to those of the N₂O fluxes, both before and after 2008 (Fig. 6). However, the spatial coverage of tillage-affected areas differs between the two periods. The considerable spatial heterogeneity in the GHG flux responses (that is, a mixture of negative and positive values) is primarily caused by variations in local climate, soil properties and cropping systems combined with tillage intensity changes across the country (Fig. 6 and Supplementary Fig. 2). The areas with increased GHG emissions due to intensifying tillage after 2008 covered the entire Corn Belt and the lower Mississippi alluvial valley. In particular, the western Corn Belt, including the Dakotas and part of Minnesota, where corn and soybean cropping systems expanded in the most recent decade 32, 33, 34, stood out with high emission rates. These source areas are larger than the pre-2008 GHG mitigation areas that resulted from reduced tillage, which were concentrated in the central Corn Belt, the lower Mississippi alluvial valley and the US east coast.

Fig. 6: Accumulated GHG fluxes (gCO₂e m⁻²) due to tillage intensity changes across the US corn–soybean cropping system. a–f, Soil fluxes of CO₂ (a,b), N₂O (c,d) and their sum (e,f) for 1998–2008 (accumulated over 11 yr; a,c,e) and 2009–2016 (accumulated over 8 yr; b,d,f).

Discussion

Uncertainties with fall tillage practice considered

Most tillage is implemented in spring. It should be noted that autumn tillage may be adopted before, and in addition to, spring tillage on some farmland, but we do not know exactly where and when double tillage was implemented across the United States. Considering both spring and fall tillage, our model predicted that historical tillage practices could lead to net GHG emissions of 92.4 ± 22.0 TgCO₂e yr⁻¹ (mean ± SD) during the study period. This prediction represents an upper bound on the estimated tillage impacts, approximately 44% higher than the estimate considering only spring tillage (the arithmetic is reproduced below). Assuming that all corn and soybean fields under tillage were tilled twice a year (in both spring and autumn), we estimate that the increased tillage intensity in the period 2009–2016 raised GHG emissions by 20.5 ± 7.2 TgCO₂e yr⁻¹, while the reduced tillage intensity before 2008 lowered GHG emissions by 7.0 ± 6.6 TgCO₂e yr⁻¹. As a result, the corresponding changes in accumulated GHG flux induced by tillage intensity changes over 1998–2016 were approximately 78% higher than the estimates driven by spring tillage only (that is, 87.5 TgCO₂e under tillage twice a year versus 49.1 TgCO₂e under spring tillage only). Despite the uncertainty caused by the lack of detailed tillage frequency data, our estimates provide lower and upper bounds on tillage practice impacts, which strengthens our conclusion that there is substantial GHG mitigation potential in managing tillage practices.
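The bound arithmetic quoted above reduces to two percentage calculations, reproduced here for transparency; the inputs are the paper's own reported values.

    spring_only = 64.3        # TgCO2e/yr, lower bound (spring tillage only)
    spring_and_fall = 92.4    # TgCO2e/yr, upper bound (tilled twice a year)
    print(round(100 * (spring_and_fall - spring_only) / spring_only))   # ~44% higher

    cum_spring = 49.1         # TgCO2e, cumulative TIC-induced flux, 1998-2016
    cum_double = 87.5         # TgCO2e, same under the twice-a-year scenario
    print(round(100 * (cum_double - cum_spring) / cum_spring))          # ~78% higher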
Due to limited information, we assume that no-till has the highest likelihood of being adopted on the highly erodible lands in each CRD. Our data show that continuous no-till areas account for 10% of total corn and soybean acreage during 1998–2016, with an additional 5% of corn and soybean acreage tilled only once since 1998. These numbers are lower than, but comparable with, the values reported in a four-year survey conducted by USDA ARMS, which shows 18% of corn-planted area under continuous no-till/strip-till in 2016 and the three years preceding the survey year, and 21% for soybean in 2012 and the three years before that (ref. 6). The reasons for the lower proportion of continuous no-till in our spatial tillage maps include: (1) our data cover a longer time period; and (2) no-till in our data is defined as the absence of any tillage practice, with strip-till excluded. The long-term survey data we rely on to develop tillage maps in this study have multi-year survey records for some farms (for example, the area percentage under no-till, conservation and conventional tillage in each year), but plot-level arrangement may change over time and remains difficult to track. Even though our data have a lower share of continuous no-till than reported by the four-year survey, we may still overestimate the no-till share over an approximately two-decade time frame, considering the spatial tillage arrangement within a farm. Our assumption of always assigning no-till to highly erodible lands first may have slightly underestimated the historical tillage-induced GHG emissions across the US corn–soybean cropping system, while overestimating the GHG mitigation due to tillage intensity change during 1998–2008 and underestimating the re-intensified tillage-induced GHG emissions during 2009–2016. The uncertainty in the estimated GHG consequences of tillage practices indicates the importance of implementing a long-term survey and identifying the locations and duration of continuous no-till practices across the United States.

Contributions of tillage-related machinery use

Tillage intensity changes can affect GHG emissions beyond the soil, among which CO2 emission from agricultural machinery is a source that should be considered. Due to the lack of data, we used two extreme-case scenarios to estimate the machinery CO2 emissions associated with tillage practice changes. We assume that: (1) machinery change from decreasing tillage intensity was entirely attributable to the shift from conventional tillage to no-till, and vice versa for increasing tillage intensity; and (2) increasing no-till areas were all converted from conventional tillage, and shrinking no-till areas were all converted back to conventional tillage. The tillage conversion area was counted repeatedly every year after the conversion occurred to estimate the fuel-derived CO2 emissions. Based on the above assumptions, no-till was implemented on an additional 55.6 Mha of corn and soybean acreage, cumulatively, during 1998–2008, while the reduced no-till acreage summed to 29 Mha during 2009–2016. Using machinery emission data obtained from Adler et al. (ref. 35) (that is, 17.01 kgC ha−1 from ploughing versus 0 from no-till), these assumptions yield avoided GHG emissions of 0.315 TgCO2e yr−1 for the period 1998–2008 and an additional GHG source of 0.226 TgCO2e yr−1 for the period 2009–2016 (a numerical cross-check of these two figures follows below).
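As flagged above, the two annualized machinery figures can be reproduced directly from the quantities quoted in the text. The short Python sketch below is illustrative only (it is not the authors' ENVI/IDL workflow); it uses the cumulative converted areas, the Adler et al. emission factor and the standard 44/12 carbon-to-CO2 mass conversion.

```python
# Cross-check of the machinery-related CO2 estimates quoted above.
# Emission factor: 17.01 kgC per ha for ploughing (Adler et al.); no-till emits 0.
KGC_PER_HA = 17.01

def annual_machinery_co2e(cumulative_area_mha, n_years):
    """Annualized machinery emissions (TgCO2e yr^-1) for a cumulative area in Mha."""
    kg_c = cumulative_area_mha * 1e6 * KGC_PER_HA  # Mha -> ha, then kg of fuel carbon
    tg_co2 = kg_c * (44.0 / 12.0) * 1e-9           # carbon -> CO2 mass, kg -> Tg
    return tg_co2 / n_years

print(annual_machinery_co2e(55.6, 11))  # ~0.315 TgCO2e yr^-1 avoided, 1998-2008
print(annual_machinery_co2e(29.0, 8))   # ~0.226 TgCO2e yr^-1 added, 2009-2016
```

Both printed values agree with the 0.315 and 0.226 TgCO2e yr−1 reported above.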
Both scenarios show a minor contribution from the changed machinery emissions when compared with the soil GHG emissions driven by tillage intensity changes.

Outlook

This study demonstrated a shift in national tillage intensity for the corn–soybean system during 1998–2016, and examined the role of tillage practices and their intensity changes in shaping GHG emissions from US corn–soybean-planted soils. The findings suggest that the GHG mitigation benefit gained from the tillage intensity reduction during 1998–2008 has been offset by tillage practice reintensification since 2008. Without an effective strategy to control weeds, tillage intensity is expected to continue growing and so undermine the GHG mitigation achievements from other activities or other sectors. On the other hand, this study implies that farmers' choices in managing herbicide resistance, such as not applying glyphosate during consecutive growing seasons, using glyphosate in fewer years and combining it with other herbicides (ref. 14), may help mitigate agricultural GHG emissions. Although we have assessed the estimation uncertainties caused by model assumptions, limited data about tillage practices and key parameter values, there remain knowledge gaps that hinder accurate prediction of the consequences of tillage intensity changes. For example, it remains uncertain how sensitive crop growth and the coupled terrestrial carbon–nutrient–water cycling are to different tillage practices under a combination of environmental stressors, including climate and the emergence of herbicide-resistant weeds (refs. 30,36). In addition, the environmental impacts of tillage differ between simplified and diversified cropping systems (refs. 37,38). Research has also been very limited on how land conversion, crop rotation and tillage practices interact in affecting the agricultural GHG balance and climate mitigation. This increases uncertainty in accounting for carbon debt (refs. 39,40), soil carbon storage change (refs. 41,42) and the GHG balance indicated in this work. We therefore suggest that further research is needed to examine the previously overlooked patterns and drivers responsible for rotation- and crop-specific tillage intensity change through long-term experiments and modelling, and to improve our understanding of the responses and feedbacks between agroecosystem management and the climate system.

Methods

Data information

The database we used to characterize corn and soybean tillage practice intensity and HT crop adoption rates came mainly from a nationwide survey of farmer choices at the field (that is, plot) level of analysis. The data were purchased from Kynetec, the most prominent commercial surveying company in US agriculture. Coverage was purchased for the 1998–2016 crop years. Farmers were queried about cultivation practice intensity, including no-till, conservation tillage (for example, ridge till, mulch till) and conventional tillage (for example, mouldboard plough, chisel plough, disk harrow), in each USDA CRD. Although location is available at the county level, the data collection procedures are intended to establish representativeness at the USDA CRD level of disaggregation. A CRD comprises about nine contiguous counties that are similar in cropping conditions. For each of the corn and soybean crops and for each year over the period, about 4,000–4,500 independent farm operators were paid to complete the survey.
Respondents were identified by various means, including through federal information regarding participation in government programmes. Data used in this manuscript were collected primarily through telephone calls, involving multiple attempts to ensure high participation rates and thus representativeness. The company implements rigorous protocols to ensure that interviewers and interview supervisors are trained to apply consistent, standardized procedures for data collection, quality screening and subsequent data transcription/processing activities. The 1998–2011 data have been used elsewhere to study how glyphosate-tolerant soybean seed has influenced tillage practices (ref. 14), how genetically engineered crops have affected pesticide use (ref. 43) and confusion in herbicide choices (ref. 44). The soybean data have recently been used to examine the relation between the spread of glyphosate-resistant weeds and the reduction in conservation tillage in soybean production (ref. 17). As far as we know, no alternative data source on actual annual tillage choices in the United States exists that covers the period 2005–2016, a critical time in light of seed technology innovations. From this database we obtained annual information on seed varieties, including state-level adoption of HT crops, and the annual percentage of tillage types at the CRD level. Weed species data were obtained from .

To capture the dynamics of acreage changes under different tillage practices, we use a unitless indicator to represent national and state-level tillage intensity (TI) based on the area shares of no-till, conservation and conventional tillage. First, we identified the maximum tillage intensity from the sum of the three ratios during the period 1998–2016 for each state or the entire country. Second, we normalized the weight of acres under intensive tillage practices against those under less-intensive tillage groups. Note that the index was normalized by the maximum tillage intensity in a given area (that is, a state or the entire United States); it therefore depicts the temporal changes of tillage intensity in each specific region and is not comparable between regions.

$$\mathrm{TI}_{\max} = \max\left(\frac{\mathrm{A\_Till}_{\mathrm{CV},i,j}}{\mathrm{A\_Till}_{\mathrm{CS},i,j}} + \frac{\mathrm{A\_Till}_{\mathrm{CV},i,j}}{\mathrm{A\_Till}_{\mathrm{NT},i,j}} + \frac{\mathrm{A\_Till}_{\mathrm{CS},i,j}}{\mathrm{A\_Till}_{\mathrm{NT},i,j}}\right)$$

$$\mathrm{TI}_{i,j} = \frac{\mathrm{A\_Till}_{\mathrm{CV},i,j}/\mathrm{A\_Till}_{\mathrm{CS},i,j} + \mathrm{A\_Till}_{\mathrm{CV},i,j}/\mathrm{A\_Till}_{\mathrm{NT},i,j} + \mathrm{A\_Till}_{\mathrm{CS},i,j}/\mathrm{A\_Till}_{\mathrm{NT},i,j}}{\mathrm{TI}_{\max}} \times 100\%$$

where A_Till_CV, A_Till_CS and A_Till_NT stand for corn or soybean acreage under conventional tillage, conservation tillage and no-till, respectively; subscript i denotes corn or soybean, and subscript j denotes year. We harmonized 1 km time-series gridded cropland distribution and type maps for the contiguous United States (refs. 45,46) with the CRD-level percentages of corn/soybean acreage under the three tillage types to spatialize the annual tillage-specific area data (ref. 47).
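To make the TI index defined above concrete, the following sketch computes it for a short hypothetical series of acreage values; only the ratio-sum and max-normalization logic mirrors the equations, and the numbers are placeholders rather than survey values.

```python
import numpy as np

def ratio_sum(a_cv, a_cs, a_nt):
    """Sum of the three pairwise acreage ratios: CV/CS + CV/NT + CS/NT."""
    return a_cv / a_cs + a_cv / a_nt + a_cs / a_nt

def tillage_intensity(acres):
    """acres: array of shape (n_years, 3), columns = [conventional, conservation, no-till].
    Returns the TI index (%) per year, normalized by the maximum over all years."""
    sums = np.array([ratio_sum(cv, cs, nt) for cv, cs, nt in acres])
    return sums / sums.max() * 100.0

# Hypothetical acreage (Mha) under conventional, conservation and no-till
acres = np.array([[12.0, 10.0, 6.0],   # high-tillage year
                  [ 9.0, 10.0, 9.5],   # low-tillage year
                  [11.0, 10.0, 7.0]])  # partial rebound
print(tillage_intensity(acres))        # approx. [100.0, 59.6, 84.2]
```

Because the normalization uses the regional maximum, the index tracks change over time within one region but, as noted above, is not comparable across regions.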
The annual maps of the various tillage practices were used to force the model to assess their impacts on GHG fluxes. More details on tillage intensity change can be found in Supplementary Figs. 1–6.

Modelling approach

We adopted a process-based land ecosystem model, DLEM, to assess the impacts of tillage practices on the net fluxes of CO2 and N2O from agricultural soils in the US corn–soybean cropping system. The DLEM is unique in incorporating multiple environmental drivers, grid-to-grid connectivity through river systems and simultaneous estimation of CO2, CH4 and N2O fluxes (refs. 48,49,50). Its agricultural module has been intensively calibrated and validated in upland and lowland croplands across multiple countries and globally, and has been widely used to quantify the contributions of multifactor environmental changes to ecosystem functions (refs. 33,50,51,52,53,54,55). We validated the DLEM's performance in simulating soil organic carbon (SOC) content and tillage impacts on SOC dynamics across the United States in our previous work (refs. 32,46,47). In this study, we implemented additional model validation by comparing model estimates with measured N2O fluxes under no-till and conventional tillage at a corn-planting site in Tennessee (Supplementary Fig. 7). To distinguish the impacts of tillage practice change from those of other environmental drivers and human activities, we set up a series of simulation experiments by turning tillage practice changes on and off at a few time points (see Supplementary Information, section 3.4, for more details). To characterize other natural environmental changes and human practices, and to force the model, several time-series gridded data sets were developed at the same resolution, spanning 1850 to 2016. In addition to tillage practice, the model input data include daily climate conditions (maximum, minimum and mean temperature, precipitation, short-wave solar radiation and relative humidity), monthly atmospheric nitrogen deposition, air CO2 concentration, annual land use and cover change, and major agricultural management practices (such as crop-specific nitrogen fertilizer use, manure nitrogen application, tile drainage and crop rotation) at a resolution of 5 arcmin × 5 arcmin. More details regarding the input data can be found in Supplementary Information, section 3.3. Our analysis focused on the period 1998–2016, during which consistently collected annual tillage practice data for the corn–soybean cropping system were available. In experiment I, the model was driven by historically varying tillage intensity and the other aforementioned time-series gridded input drivers across the contiguous United States. This experiment provided our 'best estimates' of biogenic GHG fluxes in US corn–soybean cropping systems, which were comparable to observations. We examined the GHG fluxes under tillage practices in the pre- and post-2008 periods because tillage intensity in both corn- and soybean-planted lands was lowest in 2008. Experiments II and III fixed the location and cropland area under conservation and conventional till at the 1998 and 2008 levels, respectively. The differences between these two experiments and experiment I quantify the impacts of tillage intensity change (TIC) on GHG fluxes during the periods 1998–2008 and 2009–2016, respectively. We set up experiment IV to represent a hypothetical case in which no-till was adopted on all the cropland area from 1998 onwards.
The difference between experiments I and IV represents the impact of the historical tillage practice pattern in the corn–soybean system (Supplementary Table 1 and Supplementary Fig. 8). We calculated CO2 fluxes as the year-by-year SOC changes, excluding dissolved organic carbon (DOC) leaching and CH4 fluxes. Because the CO2 assimilated into crop biomass is eventually consumed elsewhere, we counted only CO2 emissions from soils in this study. Likewise, only direct soil N2O emissions were included when estimating the net GHG emissions. Methane (CH4) fluxes were not included in the net GHG balance because their total amount was negligible in the corn/soybean-planted areas. We used the 100 yr global warming potential to convert the fluxes of CO2 and N2O from gram C and gram N into gram CO2e (refs. 1,50):

$$F_{\mathrm{CO_2}}^{i} = (\mathrm{SOC}_{i-1} - \mathrm{SOC}_{i}) - F_{\mathrm{DOC\ leaching}}^{i} - F_{\mathrm{CH_4}}^{i} \qquad (1)$$

$$E_{\mathrm{CO_2}}^{i} = (F_{\mathrm{CO_2}}^{i}/12) \times 44 \qquad (2)$$

$$E_{\mathrm{N_2O}}^{i} = (F_{\mathrm{N_2O}}^{i}/28) \times 44 \times 265 \qquad (3)$$

$$E_{\mathrm{net}}^{i} = E_{\mathrm{CO_2}}^{i} + E_{\mathrm{N_2O}}^{i} \qquad (4)$$

where \(F_{\mathrm{CO_2}}^{i}\) and \(F_{\mathrm{N_2O}}^{i}\) are CO2 and N2O fluxes in TgC yr−1 and TgN yr−1, respectively, and \(E_{\mathrm{CO_2}}^{i}\) and \(E_{\mathrm{N_2O}}^{i}\) are CO2 and N2O emissions in TgCO2e. Negative values represent GHG uptake from the atmosphere, whereas positive values represent GHG emissions from soils. In equation (1), we approximated the CO2 flux as the between-year SOC storage change minus DOC leaching and CH4 emissions (an illustrative numerical implementation of equations (1)–(4) is given at the end of this section). We estimated the annual net fluxes of CO2 and N2O in each simulation experiment, and the impacts of historical tillage practices and tillage intensity change were quantified as the differences between experiments, as described above. For our 'best-estimate' simulations, tillage was implemented in spring when corn or soybean was planted. Generally, previous-year autumn tillage may also be adopted before spring tillage in parts of the study area, but it remains uncertain where this occurs and how often farmers have undertaken more than one tillage practice per year (ref. 56). We therefore designed two types of experiments to quantify the impacts with and without autumn tillage, following the protocol described in our previous study (ref. 47). More specifically, autumn tillage was assumed to have been implemented two weeks after harvest. In this study, we used simulations driven by spring tillage as our 'best estimate', while the experiments with corn–soybean land tilled twice annually (that is, both autumn and spring tillage) represented more intensive soil disturbance scenarios and provided the upper bound on tillage impact estimates.

Reporting Summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

The tillage maps used in this study were developed from a proprietary national survey conducted annually by Kynetec Group. The purchase agreement requires that the data remain confidential. Source data supporting the figures are provided with this paper.

Code availability

The code used to perform the analyses in this study was written in ENVI/IDL and is available upon request.
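The illustrative implementation of equations (1)–(4) promised above follows. The input fluxes are hypothetical; the constants are exactly those in the equations (44/12 and 44/28 molecular-weight ratios, and a 100-yr GWP of 265 for N2O).

```python
GWP100_N2O = 265.0  # 100-yr global warming potential of N2O, as in equation (3)

def co2_flux_from_soc(soc_prev, soc_curr, f_doc_leach, f_ch4):
    """Equation (1): CO2-C flux (TgC) from year-on-year SOC change,
    excluding DOC leaching and CH4 fluxes."""
    return (soc_prev - soc_curr) - f_doc_leach - f_ch4

def net_co2e(f_co2_c, f_n2o_n):
    """Equations (2)-(4): convert TgC and TgN fluxes into net TgCO2e emissions."""
    e_co2 = f_co2_c / 12.0 * 44.0               # C mass -> CO2 mass
    e_n2o = f_n2o_n / 28.0 * 44.0 * GWP100_N2O  # N mass -> N2O mass -> CO2e
    return e_co2 + e_n2o

# Hypothetical year: 1 TgC of soil carbon loss (from equation (1)) and 0.01 TgN of N2O-N
print(net_co2e(1.0, 0.01))  # ~3.67 + ~4.16 = ~7.83 TgCO2e (positive = soil source)
```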
More information: Chaoqun Lu et al, Emerging weed resistance increases tillage intensity and greenhouse gas emissions in the US corn–soybean cropping system, Nature Food (2022). DOI: 10.1038/s43016-022-00488-w

Journal information: Nature Food

Paper: https://dx.doi.org/10.1038/s43016-022-00488-w
News: https://phys.org/news/2022-04-relationships-herbicide-resistant-weeds-tillage-agricultural.html

A recent study published in Nature Food found that the trend of decreasing tillage intensity in US corn and soybean production reversed around 2009, leading to an increase in greenhouse gas emissions from agricultural fields. The study combined survey data and computer modeling to analyze the impact of tillage decisions on soil emissions of carbon dioxide and nitrous oxide. The researchers found that the growing resistance of weeds to glyphosate, a common herbicide, likely contributed to the increase in tillage intensity, as farmers turned to tillage as a method of weed control. The study suggests that the trend is correlated with the adoption of herbicide-tolerant crops before 2008 and emerging weed resistance after 2008. The findings highlight the need for alternative weed control strategies to mitigate the growth of tillage intensity and its associated greenhouse gas emissions, which could undermine efforts to reduce agricultural emissions.
A new study that combines survey data and cutting-edge computer modeling found that a growing trend in tillage intensity in U.S. corn and soybean production in recent years has led to an increase in greenhouse gas emissions from agricultural fields. The study, published recently in the academic journal Nature Food, drew on years of survey data that asked thousands of U.S. farmers about their tillage practices. The researchers then plugged the relevant data into sophisticated ecosystem models to see how tillage decisions affect soil emissions of greenhouse gases, including carbon dioxide and nitrous oxide. The survey data indicate farmers relied less on tillage during the period between 1998 and 2008, but that trend began to reverse around 2009, when tillage intensity started to rise. Chaoqun Lu, an Iowa State University associate professor of ecology, evolution and organismal biology and lead author of the study, said the growing resistance of weeds to the common herbicide glyphosate likely contributed to increased tillage. Genetically engineered herbicide-tolerant crops hit the agricultural scene in the late 1990s, and their adoption freed farmers from some of their reliance on tillage as a method of weed control. But growing numbers of weed species with resistance to the herbicide have emerged over the decades, reducing the effectiveness of the herbicide and making tillage a more attractive weed control option once again. And as tillage intensity grows, more of the carbon and nitrogen stored in the soil is released into the atmosphere in the form of greenhouse gases, Lu said. "One of the interesting pieces that we found in this study is tillage intensity has shifted from a declining trend to an increasing trend since 2008," Lu said. "Our regression analysis suggests this trend is correlated to the wide adoption of herbicide-tolerant crops before 2008 and emerging weed resistance after 2008. We can't assert a strict causal relationship, but regression analysis reveals a strong relationship between them." The survey asked questions about farmers' decisions on seed varieties and cultivation practice intensity. Survey topics included no-till, conservation tillage (e.g., ridge till, mulch till), and conventional tillage (e.g., moldboard plow, chisel plow, disk harrow). The data show no-till grew by roughly 12 million acres for corn production and nearly 17 million acres for soybeans between 1998 and 2008. But no-till corn acreage declined by nearly half a million acres between 2009 and 2016, and no-till soybean acreage declined by nearly 6 million acres over the same period, according to the survey. Corn acreage under conservation tillage and soybean acreage under conservation and conventional tillage showed similar trends, first declining between 1998 and 2008 before climbing back to previous levels by 2016. Feeding the data into the land ecosystem models shows that gains in tillage intensity since 2009 have offset the greenhouse gas mitigation benefits achieved during the tillage declines from 1998 to 2008. Lu said the study uncovers a relationship between weed resistance, seed technology and greenhouse gas emissions that could lead to a better understanding of how farm practices can mitigate climate change. Her team's previous research showed that nitrous oxide emissions from farmland in the U.S. Corn Belt have increased in recent years, largely due to the widespread application of nitrogen fertilizers to agricultural land.
The added nitrogen is partially used by crops, but the remainder either stays in soils or is lost to the environment. During this process, microorganisms living in soils consume nitrogen-containing compounds and give off nitrous oxide as a byproduct. Meanwhile, soil organic matter decomposes and partially converts into carbon dioxide. Both are powerful greenhouse gases with the potential to warm the climate. Intensive tillage practices disturb the soil, alter soil moisture and aeration status, and stir heavy crop residue into soils, which together change the production rates of soil greenhouse gases and allow more of them to escape, Lu said. Lu pointed to the use of alternative herbicides to combat glyphosate-resistant weeds, the use of glyphosate in fewer consecutive years, and the diversification of crops beyond corn and soybeans as options to control weeds without increasing greenhouse gas emissions. "Without an effective strategy to control weeds, tillage intensity could continue to grow in the future and could undermine greenhouse gas mitigation achievements from other agricultural activities," Lu said.
Cigarette damage to unborn children revealed in stem cell study

Chemicals found in cigarette smoke have been shown to damage foetal liver cells. Scientists say the potent cocktail of chemicals in cigarettes is particularly harmful to developing liver cells and affects male and female foetuses differently. Researchers - led by the University of Edinburgh - have developed a novel way to study the effects of maternal smoking on liver tissue using embryonic stem cells. The stem cell technique will provide important information about the long-term effects of maternal cigarette smoking, say experts. The liver is vital in clearing toxic substances and plays a major role in regulating metabolism. Smoking cigarettes - which contain around 7000 chemicals - can damage foetal organs and may do lasting harm. Scientists used pluripotent stem cells - non-specialised cells that have the distinctive ability to transform into other cell types - to build foetal liver tissue. Liver cells were exposed to harmful chemicals found in cigarettes, including specific substances known to circulate in foetuses when mothers smoke. The study showed that a chemical cocktail - similar to that found in cigarettes - harmed foetal liver health more than individual components. Findings also showed that cigarette chemicals damage the liver differently in male and female foetuses, with male tissue showing liver scarring and female tissue showing more damage to cell metabolism. The study was carried out in collaboration with the Universities of Aberdeen and Glasgow and is published in the journal Archives of Toxicology. Dr David Hay from the University of Edinburgh's Centre for Regenerative Medicine, said: "Cigarette smoke is known to have damaging effects on the foetus, yet we lack appropriate tools to study this in a very detailed way. This new approach means that we now have sources of renewable tissue that will enable us to understand the cellular effect of cigarettes on the unborn foetus." Professor Paul Fowler, Director of the Institute of Medical Sciences at the University of Aberdeen, said: "This work is part of an ongoing project to understand how cigarette smoking by pregnant mothers has harmful effects on the developing foetus. These findings shed light on fundamental differences in damage between male and female foetuses."

Scientists have developed a novel way to study the effects of maternal smoking on liver tissue using embryonic stem cells, revealing that the potent cocktail of chemicals in cigarettes is particularly harmful to developing liver cells and affects male and female foetuses differently. The study, led by the University of Edinburgh, used pluripotent stem cells to build foetal liver tissue and exposed it to harmful chemicals found in cigarettes, including specific substances known to circulate in foetuses when mothers smoke. The findings showed that the chemical cocktail harmed foetal liver health more than individual components, with male tissue showing liver scarring and female tissue showing more damage to cell metabolism. The study provides important information about the long-term effects of maternal cigarette smoking and sheds light on fundamental differences in damage between male and female foetuses.

Abstract

The liver is a dynamic organ which is both multifunctional and highly regenerative. A major role of the liver is to process both endo- and xenobiotics.
Cigarettes are an example of a legal and widely used drug which can cause major health problems for adults and constitutes a particular risk to the foetus if the mother smokes during pregnancy. Cigarette smoke contains a complex mixture of thousands of different xenobiotics, including nicotine and polycyclic aromatic hydrocarbons. These affect foetal development in a sex-specific manner, inducing sex-dependent molecular responses in different organs. To date, the effect of maternal smoking on the foetal liver has been studied in vitro using cell lines, primary tissue and animal models. While these models have proven to be useful, poor cell phenotype, tissue scarcity, batch-to-batch variation and species differences have led to difficulties in data extrapolation toward human development. Therefore, in this study we have employed hepatoblasts, derived from pluripotent stem cells, to model the effects of xenobiotics from cigarette smoke on human hepatocyte development. Highly pure hepatocyte populations (>90%) were produced in vitro and exposed to factors present in cigarette smoke. Analysis of ATP levels revealed that, independent of the sex, the majority of smoking derivatives tested individually did not deplete ATP levels below 50%. However, following exposure to a cocktail of smoking derivatives, ATP production fell below 50% in a sex-dependent manner. This was paralleled by a loss of metabolic activity and secretory ability in both female and male hepatocytes. Interestingly, cell depletion was less pronounced in female hepatocytes, whereas caspase activation was ~twofold greater, indicating sex differences in cell death upon exposure to the smoking derivatives tested.

Introduction

The liver is the body's second largest organ, playing a major role in the processing of xenotoxicants, which include alcohol, drugs and environmental pollutants. Cigarettes are an example of a widely used drug which can cause major health problems for adults, and constitute a particular risk to the developing foetus. Cigarettes contain a complex mixture of over 7000 different compounds (Rodgman et al. 2009), which include nicotine and the polycyclic aromatic hydrocarbons (PAHs). Nicotine is primarily metabolised by cytochrome P450 2A6 (CYP2A6) in the liver (Benowitz et al. 1994) into several metabolites, of which cotinine represents approximately 70–80% (Messina et al. 1997). PAHs are incomplete combustion products first identified as carcinogenic constituents of coal tar (Phillips 1983) and charcoal-grilled foods (Phillips 1999; Boström et al. 2002; Rodgman et al. 2009). PAHs are also detected in placental tissues and umbilical cord blood of smokers (Perera et al. 2005a; Al-Saleh et al. 2013), reaching the foetal liver from the maternal circulation. This exposes the developing foetus to harmful agents and leads to corresponding changes in gene expression (O'Shaughnessy et al. 2011). In addition to toxicant exposure, smoking also disrupts the foetal oxygen and carbon monoxide balance, which can cause harmful effects including impaired growth, premature birth, hormonal imbalances, increased predisposition to metabolic syndrome, liver disease and even death (Chen et al. 2006; Harvey et al. 2007; Rogers 2009; Mamsen et al. 2010; Fowler et al. 2011, 2014; Hackshaw et al. 2011; Högberg et al. 2012; Behl et al. 2013; Filis et al. 2015). Moreover, it has been reported that maternal smoking affects the foetus in a sex-specific manner.
For example, male offspring possess a higher risk of developing conduct disorders, whereas female offspring are predisposed to developing weight disorders and drug dependence (Weissman et al. 1999; Chen et al. 2006). In addition, maternal smoking induces sex-dependent molecular responses in the reproductive organs and the liver of the developing foetus (Fowler et al. 2008; O'Shaughnessy et al. 2011; Drake et al. 2015). To date, the effect of maternal smoking on the foetal liver has been studied in vitro using cell lines, primary tissue and animal models (Neumann 1986; Rao et al. 1988; Cho et al. 2006; Choi et al. 2015; Baxter 2009; Sanchez et al. 2011; Van Kesteren et al. 2013; Williams et al. 2014). While these models have proven to be informative, the scarcity of human tissue, the rapid loss of cell phenotype, batch-to-batch variation and species differences have led to difficulties in data extrapolation toward humans. Moreover, the mature nature of primary cells used in vitro impairs the study of foetal development 'in the dish'. In contrast to the above sources, human hepatocytes derived from pluripotent stem cells have been proven to represent a reliable human model to study liver biology in detail (Szkolnicka et al. 2014, 2016; Villarin et al. 2015). To study the disruptive effects of smoking on human development, we have employed this renewable cell model. Pluripotent stem cell-derived hepatoblasts were produced at scale from male and female cell lines. Following this, hepatocyte differentiation was performed in the presence of cotinine and PAHs, and this led to sex-specific changes in cell biology.

Methods and materials

Cell culture and differentiation

The identities of H9 and Man12 human embryonic stem cells (hESCs) were confirmed using short tandem repeat typing. hESCs were cultured and differentiated as previously described (Cameron et al. 2015). Maintenance of hESCs was performed on pre-coated laminin 521 (Biolaminin) in mTeSR1 (STEMCELL Technologies) in a humidified 37 °C, 5% CO2 incubator. For differentiation, hESCs were plated onto a pre-coated blend of laminins 521 and 111 (at a 1:3 ratio). Differentiation was initiated at 40% confluence by replacing serum-free medium with endoderm differentiation medium: RPMI 1640 containing 1× B27 (Life Technologies), 100 ng/mL Activin A (PeproTech) and 50 ng/mL Wnt3a (R&D Systems). The medium was changed every 24 h for 72 h. On day 3, endoderm differentiation medium was replaced with hepatoblast differentiation medium, and this was renewed every second day for a further 5 days. The medium consisted of knockout (KO)-DMEM (Life Technologies), Serum replacement (Life Technologies), 0.5% Glutamax (Life Technologies), 1% non-essential amino acids (Life Technologies), 0.2% β-mercaptoethanol (Life Technologies) and 1% DMSO (Sigma). On day 8, differentiating cells were cultured in the hepatocyte maturation medium HepatoZYME (Life Technologies) containing 1% Glutamax (Life Technologies), supplemented with 10 ng/mL hepatocyte growth factor (PeproTech) and 20 ng/mL oncostatin M (PeproTech). On day 10, maturation medium was replaced with HepatoZYME supplemented with or without smoking derivatives (Sigma-Aldrich) for a further 8 days, with media replaced every 48 h.

Immunofluorescence

Cell cultures were fixed in 100% ice-cold methanol at −20 °C for 30 min. Subsequently, fixed cells were washed twice with PBS at room temperature.
Cell monolayers were blocked with 0.1% PBS-Tween containing 10% BSA for 1 h, and the monolayers were incubated with primary antibodies diluted in PBS-0.1% Tween/1% BSA at 4 °C overnight (Supplementary Table 1). The following day, the primary antibody was removed, and the fixed monolayers were washed three times with PBS-0.1% Tween/1% BSA. Following this, the cells were incubated with the appropriate secondary antibody diluted in PBS/0.1% Tween/1% BSA for 1 h at room temperature and washed three times with PBS. Cultures were then mounted with PermaFluor aqueous mounting medium (Thermo Scientific) and counterstained with NucBlue Hoechst 33342 (Sigma-Aldrich). The cells were imaged with an Axio Observer Z1 microscope with LD PlanNeoFluar objective lenses (Carl Zeiss). This microscope was coupled to a Zeiss AxioCamMR3 camera used for image acquisition. The images were captured using Zeiss Axiovision SE 64 Rel 4.8 and analysed using Zeiss Axiovision software version 4.9.1.0. The percentage of positive cells (±standard deviation) was estimated from at least eight random fields of view.

Albumin and α-fetoprotein ELISA

hESC-derived hepatocyte protein secretion was measured by ELISA. Alpha-fetoprotein and albumin production was quantified using commercially available kits from Alpha Diagnostic International. Media were collected at day 18 of the differentiation process. Samples were run in duplicate and measured on a FLUOStar Omega multi-mode microplate reader (BMG Labtech). Protein production was expressed either as nanograms or micrograms of protein per millilitre of medium per 24 h per milligram of cellular protein [determined by the bicinchoninic acid (BCA) assay, Pierce], or as a percentage of secretory capacity normalised to the vehicle control. Levels of significance were measured by Student's t test. The experiments are representative of five biological replicates.

Cytochrome P450 assays

CYP3A and CYP1A2 activities were measured in hepatocytes at day 18 using pGlo technology (Promega) in accordance with the manufacturer's instructions. CYP activity was expressed either as relative light units (RLUs) per millilitre of medium per milligram of protein (determined by the BCA assay, Pierce), or as a percentage of CYP activity normalised to the vehicle control. Levels of significance were measured by Student's t test. The experiments are representative of five biological replicates.

Cell health assays

Cell health was assessed by measuring ATP production and Caspase 3/7 activity at day 18, employing pGlo technology (Promega) in accordance with the manufacturer's instructions. Levels of both markers were expressed as percentages of relative light units (RLUs) per millilitre of medium and normalised to the vehicle control. Levels of significance were measured by Student's t test. The experiments are representative of five biological replicates.

Detection of smoking derivatives in foetuses from smokers and non-smokers

Cotinine was measured using LC–MS/MS methodology as follows. Cotinine and the internal standard ²H₃-cotinine were dissolved in methanol and diluted in pooled human plasma to give calibration standards in the range 1.5–500 ng/mL. Quality control samples were prepared in pooled human plasma at 2.5, 250 and 450 ng/mL cotinine. To a 10 µL aliquot of plasma, 10 µL (2 ng) of internal standard (IS) and 250 µL of 0.1% formic acid in water were added. After mixing, the samples were kept on ice for 15 min.
Following centrifugation at 14,800 rpm for 15 s, the plasma samples were applied to BondElut Plexa PCX cartridges (30 mg/1 mL, Crawford Scientific, UK) that had been pre-conditioned and equilibrated using 0.5 mL of methanol and 0.5 mL of 0.1% formic acid in water. The cartridges were washed with 0.5 mL of 0.1% formic acid in water followed by 2 × 0.5 mL of 95/5 methanol/0.1% formic acid in water, and cotinine and the IS were eluted with 0.5 mL of 95/5 methanol/ammonium hydroxide. The eluate was evaporated to dryness under nitrogen at room temperature and the residue re-suspended in 60 µL of 50/50/0.1 water/methanol/formic acid. Following centrifugation at 14,800 rpm for 5 min, 5 µL of the supernatant was injected onto the chromatograph. Chromatography was performed on a Thermo Surveyor (Thermo Scientific, UK) system using a 150 × 2.1 mm ACE 3µ C18-AR column (Hichrom, UK) maintained at 50 °C. The mobile phase consisted of 0.1% ammonium acetate (A) and methanol (B), and elution was achieved with a linear gradient over 3 min from 10 to 100% B with a hold of 1 min at 100% B. The flow rate was 200 µL/min and the samples were maintained at 4 °C in the autosampler. Total run time was 8 min. A Thermo TSQ Quantum triple quadrupole mass spectrometer was used in positive electrospray ionisation mode for the detection of cotinine. Quantification was performed in single reaction monitoring (SRM) scan mode using the following transitions: cotinine m/z 177.0 → 80.1 and ²H₃-cotinine m/z 180.0 → 80.1. Flow injection analysis was used to optimise the MS/MS conditions as follows: spray voltage 4000 V, sheath gas pressure 60, auxiliary gas pressure 0, capillary temperature 375 °C, skimmer offset −10 V, collision pressure 1.7 mTorr and collision energy 25 V. Instrument control, peak integration and quantification were performed using Thermo Xcalibur software (v. 2.0.7 SP1). Weighted least squares linear regression with a weighting factor of 1/x was used to quantify the cotinine concentration in unknown samples by comparing peak area ratios (analyte/IS) with those obtained from a multi-level calibration standard curve (an illustrative sketch of this weighted regression follows at the end of this subsection). PAHs in human foetal livers were quantified in Fowler et al. (2014) and the results presented herein are in a different format for display purposes. The collection of foetal material (Fowler et al. 2008) was approved by the National Health Service Grampian Research Ethics Committees (REC 04/S0802/21).

Cell count

Following compound exposure, cells were washed with 100 μL/well of 1×HBSS (Invitrogen) and fixed with 50 μL of 4% (wt/vol) paraformaldehyde (PFA) for 20 min at room temperature. Cells were permeabilized with 50 μL/well of 0.1% (vol/vol) Triton X-100 (Sigma-Aldrich) for 15 min, followed by a wash with 100 μL/well of 1×HBSS and an incubation with 50 μL/well of a solution containing 2 drops/mL of NucBlue Live ReadyProbes® Reagent (Molecular Probes) in 1×HBSS for 5 min at room temperature. Following incubation, a final wash of 100 μL/well of 1×HBSS was performed. Fluorescent images were acquired using the Operetta high content analysis system with the Harmony High-Content Imaging and Analysis Software (PerkinElmer). Twenty fields of view were acquired across the well to obtain an average representation of the well. Nuclei were quantified using the Acapella Image Analysis software (PerkinElmer, version 4.1.1.118082™). The experiments are representative of five biological replicates.
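The weighted calibration step flagged above can be sketched as follows. The concentration-response pairs are hypothetical placeholders; the snippet only illustrates a 1/x-weighted linear fit of peak-area ratios and the back-calculation of an unknown, not the validated bioanalytical method.

```python
import numpy as np

# Hypothetical calibration standards: cotinine (ng/mL) vs peak-area ratio (analyte/IS)
conc  = np.array([1.5, 5.0, 15.0, 50.0, 150.0, 500.0])
ratio = np.array([0.012, 0.041, 0.118, 0.402, 1.19, 3.95])

# Weighted least squares with weighting factor 1/x:
# minimize sum_i (1/x_i) * (y_i - slope*x_i - intercept)^2 via the normal equations.
W = np.diag(1.0 / conc)
X = np.column_stack([conc, np.ones_like(conc)])
slope, intercept = np.linalg.solve(X.T @ W @ X, X.T @ W @ ratio)

def quantify(peak_area_ratio):
    """Back-calculate the concentration of an unknown from its analyte/IS ratio."""
    return (peak_area_ratio - intercept) / slope

print(f"slope={slope:.5f}, intercept={intercept:.5f}")
print(quantify(0.85))  # ng/mL for a hypothetical unknown sample
```

The 1/x weighting gives the low-concentration standards more influence on the fit, which is why it is commonly chosen for calibration ranges spanning two to three orders of magnitude.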
Statistical analysis

Unless indicated, all data were obtained from at least five biological replicates and are presented as mean ± standard deviation (SD). Differences between control and treatment groups were tested by Student's t test, where P < 0.05 is denoted as *, P < 0.01 is denoted as ** and P < 0.001 is denoted as ***.

Results

Measuring foetal exposure to smoking-derived contaminants

The active components of cigarette smoke are well known (Rodgman et al. 2009) and for the purposes of these experiments we focused on cotinine, the major bioactive metabolite of nicotine, and polycyclic aromatic hydrocarbons (PAHs). Both cotinine and PAHs are significantly increased in the foetus by maternal smoking (Fig. 1). Following previous in vivo experimentation, we therefore hypothesised that these compounds represent a threat to the normal development of the foetal liver, and we wished to model this in vitro using a reliable developmental model. To test this, we generated hepatoblasts and hepatocytes from male and female human embryonic stem cells (hESCs) using established methodology (Cameron et al. 2015).

Fig. 1 Concentrations of cigarette smoke derivatives in the second trimester human foetus. a Polycyclic aromatic hydrocarbons (PAHs) in livers from 10 control and 12 smoke-exposed human foetuses. PAHs in human foetal livers were quantified in Fowler et al. (2014) and the results presented here are in a different format for display purposes. b The predominant bioactive metabolite of nicotine, cotinine, in plasma from 16 control and 22 smoke-exposed foetuses.

Generation of hepatoblasts and hepatocytes from male and female human embryonic stem cells

Both hepatoblasts and hepatocytes were produced at scale from hESCs using a stagewise process (Cameron et al. 2015) to generate highly pure populations (Fig. 2a). During cellular differentiation, cells from both sexes underwent similar morphological changes, culminating in typical hexagonal hepatocyte morphology (Fig. 2b). Further characterisation of the hepatocyte populations demonstrated that albumin was expressed in 93% of female and 95% of male hepatocytes. In these populations, HNF4α was detected in 87% and 85% of female and male hepatocytes, respectively. To determine if hepatocytes were polarised, we examined E-Cadherin and Zona Occludens-1 (ZO-1) expression. E-Cadherin was expressed in 97% and 98% of female and male cells, whereas ZO-1 expression was detectable in 99% and 98% of female and male cells, respectively (Fig. 2c).

Fig. 2 Characterisation of stem cell-derived hepatocytes. a Male and female human embryonic stem cells (hESC) were differentiated to hepatocytes employing a stepwise hepatocyte differentiation approach. b During differentiation, cells adopted different morphologies at each stage: definitive endoderm, hepatoblasts and hepatocytes. c Immunofluorescence was employed to examine the expression of the hepatocyte proteins HNF4α (red) and albumin (green), and the epithelial markers E-cadherin (green) and Zona Occludens-1 (green). Morphological images were taken at ×10 magnification; scale bar represents 200 μm. For each condition, eight random fields of view, containing at least 400 cells, were counted.

Generation of functional hepatocytes from male and female human embryonic stem cells

Following basic characterisation of morphology and gene expression, hepatocyte metabolic and secretory capacity was studied using pGlo and ELISA technologies.
Hepatocytes from both sexes demonstrated appreciable levels of CYP1A2 and CYP3A activity (Fig. 3a, b). Cytochrome P450 activity was greater in female hepatocytes, in line with a previous study (Villarin et al. 2015). Following this, albumin (ALB) and alpha-fetoprotein (AFP) secretion were measured. Female hepatocytes secreted 4.8 μg/mL/mg protein of ALB and 10.9 μg/mL/mg protein of AFP (Fig. 3c, d), whereas male hepatocytes secreted 2.1 μg/mL/mg protein of ALB and 2 μg/mL/mg protein of AFP (Fig. 3c, d). These experiments demonstrated that male and female hepatocytes displayed dependable performance and were suitable for the subsequent modelling experiments.

Fig. 3 Stem cell-derived hepatocytes display hepatocyte functions. Male and female hESC-derived hepatocytes were functionally characterised employing: a, b pGlo technology to study cytochrome P450 CYP3A and CYP1A2 function; c, d ELISA to measure the secretion of the hepatocyte proteins albumin and alpha-fetoprotein. Levels of significance were measured by Student's t test. The experiments are representative of five biological replicates.

Determining the sex differences in hepatocyte biology following exposure to cotinine and PAHs

Hepatocyte specification and maturation was performed in male and female hESC lines in the presence or absence of smoking components. Cotinine was used at a concentration range of 1 to 300 nM, chrysene at 10 nM to 50 μM, fluorene at 10 nM to 100 μM, naphthalene at 10 nM to 1 mM, and phenanthrene at 10 nM to 500 μM. Following 8 days of exposure, cell health was determined by ATP production and caspase activity. Following this, cell function was determined by measuring CYP3A and CYP1A2 activity. In order to measure the effectiveness of each component of cigarette smoke in inhibiting activity, we used the half maximal inhibitory concentration (IC50; see the illustrative curve-fitting sketch below). Analysis of ATP levels revealed that, independent of the sex, the majority of smoking derivatives tested did not deplete ATP levels below 50%. The exception was phenanthrene, which reduced male ATP levels by 50% at 78 μM. Cell health was also studied by measuring caspase 3/7 activation in hepatocytes (Table 1; Supplementary Figs. 1 and 2). In these experiments we observed a sex-dependent response in caspase activation. While male and female hepatocytes were equally sensitive to cotinine, male hepatocytes were more sensitive to chrysene, fluorene and phenanthrene, whereas female hepatocytes were more sensitive to naphthalene. Subsequently, we studied the IC50 for cytochrome P450 function following exposure to smoking derivatives (Table 1; Supplementary Figs. 1 and 2). Loss of CYP1A2 function was more pronounced in female hepatocytes following exposure to cotinine, chrysene and phenanthrene, whereas male hepatocytes were more sensitive to fluorene. Of note, naphthalene did not reduce cytochrome P450 activity but, instead, induced CYP1A2 activity in male and female hepatocytes (Table 1). Analysis of CYP3A function in response to the smoking derivatives also showed sex differences.
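As referenced above, IC50 values such as those summarised in Table 1 are typically derived by fitting a sigmoidal concentration-response model to activity data normalised to the vehicle control. The sketch below fits a four-parameter logistic curve with SciPy to hypothetical data; the source does not state which fitting software was used, so this is a generic illustration rather than the authors' procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic curve: activity declines from `top` to `bottom`."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

# Hypothetical CYP3A activity (% of vehicle control) over a phenanthrene dose range (uM)
dose     = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 500.0])
activity = np.array([99.0, 97.0, 88.0, 61.0, 30.0, 12.0])

params, _ = curve_fit(four_pl, dose, activity, p0=[0.0, 100.0, 10.0, 1.0], maxfev=10000)
print(f"estimated IC50 = {params[2]:.1f} uM")  # concentration at half-maximal inhibition
```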
Both male and female hepatocytes responded in a similar fashion to cotinine, whereas loss of male CYP3A function was less sensitive to chrysene and more sensitive to fluorene, naphthalene and phenanthrene than in female hepatocytes.

Table 1 IC50 values of the tested compounds on cell health and cell metabolism.

Determining the sex differences in hepatocyte function following exposure to smoking derivatives

CYP3A is a major phase I enzyme family involved in the third trimester of human foetal development. To study the effect of a cocktail composed of cotinine and PAHs, we incubated hepatocytes with these additives at the IC50 values for CYP3A calculated in Table 1. Phase contrast images of the cells after incubation with the drug cocktail revealed morphological deterioration in both sexes (Fig. 4a). Descriptively, female hepatocytes lost hallmark hepatocyte features and displayed more fibroblast-like structures, whereas male hepatocytes exhibited a more rounded morphology. Cell function in both hepatocyte populations was also studied in detail. CYP3A activity was reduced by 54% in female hepatocytes and by 38% in male hepatocytes following exposure (Fig. 4b). This was in contrast to CYP1A2 activity, which was reduced to similar extents in female and male hepatocytes (Fig. 4c). We also studied the effect that compound incubation had on protein secretion. In accordance with cytochrome P450 function, secretion of both ALB and AFP was reduced in male and female hepatocytes following exposure to the smoking derivatives (Fig. 4d, e).

Fig. 4 Cell morphology and metabolic activity following smoking component exposure. a Phase contrast images reveal a deterioration in cell morphology in the presence of the mixture of drugs for 8 days compared with cells in the presence of the vehicle control. b, c pGlo technology was employed to measure cytochrome P450 activity. d, e ELISA was employed to study the secretion of albumin and alpha-fetoprotein. The images were taken at ×10 magnification; scale bar represents 200 μm. Levels of significance were measured by Student's t test. The experiments are representative of five biological replicates.

Determining the sex differences in cell biology following exposure to smoking derivatives

In addition to cell function, cell health was studied by measuring ATP levels and caspase 3/7 activity. ATP levels were reduced to a greater extent in female hepatocytes than in male hepatocytes (Fig. 5a). In contrast, caspase activation was greater in female hepatocytes (~twofold increase) than in male hepatocytes (~1.2-fold) (Fig. 5b). Following these experiments, we examined the number of hepatocytes that remained in culture post exposure. Interestingly, male hepatocytes were depleted by 40%, whereas female hepatocytes were depleted by 30%. Taken together, these data indicate that male hepatocytes were more likely to detach from the matrix and undergo necrosis, whereas female hepatocytes underwent dedifferentiation and apoptosis following exposure.

Fig. 5 Measurement of cell health in female and male hepatocytes following exposure to the cocktail for 8 days. a Cell viability was studied by measuring the levels of ATP. b Cell apoptosis was measured using Caspase 3/7 activity.
c Cell paint technology was employed to analyse the number of cells attached to the matrix.

Discussion

The ability to study the effects of maternal smoking on the unborn foetus has traditionally relied on material from elective terminations, animal models and a variety of cell lines. While these approaches have generated highly informative datasets, they suffer from some significant drawbacks, which include tissue scarcity, individual variability and loss of cell phenotype. Therefore, to study the effects of maternal smoking on human foetal liver development, renewable cell-based models are required which can be delivered from defined genetic backgrounds. Encouragingly, pluripotent stem cell-based liver models have proven to be effective in modelling human liver exposure to drugs in the past (Medine et al. 2013; Villarin et al. 2015; Szkolnicka et al. 2014, 2016). In these studies we have employed pluripotent stem cells to produce human hepatoblasts at scale and screen for developmental perturbation over an eight-day time course. Cigarettes contain a complex mixture of chemicals. These compounds pose a risk to foetal development, including increased risk of intrauterine growth restriction, small-for-gestational age and preterm delivery, amongst others (Perera et al. 1998, 2005a, b; Dejmek et al. 2000). To identify the major players for our modelling experiments, we employed gas chromatography and mass spectrometry on foetal plasma and livers to determine specific cigarette smoke components. From these experiments we identified cotinine and four PAHs present in the foetal circulation of mothers who smoked. We used this information to study the effects of those derivatives on hepatocyte differentiation from male and female hESCs. On the whole, exposure to the compounds singly did not have a detrimental effect on hepatocyte biology, but in combination they displayed a more marked effect. Following exposure, female hepatocytes displayed greater cell numbers, higher caspase 3/7 activity and lower ATP levels than male hepatocytes. This suggested that female hepatocytes were likely undergoing apoptosis during cell dedifferentiation, whereas male hepatocytes appeared to be necrotic as they detached from the extracellular matrix. Whether these observations are a consequence of different levels of metabolic enzyme function in the hepatocyte populations, or whether the effects manifest due to other sex-dependent processes, will be the subject of future experimentation. The sex differences reported here are consistent with previous studies of the effects of maternal smoking on foetal development. Recently, Filis et al. demonstrated that in male foetuses maternal smoking affected pathways regulating liver fibrosis and cirrhosis, whereas in female foetuses glucose metabolism was more affected (Filis et al. 2015). Sex-specific responses to maternal smoking are also reflected in the balance of foetal endocrine signals (O'Shaughnessy et al. 2007, 2011) and in the development of other organs, including the gonads (O'Shaughnessy et al. 2007; Fowler et al. 2014; Drake et al. 2015) and the placenta (Gabory et al. 2013). While sexual dimorphism exists in the expression of these pathways, there are also studies which indicate sex-independent responses leading to disease in the adult (Allina et al. 2011).
In summary, our approach has shown that pluripotent stem cell-derived hepatoblasts and hepatocytes represent a useful tool to model foetal liver biology 'in the dish', providing valuable information on sex differences that occur following exposure to components of cigarette smoke.

More information: Baltasar Lucendo-Villarin et al, Modelling foetal exposure to maternal smoking using hepatoblasts from pluripotent stem cells, Archives of Toxicology (2017). DOI: 10.1007/s00204-017-1983-0

Paper: http://dx.doi.org/10.1007/s00204-017-1983-0
News: https://medicalxpress.com/news/2017-05-cigarette-unborn-children-revealed-stem.html
10.1038/s41467-022-28279-8

Researchers restore function in a gene that can suppress liver cancer and enhance immunotherapy

A team of researchers from Massachusetts General Hospital (MGH) and Brigham and Women's Hospital (BWH) has reprogrammed the tumor microenvironment of liver cancer using mRNA nanoparticles. This technology, similar to the one used in COVID-19 vaccines, restored the function of the p53 master regulator gene, a tumor suppressor mutated not just in liver cancer but also in other cancer types. When used in combination with immune checkpoint blockade (ICB), the p53 mRNA nanoparticle approach not only suppressed tumor growth but also significantly increased antitumor immune responses in hepatocellular carcinoma (HCC) laboratory models. The results of the study were published in Nature Communications.

"The reprogramming of the cellular and molecular components of the tumor microenvironment could be a transformative approach for treating HCC and other cancers," says co-senior author Jinjun Shi, PhD, of the Center for Nanomedicine at BWH, who developed the platform with MGH liver cancer biologist and co-senior author Dan G. Duda, DMD, PhD. "By using this new approach, we're targeting specific pathways in tumor cells with mRNA nanoparticles. These tiny particles provide the cells with the instructions to build proteins, which, in the case of HCC, delayed tumor growth and rendered the tumor more responsive to treatment with immunotherapy."

HCC is the most prevalent form of liver cancer, characterized by a high mortality rate and a dismal prognosis for patients. Immune checkpoint blockers, a revolutionary new class of drugs that enable the body's immune system to recognize and attack cancer cells, have shown efficacy in treating HCC, but most patients do not benefit. To overcome this resistance, multiple strategies are being developed to improve ICBs by combining them with other existing therapies, such as anti-VEGF drugs and radiotherapy. However, even these approaches are expected to benefit only a small number of patients, creating an urgent need for new combination therapies.

Encouraged by the success of mRNA in COVID-19 vaccines, Shi decided to apply the technology (with certain modifications) to targeting cancer cells. He teamed up with Duda, whose MGH lab had already created sophisticated animal models to analyze the microenvironment of liver tumors in response to immunotherapy. They developed and optimized an mRNA nanoparticle strategy to restore the lost function of p53, a tumor suppressor gene whose function is lost in more than one-third of HCC cases. In doing so, they uncovered evidence that p53 regulates the tumor microenvironment by modulating the interaction of cancer cells with immune cells during ICB therapy.

"In our previous work we had developed nanoparticles to target CXCR4, a chemokine receptor expressed by liver cancer cells, and selectively co-deliver drugs such as kinase inhibitors," explains Duda. "We've now adapted this platform to use CXCR4 as a kind of ZIP code to selectively target the tumor with nanoparticles encapsulating therapeutic mRNAs. When we combined this nanomedicine with anti-programmed death receptor 1 (PD-1) antibodies, a standard immunotherapy for HCC patients, it induced global reprogramming of the tumor microenvironment and tumor response by restoring p53 expression."

The next step for the team is to take their research from animal models to patients in a clinical trial.
"Scientists have struggled for decades to find an effective way to target the tumor suppressor pathways," emphasizes Shi. "Our proof-of-concept study is an exciting development that clearly shows that p53 mRNA nanoparticles in combination with ICB not only works, but also could make a big difference by reversing immunosuppression in HCC and potentially other cancers." Shi is an associate professor of Anesthesia at Harvard Medical School (HMS). Duda is associate professor of Radiation Oncology at HMS and director of translational research in GI radiation oncology at MGH. Yuling Xiao, Ph.D., and Jiang Chen, MD, Ph.D., are the lead authors of the study and postdoctoral fellows at HMS. | Researchers from Massachusetts General Hospital and Brigham and Women's Hospital have developed a new approach to treating liver cancer by reprogramming the tumor microenvironment using mRNA nanoparticles. The technology, similar to that used in COVID-19 vaccines, restores the function of the p53 master regulator gene, a tumor suppressor mutated in liver and other cancers. When combined with immune checkpoint blockade, the p53 mRNA nanoparticle approach induced suppression of tumor growth and significantly increased antitumor immune responses in laboratory models of hepatocellular carcinoma. The study, published in Nature Communications, suggests that this approach could be a transformative treatment for liver cancer and potentially other cancers, and the researchers are planning to move forward with a clinical trial to test the therapy in patients. | None | Abstract Immunotherapy with immune checkpoint blockade (ICB) has shown limited benefits in hepatocellular carcinoma (HCC) and other cancers, mediated in part by the immunosuppressive tumor microenvironment (TME). As p53 loss of function may play a role in immunosuppression, we herein examine the effects of restoring p53 expression on the immune TME and ICB efficacy. We develop and optimize a CXCR4-targeted mRNA nanoparticle platform to effectively induce p53 expression in HCC models. Using p53 -null orthotopic and ectopic models of murine HCC, we find that combining CXCR4-targeted p53 mRNA nanoparticles with anti-PD-1 therapy effectively induces global reprogramming of cellular and molecular components of the immune TME. This effect results in improved anti-tumor effects compared to anti-PD-1 therapy or therapeutic p53 expression alone. Thus, our findings demonstrate the reversal of immunosuppression in HCC by a p53 mRNA nanomedicine when combined with ICB and support the implementation of this strategy for cancer treatment. Introduction Loss of function in tumor suppressors is a driving force in tumorigenesis and the development of therapeutic resistance. The p53 tumor suppressor gene, a master regulator of cell cycle arrest, apoptosis, senescence, and other cellular pathways 1 , is frequently mutated in a myriad of human cancers, including hepatocellular carcinoma (HCC). Beyond cell autonomous tumor-suppressive effects, increasing evidence indicates that p53 protein can also regulate the immune tumor microenvironment (TME) by modulating interactions of tumor cells with immune cells 2 , 3 , 4 , 5 , 6 . 
For example, p53 has been shown to induce anti-tumor immune responses via transcriptional regulation of genes encoding key cytokines (e.g., TNF-α, IL-12, and IL-15) 7,8,9, chemokines (e.g., CCL2, CCL20, and CCL28, and CXCL1, CXCL2, CXCL3, CXCL5, and CXCL8) 10,11, and pathogen-recognition receptors (e.g., Toll-like receptors, TLRs) 12,13, all of which result in the recruitment and activation of immune cells. Genetic restoration of p53 can induce the activation of myeloid cells to promote tumor antigen-specific adaptive immunity 14 and upregulate NKG2D ligands on senescent tumor cells for the activation of natural killer (NK) cells 15. p53 may also play an important role in suppressing pro-tumorigenic M2-type tumor-associated macrophage (TAM) polarization, thus facilitating anti-tumor immunity 16,17. Moreover, recent studies suggest that immunogenic cancer cell death induced by cytotoxic agents may be associated with activation of the p53 pathway 18,19. Despite these advances in understanding the role of p53, developing therapeutic approaches that directly and effectively address the loss of p53 function, and its role in immunosuppression and immunotherapy resistance in HCC, remains an elusive goal.

HCC is the most prevalent liver cancer, with a high mortality rate and dismal prognosis 20,21,22. Enhancing anti-tumor immunity using immune checkpoint blockade (ICB), including anti-CTLA-4, anti-PD-1 (aPD1), and anti-PD-L1 (aPD-L1) antibodies, has demonstrated the potential to transform the therapeutic landscape of many cancers, including HCC. However, responses are seen in only a limited fraction of patients, and the majority of cancer patients do not benefit from the treatment. This may be mediated in part by insufficient tumor immunogenicity and the immunosuppressive TME. Different strategies are actively being developed to improve ICB therapy in HCC, with a major focus on combining ICB with other existing therapies (such as anti-VEGF therapy), which could significantly increase anti-tumor immunity. Such combinations have been shown to improve anti-tumor efficacy in animal models and increase the survival of patients in clinical trials 23,24,25,26. Even so, a large majority of HCC patients still show no response, and new combinatorial strategies are urgently needed.

In this work, we address the unmet need to implement p53 therapy and potentiate ICB response in HCC. We report a targeted mRNA nanoparticle (NP) platform designed to induce p53 expression and reprogram the TME, which we test in proof-of-concept studies in combination with ICB in p53-null murine HCC models. We optimize the p53 mRNA NP platform for HCC targeting, evaluate its therapeutic efficacy against p53-null HCCs growing in orthotopic and ectopic sites (alone or with aPD1 antibody), and study the resulting changes in the TME. This combinatorial strategy safely and effectively inhibits tumor growth in vivo, while prolonging survival and reducing ascites and metastases. Thus, combining p53 mRNA nanotherapy with ICB immunotherapy could become a transformative approach for the treatment of HCC and potentially other cancers involving p53 deficiency.
Results

Engineering and optimization of CXCR4-targeted mRNA NPs

We previously developed a robust self-assembly strategy for formulating lipid-polymer hybrid NPs for mRNA delivery 27,28, composed of the ionizable lipid-like compound G0-C14 for mRNA complexation, a biocompatible poly(lactic-co-glycolic acid) (PLGA) polymer that forms a stable NP core to carry the G0-C14/mRNA complexes, and a lipid-poly(ethylene glycol) (lipid-PEG) layer for stability. Here we engineered the hybrid NPs (Fig. 1a) for selective HCC targeting and high mRNA transfection efficiency. To improve HCC targeting, we modified the NPs with the targeting peptide CTCE-9908 (KGVSLSYRCRYSLSVGK; referred to as CTCE), which is specific to CXCR4, a chemokine receptor that is upregulated in cancer cells and is a validated selective target in HCC 29,30. For comparison, we also prepared non-targeted NPs using a scrambled peptide (LYSVKRSGCGSRKVSYL; referred to as SCP). The CTCE or SCP peptide was first conjugated to 1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-[maleimide(polyethylene glycol)-3000] (DSPE-PEG-Mal) by the thiol-maleimide Michael addition click reaction, with a high chemical yield (≥82%). The chemical structures of DSPE-PEG-CTCE and DSPE-PEG-SCP were confirmed by 1H-NMR analysis (Supplementary Fig. 1).

To optimize the targeting efficacy of the mRNA NPs, we examined the effect of CTCE peptide surface density on uptake by RIL-175 murine HCC cells. As shown in Fig. 1b, CTCE-conjugated enhanced green fluorescent protein (EGFP) mRNA NPs (referred to herein as CTCE-EGFP NPs) showed significantly greater cellular uptake than non-targeted SCP-EGFP mRNA NPs (referred to as SCP-EGFP NPs), owing to the active targeting of the CTCE peptide towards HCC cells. We found that 5% or 6% CTCE peptide provided maximum cellular uptake in RIL-175 cells while maintaining NP stability. Uptake of the 5% CTCE-EGFP NPs was >15-fold higher than that of the 5% SCP-EGFP NPs, which was also confirmed by confocal fluorescence microscopy in RIL-175 cells (Fig. 1c). The 5% peptide density was selected for further analyses.

Fig. 1: CXCR4-targeted nanoparticles (NPs) for p53 mRNA delivery to hepatocellular carcinoma (HCC). a Schematic of CXCR4-targeted p53 mRNA NPs and the combinatorial strategy using anti-PD-1 therapy to reprogram the immunosuppressive tumor microenvironment for effective treatment of p53-deficient HCC. The combination of CTCE-p53 NPs and PD-1 blockade effectively and globally reprogrammed the immune TME of HCC, as indicated by activation of CD8+ T cells and NK cells, favorable polarization of TAMs towards the anti-tumor phenotype, and increased expression of anti-tumor cytokines. b Flow cytometric analysis of cellular uptake of CTCE-EGFP mRNA NPs with different CTCE peptide densities versus SCP-EGFP mRNA NPs with 5% SCP density in RIL-175 HCC cells (n = 3 cell samples/group). c Confocal fluorescence imaging of RIL-175 cell uptake of SCP-Cy5-Luciferase (Luc) mRNA NPs versus CTCE-Cy5-Luc mRNA NPs after 4 h of treatment. Scale bar: 100 µm. d Effect of different cationic lipid-like materials G0-Cn on the transfection efficacy of Luc-mRNA NPs (mRNA concentration: 0.25 μg/mL, n = 3 samples/group). e TEM image of CTCE-mRNA NPs. Scale bar: 200 nm. f Average particle size and zeta potential of the p53 NPs, SCP-p53 NPs, and CTCE-p53 NPs (n = 3 samples/group). Data in b, d, and f are presented as mean values ± SD. For c and e: a representative image from one of five independent fields of view in a single experiment. Source data are provided as a Source Data file.

To identify efficacious ionizable lipid-like materials for mRNA complexation and translation, a series of G0-Cn compounds (Supplementary Fig. 2a) was synthesized through ring opening of epoxides by generation 0 poly(amidoamine) (PAMAM) dendrimers (Supplementary Fig. 2b) and screened using a model luciferase mRNA. The chemical structures of the G0-Cn compounds were confirmed by 1H-NMR (Supplementary Fig. 3). Transfection results with luciferase-mRNA NPs (Fig. 1d and Supplementary Fig. 4) showed that G0-C8 had the most effective mRNA transfection ability and was thus chosen as the ionizable lipid-like material for formulating targeted mRNA NPs for in vivo treatments. To explore the possible mechanisms behind this, we studied the mRNA encapsulation efficiency and cellular uptake of mRNA NPs formulated with the different G0-Cn compounds. As shown in Supplementary Table 1, the choice of G0-Cn had a negligible effect on mRNA encapsulation efficiency. Their effect on cellular uptake, however, appeared to play an important role in mRNA delivery efficacy (Supplementary Fig. 5), with the G0-C8 NPs showing higher cellular uptake than the other G0-Cn NPs.

The hybrid CTCE-conjugated p53 mRNA NPs (referred to hereafter as CTCE-p53 NPs) were ~110 nm in size as measured by dynamic light scattering (DLS), and their spherical and uniform structure was confirmed by transmission electron microscopy (TEM) imaging (Fig. 1e, f). The addition of the targeting ligand (CTCE) or the scrambled peptide (SCP) to the NP surface slightly increased the particle size as well as the zeta potential, owing to the positive charges of both peptides (Fig. 1f). In addition, we characterized all the nanoformulations used in this study, including Luc mRNA NPs, GFP mRNA NPs, and p53 mRNA NPs. As shown in Supplementary Fig. 6, all exhibited similar average size and zeta potential. The organic solvent DMF (dimethylformamide) had no effect on the integrity or stability of EGFP mRNA, either as naked mRNA or encapsulated in NPs (Supplementary Fig. 7a). Moreover, we detected no obvious changes in the size of p53-mRNA NPs over a period of 96 h in the presence of 10% serum, suggesting the in vivo stability of our targeted mRNA NPs (Supplementary Fig. 7b). To further evaluate the stability of the p53-mRNA NPs, cell viability was measured in RIL-175 cells after treatment with p53-mRNA NPs pre-incubated with 10% serum for various periods of up to 96 h at 37 °C. Comparable cell viability across all groups (Supplementary Fig. 8) further supported the stability of these p53-mRNA NPs.

Notably, pH played a crucial role in mRNA complexation by the ionizable G0-C8, and effective complexation was achieved only under acidic conditions. As shown by an agarose gel electrophoresis assay at pH 7.4 (Supplementary Fig. 9), G0-C8 could not fully complex the mRNA even at a G0-C8/mRNA weight ratio of 200. In comparison, when the pH was adjusted to 3.5 in citrate buffer, the mRNA was completely complexed at a G0-C8/mRNA weight ratio as low as 2. This low ratio is favorable for mRNA delivery in vivo because it reduces the amount of ionizable lipid-like material needed and may thus improve the safety of the mRNA NPs.
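The peptide-density screen above reduces to a simple per-formulation summary. As an illustration only, the sketch below shows one way such flow-cytometry data could be tabulated in Python; the MFI values are hypothetical placeholders (the actual data are in Fig. 1b and the source data file), and only the shape of the analysis reflects the paper.

```python
import numpy as np

# Hypothetical flow-cytometry MFI readings (arbitrary units, n = 3 wells each),
# standing in for the CTCE-density screen in Fig. 1b.
mfi = {
    "SCP-5%":  [1.0, 1.1, 0.9],
    "CTCE-2%": [6.2, 5.8, 6.5],
    "CTCE-3%": [9.1, 8.7, 9.4],
    "CTCE-4%": [12.3, 11.8, 12.9],
    "CTCE-5%": [16.0, 15.2, 16.6],
    "CTCE-6%": [15.8, 16.1, 15.4],
    "CTCE-7%": [13.9, 14.4, 13.5],
}

baseline = np.mean(mfi["SCP-5%"])  # non-targeted control
for label, values in mfi.items():
    v = np.asarray(values, dtype=float)
    print(f"{label}: {v.mean():.1f} ± {v.std(ddof=1):.1f} "
          f"({v.mean() / baseline:.1f}-fold over SCP)")

# Pick the CTCE density with the highest mean uptake
best = max((k for k in mfi if k.startswith("CTCE")), key=lambda k: np.mean(mfi[k]))
print("Highest uptake at:", best)  # the study settled on 5% CTCE
```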
A cytotoxicity assay was further performed to evaluate the in vitro cytotoxicity of the G0-C8/EGFP mRNA complexes (Supplementary Fig. 10), which showed ~100% viability in RIL-175 cells at G0-C8/mRNA ratios from 1 to 20. In addition, in vitro cytotoxicity was examined in both RIL-175 cells and normal THLE-3 hepatocytes. The near-100% cell viability at all tested concentrations in both cell lines (Supplementary Fig. 11) indicated the safety of our mRNA NPs.

CXCR4-targeting improves mRNA NP delivery to HCC cells in vitro and in vivo

We then investigated the effect of CTCE targeting on cellular uptake and mRNA transfection in p53-deficient murine HCC cells (RIL-175) using flow cytometry. We first examined the transfection efficacy of the targeted and non-targeted mRNA NPs in vitro using EGFP mRNA as a model, by counting EGFP-positive cells (Fig. 2a). Both SCP-EGFP NPs and CTCE-EGFP NPs yielded markedly higher fractions (>90%) of EGFP-positive cells after transfection compared to controls (free/naked EGFP mRNA). Notably, the CTCE-EGFP NPs induced a ~4.5-fold higher mean fluorescence intensity than the SCP-EGFP NPs (Supplementary Fig. 12). The higher transfection efficiency of CTCE-EGFP NPs was confirmed by fluorescence microscopy (Fig. 2b). To further verify the selectivity of the CTCE-mRNA NPs, we examined the targeting effect of the CTCE peptide by blocking the CXCR4 receptor on the RIL-175 cell surface with free CTCE peptide (Supplementary Fig. 13). Upon pre-treatment with free CTCE peptide, the fluorescence intensity of RIL-175 cells co-incubated with CTCE-Cy5-Luciferase mRNA NPs was significantly lower than without blocking. Moreover, we generated a CXCR4-knockout (CXCR4-KO) RIL-175 cell line by CRISPR/Cas9 editing and performed in vitro cellular uptake studies. As evidenced by Western blotting (WB, Supplementary Fig. 14), CXCR4 expression in RIL-175 cells was effectively knocked out. The in vitro cellular uptake study (Supplementary Fig. 15) showed that the fluorescence intensity of CXCR4-KO RIL-175 cells co-incubated with CTCE-Cy5-Luciferase mRNA NPs was significantly lower than that of sgControl RIL-175 cells (without CXCR4 knockout). These results demonstrate the CXCR4-mediated active targeting effect of the CTCE-NPs on the RIL-175 cell line.

Fig. 2: CXCR4-mediated HCC-targeting of CTCE-mRNA NPs in vitro and in vivo. a Flow cytometry analysis of in vitro transfection efficiency (%GFP-positive cells) of SCP-EGFP NPs vs. CTCE-EGFP NPs in p53-null RIL-175 cells. b Immunofluorescence of RIL-175 cells transfected with SCP-EGFP NPs vs. CTCE-EGFP NPs (magnification, ×50). Cells were treated with SCP-EGFP NPs or CTCE-EGFP NPs for 12 h and further incubated for 24 h with fresh cell culture medium (mRNA concentration: 0.5 μg/mL). Scale bar: 100 µm. c Circulation profile of free Cy5-Luc mRNA, SCP-Cy5-Luc NPs, and CTCE-Cy5-Luc NPs (mRNA dose: 350 μg/kg) after i.v. administration. d, e Quantification of biodistribution of free Cy5-Luciferase mRNA, SCP-Cy5-Luciferase (Luc) NPs, and CTCE-Cy5-Luc NPs in orthotopic (d) and ectopic (e) HCC grafts (n = 3 mice/group) at 24 h post-i.v. injection (mRNA dose: 350 μg/kg). f Western blot analysis of p53 protein expression after treatments (mRNA concentration: 0.5 μg/mL); β-actin was used as the loading control. g Immunofluorescence for p53 in RIL-175 cells after treatment with saline or CTCE-p53 NPs (p53 mRNA concentration: 0.25 μg/mL). Scale bar: 50 µm. h RIL-175 cell growth rate after treatment with control (saline), CTCE-EGFP NPs, empty NPs, SCP-p53 NPs, or CTCE-p53 NPs (mRNA concentration: 0.5 μg/mL) (n = 3 cell samples/group). i RIL-175 cell viability after treatment with control (saline), empty NPs, control NPs (CTCE-EGFP NPs), or CTCE-p53 NPs at different mRNA concentrations (0.0625–0.75 μg/mL) (n = 3 cell samples/group). Statistical significance was calculated using one-way ANOVA with a Tukey post-hoc test. Data in c, d, e, h, and i are presented as mean values ± SD. **P < 0.01; ***P < 0.001; ****P < 0.0001. For b and g: a representative image from one of five independent fields of view in a single experiment. For f: this experiment was repeated five times independently with similar results. Source data are provided as a Source Data file.

Next, intracellular uptake of the mRNA NPs by RIL-175 cells was examined by confocal fluorescence microscopy after incubating Cy5-labeled Luciferase-mRNA NPs (CTCE-Cy5-Luc NPs) with the cells for 0.5, 2, 4, or 6 h. The intensity of red fluorescence from Cy5-Luc mRNA in the cells increased with incubation time (Supplementary Fig. 16), confirming successful intracellular delivery of our mRNA NPs.

To test the efficiency of CXCR4-mediated HCC targeting of CTCE-mRNA NP delivery in vivo, we next conducted pharmacokinetics (PK) and biodistribution (BioD) studies. We first evaluated PK parameters by administering targeted or non-targeted Cy5-Luc-mRNA NPs or free Cy5-Luc mRNA to healthy C57Bl/6 mice via the tail vein. Free mRNA was rapidly cleared, dropping to ~8% of the injected dose after 15 min (Fig. 2c). In contrast, like Cy5-Luc NPs without peptide modification, both SCP-Cy5-Luc NPs and CTCE-Cy5-Luc NPs showed prolonged mRNA circulation, with >30% of the Cy5-Luc mRNA still circulating after 60 min. After 4 h, nearly 20% of both NP formulations remained detectable, whereas most free mRNA was cleared within 1 h. This result also indicates that the presence of the targeting moiety (i.e., CTCE) did not alter the PK profile of the mRNA NPs.

We then evaluated the BioD and tumor accumulation of these NPs in both orthotopically and ectopically (s.c.) grafted RIL-175 HCCs. Tumor-bearing mice were administered free Cy5-Luc mRNA, non-targeted SCP-Cy5-Luc-mRNA NPs, or targeted CTCE-Cy5-mRNA NPs by tail vein. As shown in Fig. 2d, e and Supplementary Fig. 17, both NP formulations exhibited considerable intratumoral accumulation in both HCC models, while the fluorescent signal of free Cy5-mRNA was barely detectable in tumor tissue 24 h post-injection. Notably, intratumoral accumulation of the CTCE-targeted NPs was ~1.5- and 2.7-fold greater than that of the non-targeted NPs in the orthotopic and ectopic models, respectively. Taken together, these data show that CTCE-targeted NPs achieved significantly enhanced cellular uptake, mRNA transfection efficiency, and intratumoral accumulation compared to non-targeted NPs, irrespective of tumor site/stroma, supporting the use of CTCE peptide ligands for selective HCC cell targeting.
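As a rough illustration of how circulation profiles like those in Fig. 2c can be condensed into a half-life estimate, the sketch below fits a mono-exponential decay to percentages approximating the values read from the figure. The numbers are illustrative, not the study's raw data, and a plateau (two-compartment) term would be needed for a faithful fit of the NP curve.

```python
import numpy as np
from scipy.optimize import curve_fit

# Approximate %-injected-dose values (illustrative, loosely following Fig. 2c);
# a real analysis would use the raw plate-reader measurements.
t_min = np.array([5, 15, 30, 60, 120, 240], dtype=float)   # time, minutes
np_pct = np.array([85, 70, 50, 32, 25, 20], dtype=float)   # CTCE-Cy5-Luc NPs
free_pct = np.array([40, 8, 4, 2, 1, 0.5], dtype=float)    # free Cy5-Luc mRNA

def mono_exp(t, a, k):
    # Simple one-compartment model: %dose = a * exp(-k * t)
    return a * np.exp(-k * t)

for name, y in [("CTCE NPs", np_pct), ("free mRNA", free_pct)]:
    (a, k), _ = curve_fit(mono_exp, t_min, y, p0=(100.0, 0.01))
    print(f"{name}: apparent t1/2 ~= {np.log(2) / k:.0f} min")
```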
CXCR4-targeted mRNA NP increases p53 protein expression and reduces HCC cell viability in vitro

To determine whether the targeted p53-mRNA NPs could induce the expression of therapeutic p53 in p53-null RIL-175 cells, we first checked p53 protein expression after treatment with CTCE-p53 NPs versus SCP-p53 NPs. Both WB and immunofluorescence (IF) staining (Fig. 2f, g) confirmed the successful restoration of p53 expression in RIL-175 cells. The WB data further showed that the targeted NPs produced a higher level of p53 expression than the non-targeted NPs. In addition, the IF images showed that the p53 protein was mainly localized in the cytoplasm of RIL-175 cells.

Next, we tested cell growth and viability after treatment with CTCE-p53 NPs versus SCP-p53 NPs. Figure 2h shows that the number of viable cells was dramatically decreased after 10 days of treatment with SCP-p53 NPs or CTCE-p53 NPs compared to control-treated cells, or to cells treated with CTCE-EGFP NPs or empty CTCE-NPs. Of note, the CTCE-p53 NPs elicited greater growth inhibition than the non-targeted SCP-p53 NPs, consistent with their higher p53 expression. Moreover, CTCE-p53 NPs significantly decreased cell viability in a dose-dependent manner compared to the control, free mRNA, and control NPs (Fig. 2i). These results indicate that the CTCE-targeted NP system effectively delivers p53 mRNA to HCC cells, restoring functional p53 activity and reducing HCC cell viability. In addition, we tested whether the CTCE-p53 NPs could induce the suppressive function of p53 in the p53-wild-type murine HCC cell line HCA-1. As shown in Supplementary Fig. 18, modest cytotoxicity was observed at high doses in HCA-1 cells, whereas empty NPs and control NPs (CTCE-EGFP NPs) had no effect on HCA-1 cell viability.

Combining CXCR4-targeted p53 mRNA NPs with PD-1 blockade inhibits tumor growth and reprograms the immune TME in orthotopic p53-null murine HCC

To examine the role of p53 in immunosuppression in HCC, we tested the CTCE-p53 NPs and aPD1 against p53-null HCC. Mice with established orthotopic RIL-175 tumors were treated with CTCE-p53 NPs at an mRNA dose of 350 µg/kg by intravenous (i.v.) injection, aPD1 by intraperitoneal (i.p.) injection, or their combination, every 3 days for 4 cycles (Fig. 3a). Tumor growth was monitored by high-frequency ultrasound imaging (Fig. 3b). The in vivo results revealed that CTCE-p53 NP treatment or aPD1 therapy alone inhibited HCC growth compared to IgG-treated control mice, but their combination was significantly more effective than either treatment alone (individual growth curves in Fig. 3c, mean tumor volumes in Fig. 3d, and mean tumor weights in Supplementary Fig. 19a). We also performed immunohistochemistry (IHC) analysis to confirm the expression of p53 in the orthotopic tumors. As shown in Fig. 3e, p53 was expressed at the highest levels in the CTCE-p53 NP-treated groups, confirming the successful delivery of p53 mRNA to the orthotopic tumors.

Fig. 3: PD-1 blockade combined with CXCR4-targeted p53 mRNA NPs reprograms the immune TME and promotes anti-tumor immunity in HCC. a Timeline of tumor implantation and treatment schedule in the orthotopic HCC model. Mice with orthotopic RIL-175 tumors were treated with CTCE-EGFP mRNA NPs or CTCE-p53 mRNA NPs every 3 days for 4 i.v. injections. Anti-PD-1 (aPD1) was given at 10 mg/kg every 3 days by i.p. injection. b High-frequency ultrasound images of RIL-175 orthotopic tumor-bearing C57BL/6 mice on Days 7, 10, 13, 16, and 19 (n = 7 mice/group). c, d Tumor growth profiles for each indicated treatment group (n = 7 mice/group). e Immunofluorescence staining of p53 expression in RIL-175 tumors (red signals) in different groups. Scale bar: 200 µm. f–n Flow cytometry analysis (n = 7 samples for the CTCE-EGFP-NPs and aPD1 groups; n = 6 samples for the CTCE-p53 NPs and CTCE-p53 NPs+aPD1 groups) of tumor CD8+ cytotoxic T cells (f), IFN-γ+TNF-α+ cells among CD8+ T cells (g), CD4+ T cells (h), CD11b+ cells when gating on NK cells (i), KLRG1+ cells when gating on CD11b+ NK cells (j), IFN-γ+ cells when gating on NK cells (k), IFN-γR+ cells when gating on NK cells (l), M1-like tumor-associated macrophages (TAMs) (m), and M2-like TAMs (n). o–q Increased expression of TNF-α (o), IL-1β (p), and IFN-γ (q) in RIL-175 tumor tissues by protein array measurements after combination treatment (n = 4 tumor samples/group). Statistical significance was calculated via one-way ANOVA with a Tukey post-hoc test. All data are presented as mean ± S.E.M. For e: this experiment was repeated three times independently with similar results. *P < 0.05; **P < 0.01; ***P < 0.001; ****P < 0.0001. Source data are provided as a Source Data file.

We then examined the impact of treatment on immune cell infiltration and activation in the RIL-175 tumors by flow cytometry analyses of digested HCC tissues. Compared to treatment with CTCE-EGFP NPs, CTCE-p53 NPs, or aPD1 alone, the combination of CTCE-p53 NPs with aPD1 significantly increased the number of infiltrating CD8+ T cells (Fig. 3f). Importantly, the fraction of activated (IFN-γ+TNF-α+) CD8+ T cells was significantly increased in the HCC tissue after combination therapy (Fig. 3g). In addition, the fractions of infiltrating CD4+FoxP3– effector T cells (Fig. 3h), mature (KLRG1+CD11b+) NK cells (Fig. 3i, j), and activated (IFN-γ+ and IFN-γR+) NK cells (Fig. 3k, l) all increased after combined treatment with CTCE-p53 NPs and aPD1. Moreover, combination therapy effectively polarized tumor-associated macrophages (TAMs) towards the M1-like phenotype and decreased M2-like TAMs in HCC (Fig. 3m, n). It is worth noting that CTCE-p53 NPs alone increased the fractions of mature NK cells and M1-like TAMs while reducing M2-like TAMs (Fig. 3l–n); in contrast, aPD1 alone had the opposite effect, polarizing TAMs toward the M2 phenotype (Fig. 3m, n).

We also examined changes in key immune cytokines post-treatment by multiplexed array analysis of whole-tumor protein extracts. The combination of CTCE-p53 NPs and aPD1 significantly increased TNF-α and IL-1β levels; it also tended to increase IFN-γ and IL-2 and to decrease IL-6, but affected neither IL-10 nor MCP-1 (CCL2) (Fig. 3o–q and Supplementary Figs. 19b–d). Collectively, these results suggest that the combination of CTCE-p53 NPs and PD-1 blockade effectively and globally reprogrammed the immune TME of HCC by increasing effector immune cells and cytokine levels in the tumor.
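The group comparisons above all use one-way ANOVA followed by a Tukey post-hoc test. A minimal sketch of that analysis, using hypothetical %CD8+ values in place of the real flow-cytometry exports, could look as follows; only the statistical procedure, not the data, reflects the paper.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical %CD8+ T cells per tumor (n = 6-7 mice/group), mimicking the
# comparison in Fig. 3f; real numbers would come from the cytometry exports.
groups = {
    "CTCE-EGFP NPs":       [4.1, 3.8, 5.0, 4.4, 3.9, 4.6, 4.2],
    "aPD1":                [6.0, 5.4, 6.8, 5.9, 6.3, 5.7, 6.1],
    "CTCE-p53 NPs":        [6.5, 7.0, 5.9, 6.8, 6.2, 7.3],
    "CTCE-p53 NPs + aPD1": [10.2, 9.5, 11.0, 10.6, 9.9, 10.8],
}

# Omnibus one-way ANOVA across all four groups
print(f_oneway(*groups.values()))

# Tukey HSD post-hoc test for all pairwise comparisons
values = np.concatenate([np.asarray(v, dtype=float) for v in groups.values()])
labels = np.concatenate([[k] * len(v) for k, v in groups.items()])
print(pairwise_tukeyhsd(values, labels))
```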
We further compared, side by side, the survival benefit of the combination of CTCE-p53 NPs with aPD1 against a regimen similar to the new standard of care in HCC patients (i.e., anti-VEGFR2 antibody plus aPD-L1 antibody) in the orthotopic RIL-175 tumor model (Supplementary Fig. 20). Both treatments were comparably effective in increasing overall survival and delaying disease morbidity in this p53-null murine HCC model. In addition, the in vivo therapeutic efficacy of the combination of CTCE-p53 NPs with aPD1 was also evaluated in an orthotopic p53-wild-type HCC tumor model (HCA-1) in C3H mice. Although the CTCE-p53 NPs showed modest in vitro cytotoxicity in HCA-1 cells (Supplementary Fig. 18), this modest in vitro effect did not translate into an in vivo survival benefit (Supplementary Fig. 21) at the same dosage and dosing frequency used in the RIL-175 model.

Combining CXCR4-targeted p53 mRNA NPs with PD-1 blockade is effective in ectopic p53-null murine HCC

To determine whether the comprehensive reprogramming of the immune TME depended on the localization of the tumor within the liver, we next evaluated in vivo p53 expression, anti-tumor immune response, and anti-tumor efficacy in a subcutaneously grafted HCC model in immunocompetent C57Bl/6 mice. We administered four injections of CTCE-p53 NPs i.v. (350 µg/kg body weight) and aPD1 i.p. (100 μg per dose) every 3 days in mice with established tumors (Supplementary Fig. 22a). Tumor-bearing mice treated with CTCE-EGFP NPs served as controls. We first evaluated the anti-tumor effect of CTCE-p53 NPs and aPD1 by bioluminescence imaging of the luciferase-expressing RIL-175 tumors to estimate viable tumor burden (Fig. 4a). The combination treatment markedly limited the increase in bioluminescence signal compared to CTCE-p53 NPs or aPD1 treatment alone, indicating a potent anti-tumor effect. Moreover, RIL-175 tumor-bearing mice treated with CTCE-EGFP NPs showed aggressive tumor growth, while aPD1 treatment and CTCE-p53 NPs alone delayed the growth of RIL-175 tumors (Fig. 4b and Supplementary Fig. 22b). The combination of CTCE-p53 NPs with aPD1 showed a significantly greater anti-tumor effect than either treatment alone, markedly reducing tumor volume and inducing tumor regression after 4 cycles of treatment (Fig. 4b).

Next, protein extracts from tumor tissues from the different treatment groups were analyzed by WB. As shown in Fig. 4c, CTCE-p53 NP treatment, alone and combined with aPD1, elicited high levels of p53 protein expression in ectopic p53-null RIL-175 tumors, whereas neither aPD1 nor the control NPs (i.e., CTCE-EGFP NPs) had any effect on p53 expression. IHC analysis of tumor sections further confirmed p53 expression (Supplementary Fig. 22c). These results demonstrate that the p53 mRNA NPs effectively restored p53 expression in vivo and significantly enhanced the anti-tumor effects of aPD1 therapy in HCC growing outside the liver.

Fig. 4: Combining CXCR4-targeted p53 mRNA NPs with PD-1 blockade reprograms the immune TME and promotes antitumor immunity in ectopic HCC. a Bioluminescence images of the luciferase-expressing RIL-175 tumors grafted subcutaneously in C57Bl/6 mice after 6, 12, and 18 days of treatment (n = 3 mice/group). b Tumor growth rate in each treatment group (n = 7 mice/group; ***P < 0.001). c Western blotting analysis of p53 protein expression levels in the s.c. RIL-175 tumors after treatment; GAPDH was used as the loading control. d–f Flow cytometry analysis (n = 3 tumor samples from each group) of lymph node CD80+CD86+ dendritic cells gating on CD11c+ cells (d), and tumor-infiltrating CD8+CD3+ T cells (e) and M2-like CD206+F4/80+CD11b+ macrophages (f). g Representative immunofluorescence for CD8 (in red) to confirm intratumoral T cell infiltration after treatment with CTCE-EGFP NPs, anti-PD-1 (aPD1), CTCE-p53 NPs, or the combination. Scale bar: 200 µm. h–k Protein array analysis of differential expression of cytokines in s.c. HCC tissues after treatment (n = 3 samples per group): TNF-α (h), IL-1β (i), IFN-γ (j), and IL-6 (k). Statistical significance was calculated using one-way ANOVA with a Tukey post-hoc test. All data are presented as mean ± S.D. For c and g: this experiment was repeated three times independently with similar results. *P < 0.05; **P < 0.01; ***P < 0.001; ****P < 0.0001. Source data are provided as a Source Data file.

Using the same model, we also harvested tumors and lymph nodes to examine the number and phenotype of immune cells and the changes in secreted cytokines after four cycles of treatment. CTCE-p53 NPs alone or in combination with aPD1 induced a significant increase in CD80+CD86+ lymph node-resident dendritic cells (LN DCs) and intratumoral CD8+ T cells (Fig. 4d, e), and a significant decrease in M2-like TAMs (Fig. 4f). IF analysis of tumor tissues confirmed the increased intratumoral infiltration by CD8+ T cells after combination treatment (Fig. 4g). Multiplexed array analysis revealed, as in the orthotopic HCCs, increased expression of cytokines associated with immune cell activation (e.g., TNF-α, IL-1β, IFN-γ, and IL-2) and decreased expression of immunosuppressive cytokines (e.g., IL-10 and MCP-1) in the ectopic HCCs after combination treatment (Fig. 4h–k and Supplementary Fig. 23). Moreover, we also studied the role of p53 in MHC class I expression by WB and IF. The results in Supplementary Figs. 24 and 25 revealed an association between p53 and MHC class I expression, indicating a potential role for p53 restoration in inducing immune responses. These results demonstrate that targeting HCC cells with CTCE-p53 NPs combined with aPD1 therapy triggers anti-tumor immunity and reprograms the immune TME of HCC both in the liver and at other sites.

Combination therapy prolongs survival and reduces bloody ascites, pleural effusions, and lung metastases

Using the orthotopic RIL-175 tumor model, we further evaluated the therapeutic efficacy of combining aPD1 with CTCE-p53 NPs in mice with established tumors (Fig. 5a). We treated the mice with i.v. injections of NPs and i.p. injections of aPD1 for four cycles and then monitored tumor growth by ultrasound imaging, along with survival. CTCE-p53 NPs alone and aPD1 alone modestly inhibited tumor growth, but their combination elicited a significant delay in tumor growth (Fig. 5b, c). Notably, the group treated with CTCE-p53 NPs plus aPD1 showed a significant and substantial survival benefit (median overall survival of 43.5 days, almost double that of the control group in this model; HR = 0.26; p = 0.0001) (Fig. 5d). In addition, only the combination treatment reduced the incidence of bloody ascites (Fig. 5e) and pleural effusions (Fig. 5f), which are potentially lethal complications of orthotopic HCC. Moreover, when we assessed the lung metastatic burden by enumerating metastatic nodules, we found it significantly reduced in the group that received the combination of CTCE-p53 NPs with aPD1 (Fig. 5g). These findings suggest that p53 restoration using CXCR4-targeted mRNA NPs can markedly improve the efficacy of aPD1 therapy in p53-deficient HCC.
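Median overall survival and hazard ratios like those above are typically derived from Kaplan-Meier and proportional-hazards analyses. The sketch below shows the general shape of such an analysis with the lifelines package, on synthetic survival times loosely scaled to the reported medians; it is illustrative only and does not reproduce the study's data.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Synthetic survival times (days), scaled so the medians land roughly at the
# reported ~43.5 days (combination) vs about half that (control).
combo = rng.exponential(43.5 / np.log(2), 12).round()
ctrl = rng.exponential(22.0 / np.log(2), 12).round()
events = np.ones(12)  # 1 = death observed (no censoring in this toy example)

kmf = KaplanMeierFitter()
for name, t in [("control", ctrl), ("CTCE-p53 NPs + aPD1", combo)]:
    kmf.fit(t, events, label=name)
    print(name, "median OS:", kmf.median_survival_time_)

# Log-rank test between the two arms
print("log-rank p =", logrank_test(ctrl, combo, events, events).p_value)
```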
Fig. 5: Therapeutic efficacy of the combination of CTCE-p53-mRNA NPs with anti-PD-1 (aPD1) in the orthotopic HCC model. a Timeline of tumor implantation and treatment schedule for survival studies in HCC models. b, c Tumor growth profiles for each indicated treatment group (n = 12 mice/group). d Survival data from the RIL-175 orthotopic mouse model (n = 12 mice/group). e, f The combination of CTCE-p53-mRNA NPs with aPD1 reduces ascites (e) and pleural effusion (f). g The combination of CTCE-p53-mRNA NPs with aPD1 reduces lung metastasis (n = 12 mice per group). Statistical significance was calculated via one-way ANOVA with a Tukey post-hoc test. All data are presented as mean ± S.E.M. *P < 0.05; **P < 0.01. Source data are provided as a Source Data file.

Combination of p53 mRNA NPs with aPD1 is safe in vivo

Finally, to evaluate the in vivo safety of CXCR4-targeted p53-mRNA NPs alone and in combination with aPD1, mouse weight was monitored during the above animal studies with the s.c. grafted and orthotopic models, and blood and major organs (heart, kidneys, liver, lungs, and spleen) were harvested at the end of these studies. No significant change in body weight was observed in any of the treatment groups (Supplementary Figs. 26 and 27). We performed hematological analysis based on serum biochemistry and whole-blood panel tests. A series of parameters was tested, including alanine aminotransferase (ALT), aspartate aminotransferase (AST), blood urea nitrogen (BUN), albumin, creatinine, globulin, calcium, cholesterol, phosphorus, glucose, total protein, red blood cells (RBC), white blood cells (WBC), hemoglobin (Hb), mean corpuscular hemoglobin concentration (MCHC), mean corpuscular hemoglobin (MCH), hematocrit (HCT), and lymphocytes (LY). As shown in Fig. 6 and Supplementary Fig. 28, no obvious changes were detected in any hematological parameter across groups, indicating negligible side effects of the CTCE-p53 NPs and their combination with aPD1. We also examined the major organs by H&E staining. Histological analyses revealed no obvious abnormalities and no differences in the main organs among the treatment groups (Supplementary Figs. 29 and 30), further demonstrating the in vivo safety of the combination treatment.

Fig. 6: In vivo safety of CTCE-p53 NPs and the combination with anti-PD-1 antibody. a–k Serum biochemistry analysis (n = 4 samples for the CTCE-EGFP NPs group; n = 5 samples for the other four groups). l–r Whole-blood panel test analysis (n = 5 samples for each group). All data are presented as mean ± S.E.M. Source data are provided as a Source Data file.

Discussion

The last decade has witnessed a tremendous shift in cancer treatment toward immunotherapy with ICBs, significantly extending the survival of cancer patients, including those with HCC. However, benefits are seen in only a fraction of patients. Combinations of ICB therapy with other treatment modalities (e.g., chemotherapy, radiotherapy, and targeted therapy) are being actively explored for their ability to activate anti-tumor immune responses and/or alter the immunosuppressive TME. These strategies are designed to increase the recruitment of activated effector T cells in 'immunologically cold' tumors that lack T cells and do not respond to ICB-based therapy. The tumor suppressor p53 is one of the most frequently mutated genes across a wide range of cancers and is strongly associated with tumorigenesis, tumor progression, treatment resistance, and adverse prognosis. Compelling evidence suggests that p53 dysfunction leads to immunosuppression and immune evasion. Restoration of p53 function may therefore offer the opportunity to reverse immunosuppression in the TME and improve the anti-tumor efficacy of ICB therapy.
Current efforts toward p53 reactivation include small molecules and DNA therapies 31,32,33,34,35,36,37, which have shown notable outcomes but are also associated with formidable drawbacks 38,39, highlighting the need for new therapeutic strategies to restore p53 function. The use of synthetic mRNA has attracted tremendous attention, as exemplified by the recent clinical approval of COVID-19 mRNA nano-vaccines and by clinical trials of a number of mRNA nanotherapeutics for diverse diseases, including cancer 28,40,41,42,43. As a compelling alternative to DNA, mRNA requires only cytosolic delivery for translation, thus largely avoiding host genome integration and eliciting faster and more predictable protein expression.

In this study, we developed a CXCR4-targeted mRNA NP platform for effective p53 restoration and tested it in combination with aPD1 immunotherapy using p53-null murine HCC models. We extensively optimized the p53 mRNA NP platform by screening a series of ionizable lipid-like compounds and varying the density of CXCR4-targeting ligands to improve mRNA translation and HCC targeting in vivo. Our results demonstrate that the combination of CXCR4-targeted p53 mRNA NPs with aPD1 leads to a potent anti-tumor effect in intrahepatic and ectopic models of HCC with p53 loss. The combination of p53 mRNA NPs and aPD1 effectively and globally reprogrammed the immune TME by promoting MHC-I expression and anti-tumor immunity and by decreasing the expression of immunosuppressive cytokines in HCC, irrespective of organ location. These findings suggest that p53 mRNA nanotherapy could enhance the efficacy of ICB therapy, substantially improving the treatment of p53-deficient HCC and potentially other p53-deficient cancers. Further studies will be required to gain an in-depth understanding of the role of p53 in immune regulation, such as how the p53 status of cancer cells (e.g., specific p53 mutations) affects the immune TME and how transfection of p53 mRNA NPs into immune cells (e.g., T cells, NK cells, and macrophages) affects their function in vivo. In addition, new combinatorial strategies of p53 targeting and ICB, with or without VEGF blockade, may be required to increase the durability of responses. If successfully translated, the mRNA nanotherapy-based p53 restoration strategy could be transformative and impactful in cancer immunotherapy.

Methods

Materials

Ester-terminated PLGA (inherent viscosity 0.55–0.75 dL/g) was purchased from Durect Corporation. Lipid-PEG terminated with a methoxyl group (1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-[methoxy(polyethylene glycol)-3000] (ammonium salt), DSPE-MPEG; molecular weight (MW) of PEG, 3000 Da) was purchased from Avanti Polar Lipids. Cationic ethylenediamine-core poly(amidoamine) (PAMAM) dendrimer generation 0 (G0) was purchased from Sigma-Aldrich. The CXCR4-targeting peptide CTCE-9908 (KGVSLSYRCRYSLSVGK, CTCE) and the scrambled peptide (LYSVKRSGCGSRKVSYL, SCP) were custom-synthesized by GL Biochem (Shanghai) Ltd. Lipofectamine 2000 (L2K) was purchased from Invitrogen. Firefly Luciferase mRNA (Luc mRNA, L-7202), Enhanced Green Fluorescent Protein mRNA (EGFP mRNA, L-7201), and Cyanine 5 Firefly Luciferase mRNA (Cy5-Luc mRNA, L-7702) were purchased from TriLink Biotechnologies (San Diego, CA). Chemically modified murine p53 mRNA (full substitution of pseudo-U and 5-methyl-C, capped (Cap 1) using CleanCap® AG, polyadenylated (120 A)) was custom-synthesized by TriLink Biotechnologies (San Diego, CA).
InVivoMAb anti-mouse PD-1 (CD279) was purchased from Bio X Cell. D-luciferin K+ salt bioluminescent substrate (no. 122799) was obtained from PerkinElmer. Primary antibodies used for Western blot experiments and for immunofluorescence and immunohistochemistry staining included anti-p53 (sc-126, Santa Cruz Biotechnology, 1:500 dilution), anti-GAPDH (Cell Signaling Technology, #5174, 1:2000 dilution), anti-beta-actin (Cell Signaling Technology, 1:2000 dilution), and anti-rabbit and anti-mouse horseradish peroxidase (HRP)-conjugated secondary antibodies (Cell Signaling Technology). Secondary antibodies used in this study included Alexa Fluor® 488 goat anti-rabbit IgG (Life Technologies, A-11034) and Alexa Fluor® 647 goat anti-mouse IgG (Life Technologies, A-28181). All other chemicals and solvents were purchased from Sigma-Aldrich and used without further purification.

Synthesis of ionizable lipid-like compounds (G0-Cn)

A series of ionizable lipid-like compounds termed G0-Cn was synthesized through ring opening of epoxides bearing different alkyl chain lengths by generation 0 poly(amidoamine) (PAMAM) dendrimers (M1). Substoichiometric amounts of epoxide (relative to the total number of reactive N–H sites) were used to increase the proportion of products with one less tail than the maximum possible for a given amine monomer. The amine (1 equiv, typically 1 millimole (mmol)) and the epoxide (9 equiv, typically 9 mmol) were added to a 50-mL round-bottom glass flask containing a magnetic stir bar. The flask was sealed, and the reaction was heated to 95 °C with homogeneous stirring for 2 days. The crude products were separated by chromatography on silica with gradient elution from CH2Cl2 to 15:1 CH2Cl2/MeOH. The separated product was characterized by 1H-NMR.

mRNA complexation ability of G0-C8 and its stability in organic solvent

Gel electrophoresis was used to study the mRNA complexation ability of the ionizable compound G0-C8 and to optimize the G0-C8/mRNA ratio in the NPs, using free EGFP mRNA or EGFP mRNA complexed with G0-C8. Free EGFP mRNA was also incubated with DMF to evaluate the stability of mRNA in the organic solvent. The EGFP mRNA was first incubated with G0-C8 at different weight ratios (G0-C8/mRNA: 1, 2, 5, 10, and 20) or with DMF for 20 min at room temperature. Sample volumes were then adjusted with loading dye (Invitrogen) and run in an E-Gel 2% agarose gel (Invitrogen) for 30 min at 50 V. Ambion Millennium markers-Formamide (Thermo Fisher Scientific) was used as a ladder. Finally, the gel was imaged under ultraviolet light and the bands were analyzed.

Synthesis of lipid-PEG-HCC-targeting peptide (DSPE-PEG-CTCE) and lipid-PEG-scrambled peptide (DSPE-PEG-SCP)

We conjugated the CXCR4-targeting peptide CTCE-9908 (KGVSLSYRCRYSLSVGK, CTCE) and the scrambled peptide (LYSVKRSGCGSRKVSYL, SCP) to DSPE-PEG-Mal to construct the HCC-targeted NPs and the non-targeted control NPs, respectively. Synthesis of DSPE-PEG-CTCE and DSPE-PEG-SCP was achieved through the efficient thiol-maleimide Michael addition click reaction. In brief, DSPE-PEG-maleimide and the thiol-bearing CTCE peptide (3:1 molar ratio) or thiol-bearing scrambled peptide were each dissolved in dimethylformamide (DMF). The peptide solution was diluted in 0.1 M sodium phosphate buffer, pH 7.4, and the DSPE-PEG was then added to the mixture. The final reaction mixture was 1:1 DMF/(sodium phosphate buffer) with 5 mM peptide and 15 mM DSPE-PEG-maleimide. The reaction was allowed to proceed for 2 h at room temperature, and the product was then dialyzed against deionized water for purification. Lastly, the product was lyophilized to obtain a white powder as the final product (DSPE-PEG-CTCE or DSPE-PEG-SCP). The chemical structures of DSPE-PEG-CTCE and DSPE-PEG-SCP were confirmed by 1H-NMR.

Optimization of the mRNA NPs: effect of targeting-ligand density

The cellular uptake of Enhanced Green Fluorescent Protein mRNA (EGFP mRNA) NPs engineered with seven different densities of CTCE peptide (CTCE densities: 2%, 3%, 4%, 5%, 6%, 7%, and 10%) and with 5% scrambled peptide (SCP) was studied to optimize the surface chemistry and targeting efficacy of the mRNA NPs, by measuring GFP expression using flow cytometry (BD Biosystems, Heidelberg, Germany); data were analyzed using FlowJo software (FlowJo V10).

Preparation of mRNA NPs and formulation optimization

An optimized and robust self-assembly technique was employed to prepare mRNA-encapsulated polymer-lipid hybrid NPs based on our previous report 27, but we extensively optimized the ratios among the NP components, the pH of the solution used for mRNA complexation, and the order in which reagents were added, all of which affected the encapsulation, morphology, and transfection efficiency of the mRNA NPs. Briefly, G0-C8 and PLGA were dissolved separately in anhydrous DMF to form homogeneous solutions at concentrations of 2.5 mg/mL and 5 mg/mL, respectively. DSPE-MPEG, DSPE-PEG-CTCE, and DSPE-PEG-SCP were dissolved in DNase/RNase-free HyPure water (GE Healthcare Life Sciences, catalog no. SH30538) at a concentration of 1 mg/mL. All of the reagents listed above were sonicated for 5 min in a water-bath sonicator before use. Citrate buffer (pH 3.0–3.5) was first added to 80 μg of G0-C8 (in 32 μl of DMF); then 16 μg of p53 mRNA (in 16 μl of citrate buffer) was added, mixed gently (G0-C8/mRNA weight ratio of 5), and left at room temperature for 15 min to ensure sufficient electrostatic complexation. Afterwards, 250 μg of PLGA (in 50 μl of DMF) was added to the mixture and gently mixed. The final mixture was added dropwise to 10 mL of DNase/RNase-free HyPure water containing 1 mg of the hybrid lipid-PEGs under uniform magnetic stirring (1000 rpm) for 30 min (the component amounts for scaled batches are illustrated in the sketch at the end of this section). An ultrafiltration device (EMD Millipore, MWCO 100 kDa) was used to remove the organic solvent and free compounds from the NP dispersion via centrifugation at 4 °C. After washing three times with DNase/RNase-free HyPure water, the mRNA NPs were collected and finally concentrated in pH 7.4 PBS buffer. The NPs were used fresh or stored at −80 °C for later use.

Physicochemical characterization and stability of mRNA NPs

The hydrodynamic diameter, zeta potential, and morphology of the p53-mRNA NPs were measured to assess their physicochemical properties. Sizes and zeta potentials of both CTCE-p53-mRNA NPs and SCP-p53-mRNA NPs were measured by dynamic light scattering (DLS, Brookhaven Instruments Corporation) at 20 °C. Diameters are reported as the intensity mean peak average. To characterize NP morphology and shape by transmission electron microscopy (TEM), CTCE-p53-mRNA NPs were negatively stained with 2% uranyl acetate and then imaged with a Tecnai G2 Spirit BioTWIN microscope (FEI Company).
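To tie together the component amounts listed in the preparation protocol above, the sketch below scales the single-batch recipe (16 μg mRNA, G0-C8/mRNA weight ratio of 5, 250 μg PLGA, 1 mg lipid-PEG) to an arbitrary mRNA input. Treating the 5% CTCE density as a weight fraction of the lipid-PEG layer is our simplifying assumption for illustration, not a detail given in the paper.

```python
# Minimal batch calculator for the hybrid NP recipe described in the Methods.
# All per-16-ug-mRNA amounts are taken from the text; the 5% split of the
# lipid-PEG mass between DSPE-PEG-CTCE and DSPE-MPEG is an assumption.

def batch_amounts(mrna_ug: float, g0c8_ratio: float = 5.0,
                  ctce_fraction: float = 0.05) -> dict:
    scale = mrna_ug / 16.0                  # recipe is written per 16 ug mRNA
    lipid_peg_ug = 1000.0 * scale           # 1 mg lipid-PEG per base batch
    return {
        "p53 mRNA (ug)": mrna_ug,
        "G0-C8 (ug)": mrna_ug * g0c8_ratio,  # weight ratio 5 at pH 3.0-3.5
        "PLGA (ug)": 250.0 * scale,
        "DSPE-PEG-CTCE (ug)": ctce_fraction * lipid_peg_ug,
        "DSPE-MPEG (ug)": (1.0 - ctce_fraction) * lipid_peg_ug,
        "HyPure water (mL)": 10.0 * scale,
    }

# Example: a 3x batch (48 ug of mRNA)
for component, amount in batch_amounts(48.0).items():
    print(f"{component}: {amount:.1f}")
```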
To verify the in vitro stability of the synthesized polymer-lipid hybrid mRNA NPs in an environment mimicking the physiological milieu, CTCE-p53-mRNA NPs were incubated in 10% serum-containing PBS at 37 °C, in triplicate, for 96 h with constant stirring at 100 rpm. At each time point, an aliquot of the NP solution was withdrawn for particle size measurement by DLS, and size distributions were compared across time intervals to detect any change.

To test the encapsulation efficiency (EE%) of mRNA in the NPs, Cy5-Luc-mRNA NPs were prepared according to the method described above. Dimethyl sulfoxide (DMSO, 100 μl) was added to 5 μl of the NP solution to extract the mRNA encapsulated in the NPs, and the fluorescence intensity of the Cy5-Luc mRNA was measured using a multi-mode microplate reader (TECAN, Infinite M200 Pro). The amount of mRNA loaded in the engineered NPs was calculated to be ~67.5% of the input (a worked example of this calculation appears in the sketch at the end of this section).

Cell culture

The p53-null murine HCC cell line RIL-175 was used throughout. RIL-175 (a p53-null/Hras-mutant line syngeneic to the C57Bl/6 mouse strain background, luciferase-tagged) was kindly provided by Dr. Tim Greten (NIH). All other cells were purchased from the American Type Culture Collection (ATCC). Dulbecco's Modified Eagle's Medium (DMEM; ATCC) was used to culture RIL-175 cells. The cell culture medium was supplemented with 10% fetal bovine serum (Hyclone, SH30071.03) and Pen-Strep (100 U/mL and 100 μg/mL, respectively). Cell culture and all biological experiments were performed at 37 °C under 5% CO2 and normal O2 levels in a cell culture incubator. All cell lines were routinely tested with a mycoplasma contamination kit (R&D Systems) before any in vitro cell experiments or in vivo tumor model preparation.

Cell viability and transfection efficiency of EGFP-mRNA NPs

CTCE-EGFP-mRNA NPs and SCP-EGFP-mRNA NPs were prepared to evaluate the cell viability of the mRNA NPs along with their EGFP-mRNA transfection efficiency. For the cell viability tests, RIL-175 cells were plated in a 96-well plate at a density of 5 × 10³ cells per well. After 24 h of cell adherence, cells were treated with EGFP mRNA at various concentrations (0.0625, 0.125, 0.250, 0.500, and 0.750 μg/mL) for 24 h; the cells were then washed with PBS (pH 7.4), the culture medium was replaced with 0.1 mL of fresh complete medium per well, and the cells were incubated for another 24 h before viability was evaluated by the AlamarBlue assay according to the manufacturer's protocol using a microplate reader (TECAN, Infinite M200 Pro). To test transfection efficiency, RIL-175 cells were seeded at a density of 5 × 10⁴ cells per well in a 6-well plate and allowed to attach and grow to ~80% confluence. Cells were transfected with EGFP-mRNA NPs at an mRNA concentration of 0.5 μg/mL for 24 h, washed and given fresh complete medium, and further incubated for 24 h before transfection efficiency was assessed by measuring GFP expression using flow cytometry (DXP11 Flow Cytometry Analyzer). The percentages of GFP-positive cells were calculated and analyzed using FlowJo software (FlowJo V10).
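Returning to the encapsulation-efficiency readout described earlier in this section: with a linear Cy5 standard curve, EE% is simply the recovered mRNA concentration divided by the input. A minimal sketch follows, with illustrative numbers chosen so the result lands near the reported ~67.5%; the actual standard-curve values are not given in the paper.

```python
import numpy as np

def encapsulation_efficiency(f_nps, f_standards, c_standards, input_ug_ml):
    """EE% from Cy5 fluorescence after DMSO extraction of the NPs.

    f_standards/c_standards define a linear standard curve of free
    Cy5-Luc mRNA; all readings are assumed background-subtracted.
    """
    slope, intercept = np.polyfit(c_standards, f_standards, 1)
    recovered = (np.mean(f_nps) - intercept) / slope  # ug/mL recovered
    return 100.0 * recovered / input_ug_ml

# Illustrative numbers (hypothetical): a 4-point standard curve and three
# replicate readings of the extracted NP sample, with 1.6 ug/mL mRNA input.
c_std = np.array([0.0, 0.5, 1.0, 2.0])        # ug/mL
f_std = np.array([2.0, 52.0, 102.0, 202.0])   # fluorescence (a.u.)
print(f"EE% = {encapsulation_efficiency([110.0, 106.0, 112.0], f_std, c_std, 1.6):.1f}")
# -> ~67%, in line with the reported ~67.5%
```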
Establishment of CXCR4-KO RIL-175 cells

The precise gene-editing CRISPR (clustered regularly interspaced short palindromic repeats)/Cas9 (CRISPR-associated protein 9) system was used to knock out the CXCR4 gene in RIL-175 cells. Briefly, single guide RNAs (sgRNAs) targeting CXCR4 were designed with an online tool, including sgRNA1 (forward: 5′-CACCGTCGAGAGCATCGTGCACAAG-3′; reverse: 5′-AAACCTTGTGCACGATGCTCTCGAC-3′) and sgRNA2 (forward: 5′-CACCGGGACTTACACTCACACTGAT-3′; reverse: 5′-AAACATCAGTGTGAGTGTAAGTCCC-3′), and the oligos were phosphorylated and annealed (a small script for checking such oligo pairs follows this section). In parallel, the lentiviral expression plasmid lentiCRISPRv2 (Addgene, cat. no. 52961) was digested and dephosphorylated with the BsmBI enzyme (Thermo Fisher, cat. no. ER0451), followed by running a DNA gel and gel-purifying the larger band, leaving out the 2-kb filler piece. Next, ligation reactions of lentiCRISPRv2 with the annealed sgRNA oligos were set up and incubated for 10 min at room temperature. After transformation into Stbl3 bacteria and validation by DNA sequencing, lentiCRISPRv2 constructs carrying the sgRNAs targeting CXCR4 were selected. The lentivirus system, comprising the lentiCRISPRv2 construct and the packaging plasmids pVSVg (Addgene, cat. no. 8454) and psPAX2 (Addgene, cat. no. 12260), was then co-transfected into HEK293T cells to produce complete lentivirus, which was used to transduce wild-type RIL-175 cells. Puromycin (2 μg/mL), resistance to which is encoded by lentiCRISPRv2, was used to select cells successfully transduced with the lentivirus. Finally, quantitative PCR and Western blotting were performed to verify CXCR4 knockout at the transcript and protein levels.

Cellular uptake of dye-labeled mRNA-encapsulated NPs

To monitor the cellular uptake of the NPs, Cy5-Luc-mRNA NPs were prepared. RIL-175 cells were first seeded in 35-mm confocal dishes (MatTek) at a density of 5 × 10⁴ cells per dish and incubated at 37 °C in 5% CO2 for 24 h. The cells were then incubated with medium (DMEM) containing Cy5-Luc-mRNA NPs for different lengths of time. The cells were then washed with PBS, counterstained with Hoechst 33342 (Thermo Fisher), and analyzed using an Olympus microscope (FV1200, Olympus).

In vitro cell growth inhibition assay with p53-mRNA NPs

RIL-175 or HCA-1 cells were plated in 96-well plates at a density of 5 × 10³ cells per well. After 24 h of cell adherence, cells were treated with empty NPs (blank NPs), free p53 mRNA, or p53-mRNA NPs at different mRNA concentrations (0.0625, 0.125, 0.250, 0.500, and 0.750 μg/mL). After 24 h of incubation, the cells were washed with PBS (pH 7.4) and further incubated in fresh medium for another 24 h. The AlamarBlue cell viability assay was used to verify the in vitro growth inhibition efficacy of the p53-mRNA NPs.

Immunoblotting

Protein extracts from cells or from dissected tumors in each group were prepared using lysis buffer (1 mM EDTA, 20 mM Tris-HCl pH 7.6, 140 mM NaCl, 1% aprotinin, 1% NP-40, 1 mM phenylmethylsulfonyl fluoride, and 1 mM sodium vanadate) supplemented with protease inhibitor cocktail (Cell Signaling Technology) and boiled at 100 °C for 10 min. Protein concentrations were determined with a bicinchoninic acid protein assay kit (Pierce/Thermo Scientific) according to the manufacturer's instructions, and equal amounts of protein were loaded. After gel electrophoresis and protein transfer, membranes were blocked with 3% bovine serum albumin (BSA) in TBST (150 mM NaCl, 50 mM Tris-HCl pH 7.4, and 0.1% Tween 20) for 1 h at room temperature with gentle shaking. Membranes were rinsed and then incubated overnight at 4 °C with the appropriate primary antibodies. Immunoreactive bands were visualized using an enhanced chemiluminescence (ECL) detection system (Cell Signaling Technology).
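As a small aid to the CRISPR workflow described above, the sketch below verifies that each annealed sgRNA oligo pair is self-consistent once the cloning overhangs are trimmed. The CACCG/AAAC...C overhang convention assumed here is the standard one for BsmBI cloning into lentiCRISPRv2; the sequences are those listed in the Methods. (Requires Python 3.9+ for removeprefix/removesuffix.)

```python
# Sanity check: after trimming the cloning overhangs, the forward protospacer
# must equal the reverse complement of the trimmed reverse oligo.

COMP = str.maketrans("ACGT", "TGCA")

def protospacer(fwd: str, rev: str) -> str:
    core_f = fwd.removeprefix("CACCG")          # strip forward overhang
    core_r = rev.removeprefix("AAAC").removesuffix("C")  # strip reverse overhangs
    assert core_f == core_r.translate(COMP)[::-1], "oligos do not anneal"
    return core_f

pairs = [
    # (forward, reverse) as listed in the Methods
    ("CACCGTCGAGAGCATCGTGCACAAG", "AAACCTTGTGCACGATGCTCTCGAC"),  # sgRNA1
    ("CACCGGGACTTACACTCACACTGAT", "AAACATCAGTGTGAGTGTAAGTCCC"),  # sgRNA2
]
for f, r in pairs:
    print("OK:", protospacer(f, r))
```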
Immunofluorescence staining and microscopy
For immunofluorescence staining, cells or tumor tissues from each treatment group were washed with ice-cold PBS and fixed with 4% paraformaldehyde (Electron Microscopy Sciences) in PBS for 20 min at room temperature, followed by permeabilization in 0.2% Triton X-100-PBS for 10 min. Samples were then blocked with PBS blocking buffer containing 2% normal goat serum, 2% BSA and 0.2% gelatin for 1 h at room temperature. The samples were next incubated with primary antibodies at the appropriate concentration for 1 h at room temperature, washed with PBS and incubated with goat anti-rat Alexa Fluor 647 (Molecular Probes) at 1:1000 dilution in blocking buffer for another 1 h at room temperature. Finally, stained cells were washed with PBS, counterstained with Hoechst 33342 (Molecular Probes-Invitrogen, H1399, 1:10000 dilution in PBS) and mounted on slides with Prolong Gold antifade mounting medium (Life Technologies). The slides were imaged under a confocal laser scanning microscope (Olympus, FV1100).

Animals
For the s.c. tumor model, all animal procedures were performed in ethical compliance with, and with the approval of, the Institutional Animal Care and Use Committee at Harvard Medical School. Immunocompetent male and female C57BL/6 mice (5–6 weeks old or 6–8 weeks old) were obtained from Charles River Laboratories and housed in a pathogen-free animal facility of Brigham and Women's Hospital, Harvard Medical School. For each experiment, mice were randomly allocated to each group. Mice were given an acclimation period of at least 72 h before use so that physiological parameters could return to baseline after shipping and transfer. All animals were housed in single-unit cages on 12-h light/dark cycles at controlled ambient temperature (68–79 °F) and 30–70% humidity. For the orthotopic tumor model, all animal experiments were performed after approval by the Institutional Animal Care and Use Committee of the Massachusetts General Hospital.

Pharmacokinetics study
Healthy C57BL/6 mice (5–6 weeks old, n = 3 per group) were injected intravenously with free Cy5-Luc-mRNA, CTCE-Cy5-Luc-mRNA NPs or SCP-Cy5-Luc-mRNA NPs through the tail vein at an mRNA dose of 350 μg per kg of animal weight. Blood was collected retroorbitally at different time points (5 min, 30 min, 1 h, 2 h, 6 h, 12 h and 24 h), and the fluorescence intensity of Cy5-Luc-mRNA was measured using a microplate reader (TECAN, Infinite M200 Pro). Pharmacokinetics was evaluated by calculating the percentage of Cy5-Luc-mRNA remaining in blood at each time point.

HCC tumor model preparation
Two p53-null RIL-175 HCC tumor models, an ectopic (s.c.) grafted model and an orthotopic model, were developed for the in vivo biodistribution, immune microenvironment modulation, therapeutic efficacy and in vivo toxicity studies. An orthotopic p53-wild-type HCA-1 HCC tumor model was also developed for the in vivo therapeutic efficacy study. For the s.c. grafted model, ~1 × 10⁶ RIL-175 cells in 100 μl of culture medium mixed with 100 μl of Matrigel (BD Biosciences) were implanted subcutaneously in the right flank of C57BL/6 mice (6–8 weeks old). Mice were monitored for tumor growth every other day according to the animal protocol. To develop the RIL-175 orthotopic model, ~1 million RIL-175 cells mixed 1:1 in Matrigel (Mediatech/Corning, Manassas, VA) were grafted into the left extrahepatic lobe of C57BL/6 mice (6–8 weeks old).
Tumor growth was monitored by high-frequency ultrasonography every 3 days according to the animal protocol. For the HCA-1 orthotopic model, approximately 1 million HCA-1 cells mixed 1:1 in Matrigel (Mediatech/Corning, Manassas, VA) were grafted into the left extrahepatic lobe of C3H mice (6–8 weeks old). Tumor growth was monitored by high-frequency ultrasonography every 3 days according to the animal protocol. When the tumor volume reached ~100 mm³ (ectopic model) or ~5 mm in diameter (orthotopic model), mice were randomly assigned to a treatment group.

Biodistribution of mRNA NPs in the RIL-175 HCC tumor model
The biodistribution and tumor accumulation of mRNA NPs were assessed in C57BL/6 mice bearing s.c.-grafted RIL-175 tumors (~100–200 mm³) and in the RIL-175 orthotopic model (~5 mm in diameter), respectively. In brief, RIL-175 tumor-bearing C57BL/6 mice (5–6 weeks old, n = 3 per group) were injected intravenously with free Cy5-Luc-mRNA, CTCE-Cy5-Luc NPs or SCP-Cy5-Luc NPs via the tail vein at an mRNA dose of 350 μg per kg of animal weight. After 24 h, all the mice were sacrificed, and dissected organs and tumors were visualized using a Syngene PXi imaging system (Synoptics Ltd). The data were analyzed with ImageJ software.

Flow cytometry and cytokine analysis
Tumor immune-microenvironment responses were assessed in the s.c. grafted and orthotopic HCC models by cytokine detection and flow cytometry after treatment. RIL-175 tumor-bearing C57BL/6 mice (6–8 weeks old, n = 3 per group) were systemically (i.v. via tail vein) injected with CTCE-targeted p53 mRNA NPs or controls (i.e., PBS or CTCE-EGFP NPs) every 3 days for four injections (at a murine p53 or EGFP mRNA dose of 350 μg/kg animal body weight). For the combinatorial immunotherapy group, one day after each i.v. injection of CTCE-p53 NPs, mice received an intraperitoneal (i.p.) administration of aPD1 (100 μg per dose). The tumor inoculation and treatment schedule are depicted in Fig. 3a and Supplementary Fig. 22a. Forty-eight hours after treatment, mice were euthanized, and tumor tissue was harvested and homogenized for flow cytometry and cytokine analysis. For flow cytometry, tumor tissues were resected and minced, and fragments were incubated in HBSS with 1.5 mg/ml hyaluronidase and 15 µg/ml collagenase for 30 min at 37 °C. Digested tissues were passed through a 70-µm cell strainer and washed twice with phosphate-buffered saline (PBS)/0.5% bovine serum albumin. Prior to immunostaining, cells were washed with the buffer, then fixed and permeabilized with the FoxP3/Transcription Factor Staining Buffer Set (eBioscience/Thermo Fisher Scientific) to stain the intracellular markers. Harvested cells were incubated in Dulbecco's Modified Eagle Medium with a cell activation cocktail (BD Leukocyte Activation Cocktail with BD GolgiPlug™; 1:500, Biolegend) for 6 h at 37 °C. The cells were then stained for cell-surface and intracellular markers in buffer containing brefeldin A. Cells were stained with the fluorescence-labeled antibodies CD11c (Biolegend, cat. no. 117310, clone N418), CD80 (Biolegend, cat. no. 104722, clone 16-10A1), CD86 (Biolegend, cat. no. 105005, clone GL-1), CD4 (Biolegend, cat. no. 100412, clone GK1.5), CD3 (Biolegend, cat. no. 100204, clone 17A2), CD8 (Biolegend, cat. no. 140408, clone 53-5.8), CD11b (Biolegend, cat. no. 101208, clone M1/70), F4/80 (Biolegend, cat. no. 123116, clone BM8), CD206 (Biolegend, cat. no. 141716, clone C068C2), Gr-1 (Biolegend, cat. no. 108412, clone RB6-8C5), CD45 (Biolegend, cat. no. 103108, clone 30-F11), TCR (Biolegend, cat. no. 109243, clone H57-597), CD39 (Biolegend, cat. no. 143805, clone Duha59), Ki67 (Biolegend, cat. no. 652423, clone 16A8), CD11b (Biolegend, cat. no. 101243, clone M1/70), CD206 (Biolegend, cat. no. 141717, clone C068C2), Forkhead box protein P3 (FoxP3; Biolegend, cat. no. 126419, clone MF-14), IFN-γ receptor β chain (Biolegend, cat. no. 113605, clone MOB-47), CD119 (BD Bioscience, cat. no. 740897, clone GR20) and FITC (Biolegend, cat. no. 503805, clone JES6-5H4), following the manufacturer's instructions. All antibodies were diluted 1:200, except for the FoxP3 and CD119 stains, which were diluted 1:100. The stained cells were measured on a flow cytometer (Accuri C6 Plus, BD Biosciences) and analyzed with FlowJo software (FlowJo V10). The numbers shown in the flow cytometry plots are percentages. For cytokine studies, tissue samples were assayed in duplicate using the MSD Proinflammatory Panel I, a highly sensitive multiplex enzyme-linked immunosorbent assay (ELISA) that quantitatively measures 10 cytokines (IFN-γ, interleukin (IL)-1β, IL-2, IL-4, IL-5, IL-6, IL-10, IL-12p70, TNF-α and KC/GRO), together with IL-9, IL-15, IP-10, MCP-1, MIP-1α, MIP-2, IL-17A/F, IL-27p28/IL-30 and IL-33, using electrochemiluminescence-based detection (MesoScale Discovery, Gaithersburg, MD).

In vivo therapeutic efficacy
The therapeutic effects of p53-mRNA NPs and their combined antitumor effect with anti-PD1 were evaluated in the p53-null s.c. RIL-175 HCC tumor model, the p53-null RIL-175 orthotopic tumor model and the p53-wild-type HCA-1 orthotopic tumor model. For the s.c. model, RIL-175 tumor-bearing C57BL/6 mice (6–8 weeks old, n = 5 per group) were monitored for tumor growth every other day after tumor implantation; tumor size was measured using a digital caliper and tumor volume was calculated as 0.5 × length × width². When the tumor volume reached ~100 mm³, mice were randomly divided into five groups (n = 5), which received treatment with PBS, CTCE-EGFP NPs, CTCE-p53 NPs, aPD1 or the combination of CTCE-p53 NPs and aPD1 according to the schedule in Supplementary Fig. 22a at an mRNA dose of 350 μg/kg animal body weight; aPD1 was administered i.p. at 100 μg per dose one day after each p53-mRNA NP treatment. Tumor growth was measured and calculated every 3 days. The body weights of all mice were recorded every 3 days during this period. Animals were euthanized upon showing signs of poor health or when their accumulated tumor burden exceeded 1.0 cm³. For the orthotopic HCC tumor model, tumor growth was monitored by high-frequency ultrasonography every 3 days. When the tumor size reached ~5 mm in diameter, mice were randomly assigned to a treatment group (n = 12). Treatments were administered according to the schedule in Fig. 3a. For the side-by-side comparison of in vivo survival between the combination of CTCE-p53 NPs plus aPD1 and the new standard of care for HCC patients (i.e., anti-VEGFR2 antibody + aPD-L1 antibody) in the orthotopic RIL-175 tumor model, treatments were administered i.p. every 3 days for 4 doses at 10 mg/kg of aPD-L1 antibody (Bioxcell, #BE0101, clone 10F.9G2) and 10 mg/kg of anti-VEGFR-2 antibody (Bioxcell, #BE0060, clone DC101) (Supplementary Fig. 20a). For survival studies, the endpoint was moribund status, defined as signs of prolonged distress, >15% weight loss relative to the starting weight, body condition score >2, or tumor size >15 mm in diameter.
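For concreteness, the caliper-based volume formula used above (0.5 × length × width²) and the 1.0 cm³ euthanasia threshold translate into a few lines of Python; the measurement values below are hypothetical and the helper function is illustrative rather than taken from the study.

def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    # Ellipsoid approximation: V = 0.5 * L * W^2, with length >= width
    return 0.5 * length_mm * width_mm ** 2

length_mm, width_mm = 14.0, 11.0                    # hypothetical caliper readings
volume = tumor_volume_mm3(length_mm, width_mm)
print(f"{volume:.0f} mm^3")                         # 847 mm^3
print("endpoint reached:", volume > 1000.0)         # euthanize if volume > 1.0 cm^3 (= 1000 mm^3)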
Bioluminescence
To further explore the efficacy of our therapeutic strategy, tumors were also assessed using an in vivo bioluminescence imaging system (Bruker Xtreme scanner). Mice were monitored for tumor growth by bioluminescent in vivo imaging every 6 days (days 0, 6 and 12); specifically, 8 min after intraperitoneal injection of 150 mg/kg D-luciferin substrate (PerkinElmer, catalog #122799), mice from each treatment group (n = 3) were imaged.

Immunohistochemistry staining
The expression of p53 protein and the presence of CD8⁺ cells in tumor tissue sections from the different in vivo treatment groups were assessed by immunohistochemistry. Tumor sections were fixed in 4% buffered formaldehyde solution and embedded in paraffin. Paraffin-embedded sections were deparaffinized, rehydrated and washed in distilled water. For antigen retrieval, tumor tissue sections were incubated in 10 mM citrate buffer (pH 6) for 30 min, washed in PBS and immersed in 0.3% hydrogen peroxide (H₂O₂) for 20 min, then incubated in blocking buffer (5% normal goat serum and 1% BSA) for 60 min. Tissue sections were then incubated with the appropriate primary antibodies (diluted in PBS supplemented with 0.3% Triton X-100) at 4 °C overnight in a humid chamber. After being rinsed with PBS, the samples were incubated with biotinylated secondary antibody at room temperature for 30 min, rinsed again with PBS and incubated with the avidin-biotin-horseradish peroxidase complex (ABC kit, Vector Laboratories, Inc). After another wash, stains were developed with the diaminobenzidine peroxidase substrate kit (Impact DAB, Vector Laboratories, Inc) for 3 min. Sections were counterstained with hematoxylin (Sigma), dehydrated, mounted and evaluated using a Leica microscope (Leica Microsystems).

In vivo toxicity evaluation
The in vivo toxicity of p53-mRNA NPs was comprehensively studied in both the p53-null s.c. graft HCC tumor model and the p53-null orthotopic HCC tumor model. In brief, the major organs were harvested at the endpoint, sectioned and H&E stained to evaluate histological differences. In addition, blood was drawn and serum was isolated at the end of the in vivo efficacy experiment. Various parameters, including ALT, AST, BUN, RBC, WBC, Hb, MCHC, MCH, HCT and LY, were tested to evaluate toxicity.

Statistical analysis
A two-tailed Student's t-test or a one-way analysis of variance (ANOVA) was performed when comparing two groups or more than two groups, respectively. Statistical analysis was carried out using Prism 8.0 (GraphPad) and Microsoft Excel. Data are expressed as mean ± standard deviation (s.d.) or mean ± standard error of the mean (s.e.m.), as described in the main text. Differences were considered significant if P < 0.05 (*P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001, unless otherwise indicated). All studies were performed at least in triplicate unless otherwise stated.

Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability
The authors declare that all data supporting the findings of this study are available within the Article, Supplementary Information or Source Data file. Source data are provided with this paper. | None | [] | [] | [] | SciNews | Medicine | Combining p53 mRNA nanotherapy with immune checkpoint blockade reprograms the immune microenvironment for effective cancer therapy, Nature Communications (2022).
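As a companion to the Statistical analysis section above: the two tests named there map directly onto standard SciPy calls. The sketch below is illustrative only; the original analysis used Prism 8.0 and Excel, and the group values here are hypothetical.

import numpy as np
from scipy import stats

# Hypothetical endpoint tumor volumes (mm^3) for three treatment groups
pbs     = np.array([1020.0, 1180.0, 950.0, 1100.0, 1240.0])
egfp_np = np.array([980.0, 1050.0, 1150.0, 900.0, 1010.0])
combo   = np.array([310.0, 420.0, 280.0, 390.0, 350.0])

# Two groups: two-tailed unpaired Student's t-test (two-sided is SciPy's default)
t_stat, p_ttest = stats.ttest_ind(pbs, combo)
print(f"t-test P = {p_ttest:.4g}")

# More than two groups: one-way ANOVA
f_stat, p_anova = stats.f_oneway(pbs, egfp_np, combo)
print(f"ANOVA P = {p_anova:.4g}")

# Significance convention from the text: P < 0.05 (*), < 0.01 (**), < 0.001 (***), < 0.0001 (****)
print("significant at 0.05:", p_ttest < 0.05)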
DOI: 10.1038/s41467-022-28279-8 Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-022-28279-8 | https://medicalxpress.com/news/2022-02-function-gene-suppress-liver-cancer.html | Researchers from Massachusetts General Hospital and Brigham and Women's Hospital have developed a new approach to treating liver cancer by reprogramming the tumor microenvironment using mRNA nanoparticles. The technology, similar to that used in COVID-19 vaccines, restores the function of the p53 master regulator gene, a tumor suppressor mutated in liver and other cancers. When combined with immune checkpoint blockade, the p53 mRNA nanoparticle approach induced suppression of tumor growth and significantly increased antitumor immune responses in laboratory models of hepatocellular carcinoma. The study, published in Nature Communications, suggests that this approach could be a transformative treatment for liver cancer and potentially other cancers, and the researchers are planning to move forward with a clinical trial to test the therapy in patients.
A team of researchers from Massachusetts General Hospital (MGH) and Brigham and Women's Hospital (BWH) has reprogrammed the tumor microenvironment of liver cancer by using mRNA nanoparticles. This technology, similar to the one used in COVID-19 vaccines, restored the function of the p53 master regulator gene, a tumor suppressor mutated in not just liver but also other types of cancer. When used in combination with immune checkpoint blockade (ICB), the p53 mRNA nanoparticle approach not only induced suppression of tumor growth but also significantly increased antitumor immune responses in hepatocellular carcinoma (HCC) laboratory models. The results of the study were published in Nature Communications. "The reprogramming of the cellular and molecular components of the tumor microenvironment could be a transformative approach for treating HCC and other cancers," says co-senior author Jinjun Shi, Ph.D., with the Center for Nanomedicine at BWH, who developed the platform with MGH liver cancer biologist and co-senior author Dan G. Duda, DMD, Ph.D. "By using this new approach, we're targeting specific pathways in tumor cells with mRNA nanoparticles. These tiny particles provide the cells with the instructions to build proteins, which, in the case of HCC, delayed tumor growth and rendered the tumor more responsive to treatment with immunotherapy." HCC is the most prevalent form of liver cancer, characterized by a high mortality rate and dismal prognosis for patients. Immune checkpoint blockers, a revolutionary new class of drugs that enable the body's immune system to recognize and attack cancer cells, have shown efficacy in treating HCC, but most patients do not benefit. To overcome this resistance, multiple strategies are being developed to improve ICBs by combining them with other existing therapies, such as anti-VEGF drugs and radiotherapy. However, even these approaches are expected to benefit only a small number of patients, creating an urgent need for new combination therapies. Encouraged by the success of mRNA in COVID-19 vaccines, Shi decided to apply the technology (with certain modifications) to targeting cancer cells. He teamed up with Duda, whose MGH lab had already created sophisticated animal models to analyze the microenvironment of liver tumors in response to immunotherapy. They developed and optimized an mRNA nanoparticle strategy to restore loss of function of p53, a tumor suppressor gene whose function is lost in more than one-third of HCC cases. In doing so, they uncovered evidence that p53 regulates the tumor microenvironment by modulating the interaction of cancer cells with immune cells as part of ICB therapy. "In our previous work we had developed nanoparticles to target CXCR4—a chemokine receptor expressed by liver cancer cells—and selectively co-deliver drugs such as kinase inhibitors," explains Duda. "We've now adapted this platform to use CXCR4 as a kind of ZIP code to selectively target the tumor with nanoparticles encapsulating therapeutic mRNAs. When we combined this nanomedicine with anti-programmed death receptor 1 (PD-1) antibodies, a standard immunotherapy for HCC patients, it induced global reprogramming of the tumor microenvironment and tumor response by restoring p53 expression." The next step for the team is to transfer their research from animal models to patients in a clinical trial. "Scientists have struggled for decades to find an effective way to target the tumor suppressor pathways," emphasizes Shi. 
"Our proof-of-concept study is an exciting development that clearly shows that p53 mRNA nanoparticles in combination with ICB not only works, but also could make a big difference by reversing immunosuppression in HCC and potentially other cancers." Shi is an associate professor of Anesthesia at Harvard Medical School (HMS). Duda is associate professor of Radiation Oncology at HMS and director of translational research in GI radiation oncology at MGH. Yuling Xiao, Ph.D., and Jiang Chen, MD, Ph.D., are the lead authors of the study and postdoctoral fellows at HMS. |
10.1038/s41590-022-01145-x | A new pathway to shrink cancerous tumors through body's immune cells | Cancer researchers at Case Western Reserve University School of Medicine say they have successfully suppressed the growth of some solid tumors in research models by manipulating immune cells known as macrophages. The researchers say this discovery is significant because many solid tumor cancers, such as lung cancer, are difficult to treat. According to the National Cancer Institute, breast, lung, prostate and colorectal cancers—all of which are solid tumor cancers—account for almost half of all new cancer cases in the United States. In this new research, the scientists discovered that altering the macrophage metabolism—and, in doing so, influencing their relationship with T cells—suppressed the tumor's growth. The result was a significant reduction in overall tumor size in some mouse models. "The race to find a cure for cancer never stops," said Stanley Huang, an assistant professor of immunology in the Department of Pathology at the School of Medicine, who led the research. "Our research creates a pathway to a [potential] new form of cancer treatment for those with solid tumor cancers." The study appeared recently in the journal Nature Immunology. T cells and macrophages Generally, the body's immune response to disease involves mobilizing white blood cells that attack invaders like germs and bacteria. Macrophages are specialized white blood cells that consume invading cells to destroy pathogens. They are considered the "frontline soldiers" of the body's immune system and can activate T cells, which are another type of white blood cell. Yet, despite their typically protective role, macrophages can be co-opted by tumor cells to encourage tumor growth. Targeting macrophages and PERK protein As tumors grow and macrophages interact with the tumor cells, they create a response protein, which the study linked to tumor growth. Huang said the team believed it was possible to target macrophages and that particular protein—known to scientists by its shorthand, PERK ("protein kinase R" (PKR)-like endoplasmic reticulum kinase)—to block tumor growth. "Knocking out PERK suppresses downstream metabolic signaling in tumor macrophages, resulting in more T cells to fight the cancer cells," said Huang. Findings and future steps The study's findings suggest that the PERK protein is involved in several key pathways of metabolism in macrophages—and when the gene is removed, macrophages can no longer promote tumor growth; meaning tumors become smaller. Follow-up experiments further revealed that combination treatment of a PERK inhibitor drug with an inhibitor called "anti-PD-1" could significantly reduce tumor growth. Next, the researchers hope to identify a clinical drug that will act as an inhibitor for the PERK protein. "There are several strategies to enhance anti-tumor immunity like targeting or editing cell metabolism," Huang said. "We can target genes and their pathways to enhance immune function and work toward future therapeutic treatment options." | Researchers at Case Western Reserve University School of Medicine have made a breakthrough in cancer treatment by discovering that manipulating immune cells called macrophages can suppress the growth of solid tumors in research models. By altering the metabolism of macrophages and influencing their relationship with T cells, the team was able to significantly reduce the size of tumors in some mouse models. 
The study found that a protein called PERK, which is involved in metabolic signaling in macrophages, plays a key role in promoting tumor growth. By targeting PERK, the researchers were able to block tumor growth and even combine it with other treatments to achieve significant reductions in tumor size. The team hopes to identify a clinical drug that can inhibit PERK and is working towards developing new therapeutic treatment options for solid tumor cancers, which account for almost half of all new cancer cases in the US. | None | Abstract Chronic inflammation triggers compensatory immunosuppression to stop inflammation and minimize tissue damage. Studies have demonstrated that endoplasmic reticulum (ER) stress augments the suppressive phenotypes of immune cells; however, the molecular mechanisms underpinning this process and how it links to the metabolic reprogramming of immunosuppressive macrophages remain elusive. In the present study, we report that the helper T cell 2 cytokine interleukin-4 and the tumor microenvironment increase the activity of a protein kinase RNA-like ER kinase (PERK)-signaling cascade in macrophages and promote immunosuppressive M2 activation and proliferation. Loss of PERK signaling impeded mitochondrial respiration and lipid oxidation critical for M2 macrophages. PERK activation mediated the upregulation of phosphoserine aminotransferase 1 (PSAT1) and serine biosynthesis via the downstream transcription factor ATF-4. Increased serine biosynthesis resulted in enhanced mitochondrial function and α-ketoglutarate production required for JMJD3-dependent epigenetic modification. Inhibition of PERK suppressed macrophage immunosuppressive activity and could enhance the efficacy of immune checkpoint programmed cell death protein 1 inhibition in melanoma. Our findings delineate a previously undescribed connection between PERK signaling and PSAT1-mediated serine metabolism critical for promoting immunosuppressive function in M2 macrophages. Main Macrophages, a critical component of the innate immune system, are a group of heterogeneous cells present in all tissues. Due to this wide distribution, macrophages are uniquely poised to exert essential processes for human health—from pathogen clearance, tissue repair and maintenance of homeostasis 1 , 2 . The ability of macrophages to serve these functions reflects their ability to execute disparate cellular programs in response to distinct extracellular cues. As a result, immunosuppressive (M2) and proinflammatory (M1) macrophages represent two distinct polarization phenotypes in response to either tumor and helminthic insults or bacterial and viral infection 3 . Moreover, the revitalization of immunometabolism and epigenetics research has uncovered new insights into these polarization phenotypes, revealing major and largely nonoverlapping alterations in gene expression that are closely associated with distinctive metabolic pathways 4 , 5 . These distinct phenotypes are dependent on cues from the surrounding microenvironment, and inflammatory milieus are known to impose stress signals that affect the energetic demands and cellular fitness of infiltrating immune cells 6 , 7 . However, to induce phenotypic changes, these signals must be incorporated and translated intracellularly. The major organelle responsible for coordinating extrinsic challenges and intrinsic cellular demands is the ER where the progression of inflammatory diseases can provoke the unfolded protein response (UPR). 
The UPR is commonly associated with the maintenance of proteostasis; however, recent findings show that activation of the UPR is linked to the development and function of immune cells 8,9,10 , including dendritic cells 11,12 , myeloid-derived suppressor cells (MDSCs) 13 and T cells 14,15 . The UPR signaling cascade is primarily initiated by the type I transmembrane kinase inositol-requiring enzyme-1α (IRE1α), the type II transmembrane protein activating transcription factor 6 (ATF6) and PERK (encoded by Eif2ak3) 16 . Recent studies have suggested that IRE1α-mediated, X-box-binding protein (XBP1) signaling plays a crucial role in macrophages during inflammatory diseases 17,18 . Yet these studies have reached inconclusive and/or contradictory conclusions. This raises an important question about whether other arms of the UPR contribute to the metabolic adaptation necessary to support the immunosuppressive characteristics of macrophages. Activated PERK phosphorylates the downstream mediator eukaryotic translation initiation factor 2α (eIF2α) 16 , leading to the induction of stress-responsive ATF-4 activation 19 . PERK signaling induces mitochondrial function 20 , whereas ATF-4 activation has been suggested to upregulate a set of targets involved in amino acid anabolism 21 . In the present study, we show that the PERK arm of the UPR is uniquely upregulated in macrophages responding to the helper T cell 2 (TH2) cytokine interleukin-4 (IL-4) and also to the tumor microenvironment (TME). This PERK signaling modality promotes mitochondrial respiration to fulfill cellular energy requirements while also signaling through ATF-4 to regulate PSAT1 activity and thereby the serine biosynthesis pathway. The process of PSAT1-mediated serine synthesis, in addition to supporting mitochondrial fitness, balances the production of α-ketoglutarate (α-KG) necessary for JMJD3-dependent histone demethylation and reinforces immunosuppressive M2 activation and cell expansion. These results highlight a previously uncharacterized role for PERK in cellular metabolism and epigenetic modification in M2 macrophages, and our findings may offer a new strategy for counteracting the immunosuppressive effects of M2 macrophages in human diseases.

Results
PERK supports macrophage immunosuppression
To investigate the role of the ER stress response in immunosuppressive M2 macrophages, we first analyzed publicly available microarray and single-cell RNA-sequencing (RNA-seq) data and performed gene set enrichment analysis (GSEA) of IL-4/anti-IL-4 antibody complex (IL-4c)-treated mouse peritoneal macrophages (accession no. GSE54679) 22 and tumor-associated macrophages (TAMs) from patients with lung carcinoma (accession no. GSE97168) 23 . Our data indicated that, under IL-4 stimulation (Extended Data Fig. 1a) and within the TME (Extended Data Fig. 1b), macrophages upregulated genes associated with an ER stress response. By analyzing our RNA-seq dataset (accession no. GSE53053) 24 , we found that the PERK arm of the ER stress response was markedly induced in bone marrow-derived macrophages (BMDMs) after stimulation with IL-4 compared with naive (M0) and proinflammatory (M1) macrophages (Fig. 1a).
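A preranked GSEA like the GSE54679/GSE97168 analyses just described can be reproduced in a few lines with gseapy. This is a hedged sketch, not the authors' pipeline: the input file name, the ranking metric and the gene-set library are assumptions.

import pandas as pd
import gseapy as gp

# Two-column ranked list: gene symbol and a ranking statistic (e.g., signed log2 fold change)
rnk = pd.read_csv("il4_vs_naive_ranked_genes.rnk", sep="\t", header=None)

res = gp.prerank(
    rnk=rnk,
    gene_sets="KEGG_2021_Human",   # or a local .gmt file with ER stress/UPR gene sets
    permutation_num=1000,
    seed=6,
    outdir=None,                   # keep results in memory instead of writing files
)
print(res.res2d[["Term", "NES", "FDR q-val"]].head())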
Moreover, we observed a positive correlation between the CD68 messenger RNA of tumor macrophages and individual gene transcripts (HSPA5, EIF2A, NFE2L2 and ATF4) of the PERK-signaling axis in different human cancer patient samples from The Cancer Genome Atlas (TCGA) program, including colon adenocarcinoma, lung adenocarcinoma and pancreatic ductal adenocarcinoma (Extended Data Fig. 1c), suggesting that the activation of PERK may be required to support an immunosuppressive M2 phenotype. To confirm this, we stimulated BMDMs with the TH2 cytokine IL-4 or assessed TAMs from animals bearing B16-F10 melanoma. We found that both IL-4-stimulated macrophages and TAMs exhibited a higher percentage of activated (phosphorylated) PERK protein compared with naive BMDMs and with splenic macrophages from melanoma tumor-bearing mice, respectively (Fig. 1b,c and Extended Data Fig. 1d). Of note, a conventional ER stress inducer, thapsigargin, could induce the phosphorylation of PERK (Extended Data Fig. 1d,e) but was incapable of driving polarization toward an immunosuppressive phenotype in macrophages (Extended Data Fig. 1f), suggesting that PERK activation itself is not sufficient to induce M2 polarization; rather, an initiating factor such as IL-4 or the TME is also necessary.

Fig. 1: PERK stress signaling promotes an immunosuppressive phenotype in macrophages. a, Expression of genes encoding molecules involved in the PERK arm of the ER stress response in M0 (naive), M1 (LPS + IFN-γ) and M2 (IL-4) BMDMs, assessed by RNA-seq analysis. b, Geometric mean of fluorescence intensity (gMFI) of p-PERK⁺ BMDMs cultured for 24 h with IL-4, measured by flow cytometry (n = 3, mean ± s.e.m.). Data are collected from three independent experiments. c, The gMFI of p-PERK⁺ macrophages in the tumor and spleen from B16-F10 tumor-bearing mice (n = 4 mice per group). Data represent two independent experiments. d, Expression of CD206, CD301, PD-L2 and Relmα in M0- and IL-4-treated BMDMs from Eif2ak3 fl/fl and Eif2ak3 fl/fl × LysM Cre mice (n = 3, mean ± s.e.m.). Data are collected from three independent experiments. e, Representative histogram (left) and quantitative plot (right) of Relmα⁺ peritoneal macrophages in mice after treatment with IL-4c (n = 4 mice per group, mean ± s.e.m.). Each symbol represents one individual. Data represent two independent experiments. f, Absolute number of peritoneal macrophages (n = 4 for Eif2ak3 fl/fl and n = 3 for Eif2ak3 fl/fl × LysM Cre mice; mean ± s.e.m.). Each symbol represents one individual. Data represent two independent experiments. g, Representative histogram (left) and quantitative plot (right) of Ki67⁺ peritoneal macrophages in mice after treatment with IL-4c (n = 4 for Eif2ak3 fl/fl and n = 3 for Eif2ak3 fl/fl × LysM Cre mice; mean ± s.e.m.). Each symbol represents one individual. Data represent two independent experiments. h, Representative expression of CD206 and CD301 by PERK wild-type or knockout BMDMs cocultured with either B16-F10 melanoma cells (top) or LLCs (bottom) for 72 h (n = 4, mean ± s.e.m.). Data represent two independent experiments. i, Proliferation of CTV-labeled CD8 OT-I T cells activated with anti-CD3 and anti-CD28 and cocultured with PERK wild-type or PERK-null BMDMs stimulated with IL-4 (M2) or LPS + IFN-γ (M1) at a ratio of 1:10 for 72 h (n = 2, mean ± s.e.m.). Data represent two independent experiments.
j,k, Tumor growth (Eif2ak3 fl/fl, n = 15; Eif2ak3 fl/fl × LysM Cre, n = 15; mean ± s.e.m.) (j) and tumor weight (k) of B16-F10 melanoma. Data were taken from tumors harvested on either day 10 (D10) or day 16 (D16) post-tumor transplantation (n = 5 mice per group for D10; n = 15 mice per group for D16; mean ± s.e.m.). Each symbol represents one individual. Data represent at least two independent experiments. l,m, Absolute number of TAMs (l) and frequency of CD206⁺ TAMs (m) in Eif2ak3 fl/fl and Eif2ak3 fl/fl × LysM Cre tumor-bearing mice from either D10 or D16 tumors (n = 4 per group for D10 or n = 15 per group for D16; mean ± s.e.m.). Each symbol represents one individual. Data represent at least two independent experiments. n–p, Absolute number of TILs (n) and frequency of IFN-γ⁺ CD4 (o) and IFN-γ⁺ CD8 (p) T cells in Eif2ak3 fl/fl and Eif2ak3 fl/fl × LysM Cre tumor-bearing mice from either D10 or D16 tumors (n = 4 per group, mean ± s.e.m.). Each symbol represents one individual. Data represent two independent experiments. q, Survival analysis between Eif2ak3 fl/fl and Eif2ak3 fl/fl × LysM Cre mice bearing B16-F10 melanoma (n = 8 per group, mean ± s.e.m.); data are collected from two independent experiments. All data were analyzed using a two-tailed, unpaired Student's t-test (b,c,g–p) or a Mantel–Cox test for survival (q). Source data Full size image

Treatment with the selective PERK inhibitor GSK2656157 significantly inhibited M2 polarization, as measured by the expression of the canonical M2 markers CD206 and CD301 (Extended Data Fig. 1g). To further study the intrinsic effect of PERK in macrophages, we generated myeloid-specific conditional knockout mice deficient in Eif2ak3 by crossing Eif2ak3 fl/fl with LysM Cre mice (Eif2ak3 fl/fl × LysM Cre; designated PERK cKO). We observed that PERK deficiency did not affect cell viability in naive macrophages (Extended Data Fig. 1h), yet PERK-null macrophages stimulated with IL-4 were significantly hindered in M2 polarization both in vitro (Fig. 1d) and in vivo (Fig. 1e). Importantly, we found that PERK supports the proliferative capacity of peritoneal macrophages in this TH2 setting, because increases in cell number and Ki67 expression were greatly inhibited in PERK-deficient peritoneal macrophages (Fig. 1f,g). In contrast, macrophages classically activated with lipopolysaccharide (LPS) + interferon-γ (IFN-γ) (M1 macrophages) exhibited low expression of PERK (Fig. 1a and Extended Data Fig. 1i). Similar to a reported study 25 , PERK deficiency (PERK cKO) had no impact on the proinflammatory expression of inducible nitric oxide synthase (iNOS) and tumor necrosis factor (TNF) (Extended Data Fig. 1j,k). Furthermore, loss of PERK did not 'rewire' M1 macrophages to exhibit an anti-inflammatory phenotype (Extended Data Fig. 1i,l), and deletion of PERK did not detrimentally affect the expression of XBP1s (Extended Data Fig. 1m), which has been linked to macrophage immunity in obesity 17 and the TME 18 . ER stress responses have been implicated in dysregulated dendritic cell antigen presentation 12 and T cell exhaustion 14,15 in the TME. We therefore sought to determine whether PERK activity is required to support the suppressive activity of TAMs. PERK-deficient BMDMs cocultured with either melanoma cells (B16-F10) or Lewis lung carcinoma cells (LLCs) exhibited less M2 polarization compared with PERK-sufficient macrophages (Fig. 1h).
In addition, these PERK cKO M2 macrophages were less capable of blunting T cell proliferation (Fig. 1i ), suggesting that loss of PERK in macrophages can confer greater anti-tumor immunity. To test this, we transplanted wild-type ( Eif2ak3 fl/fl ) or PERK cKO mice with murine melanoma cells and measured tumor growth over the course of 16 days. Indeed, both tumor volume (Fig. 1j ) and tumor weight (Fig. 1k ) were significantly lower in the PERK cKO mice compared with wild-type mice. Moreover, reduced numbers of macrophages infiltrating into the tumor were found in the PERK cKO compared with the control mice (Fig. 1l ) and corresponded to a significant decrease in CD206 positivity (Fig. 1m ). Conversely, higher numbers of tumor-infiltrating T lymphocytes (TILs) (Fig. 1n ) and an increased frequency of IFN-γ-expressing CD8 + and CD4 + T cells (Fig. 1o,p ) were found in tumors from PERK cKO mice. Intriguingly, this inverse relationship between TAMs and TILs was detected as early as 10 d post-tumor implantation (Fig. 1k–p ) and this improved anti-tumor immunity corresponded with a noticeable increase in survival (Fig. 1q ). Together, these data indicate that PERK activation promotes an immunosuppressive M2 phenotype in macrophages and that loss of PERK not only inhibits this phenotype but also restores anti-tumor activity. Integrated stress response in macrophage activation UPR signaling partially overlaps with the integrated stress response (ISR) in which PERK is uniquely positioned to coordinate with both pathways by phosphorylating eIF2α 27 . Inhibition of eIF2α using a selective ISR inhibitor (ISRIB) was able to significantly repress M2 polarization without distorting iNOS expression in macrophages (Extended Data Fig. 2a–c ). Yet, in addition to PERK, ISR proteins general control nonderepressible 2 (GCN2), double-stranded RNA-dependent protein kinase (PKR) and heme-regulated eIF2α kinase (HRI) can respond and adapt to different environmental and pathological challenges necessary for maintaining cell homeostasis 28 . To determine whether anti-inflammatory M2 macrophages required other ISR support, we performed RNA-seq analysis of HRI, PKR and GCN2, and found that only the expression of GCN2 was significantly elevated in M2 macrophages (Extended Data Fig. 2d ). Interestingly, deletion of GCN2 did not decrease the expression of CD206, CD301, programmed cell death 1 ligand 2 (PD-L2) and Relmα in macrophages in response to IL-4 (Extended Data Fig. 2e,f ), nor adversely affect iNOS and TNF expression in M1 macrophages (Extended Data Fig. 2g,h ). Moreover, Gcn2 −/− macrophages exhibited only slight differences in metabolic function compared with Gcn2 +/+ controls responding to IL-4 and LPS + IFN-γ (Extended Data Fig. 2i,j ). Thus, PERK signaling, but not other ISR members, accounts for M2 activation. PERK signaling is crucial for metabolic reprogramming As metabolic reprogramming has been suggested to modulate the suppressive function of macrophages 29 , we next examined whether PERK-deficient M2 macrophages fail to sustain appropriate metabolic reprogramming and operation. To test this, we performed an unbiased RNA-seq analysis of M2 macrophages from wild-type and PERK cKO mice. 
Analysis of RNA-seq data by pathway enrichment and gene ontology (GO) analysis of differentially expressed genes showed that genetic ablation of PERK caused significant dysregulation of numerous pathways in macrophages, including lysosomal function, mitochondrial oxidative phosphorylation (OXPHOS), lipid metabolism, glutamine metabolism and amino acid synthesis (Fig. 2a,b)—all processes that have been deemed essential for supporting the function of M2 macrophages 24,30,31 . We then used targeted metabolomics analysis and confirmed that the cellular metabolite profiles of PERK-deficient M2 macrophages were clearly distinguishable from those of wild-type controls (Fig. 2c,d), with a marked decline in the generation of mitochondrial metabolites, histidine/pyrimidine intermediates and several crucial amino acids (Fig. 2e).

Fig. 2: Multivariate analysis of transcriptomics and metabolomics data. a,b, GSEA was performed, and enrichment scores are shown for Kyoto Encyclopedia of Genes and Genomes (KEGG) (a) and GO (b) pathway enrichment in PERK knockout M2 macrophages compared with PERK wild-type M2 macrophages (n = 2 independent experiments). NES, normalized enrichment score. c–e, Metabolomics profiling was performed, and the principal component analysis score plot (c), heatmap analysis of metabolites (d) and KEGG pathway enrichment in PERK knockout M2 macrophages compared with PERK wild-type M2 macrophages (e) are shown (n = 4 independent experiments). Full size image

Although lipid metabolism and mitochondrial fitness have been explicitly tied to M2 macrophage immunity, the initiating factor for these processes has not been fully elucidated. Inhibition of PERK signaling suppressed mitochondrial respiration (oxygen consumption rate (OCR); Fig. 3a and Extended Data Fig. 2k) and reduced overall ATP production in M2 macrophages (Fig. 3b). PERK deficiency also reduced glycolytic metabolism (extracellular acidification rate (ECAR)) to levels similar to those of naive macrophages (Extended Data Fig. 3a). This scenario could be recapitulated with the PERK inhibitor GSK2656157 (Extended Data Fig. 3c–e). Although PERK deficiency did not curb the proinflammatory activity of macrophages stimulated with LPS + IFN-γ, it slightly reduced their glycolytic metabolism but not their mitochondrial respiration (Extended Data Fig. 3a,b). Furthermore, our transcriptomic and real-time quantitative PCR (RT-qPCR) analyses showed that the expression of a number of key genes responsible for lipid metabolism and mitochondrial respiration was markedly reduced by PERK inhibition (Fig. 3c,d). This corresponded to a significant decrease in lipid uptake (Fig. 3e and Extended Data Fig. 3f) and lipolysis (Fig. 3f and Extended Data Fig. 3g) in M2 macrophages upon IL-4 stimulation. Mitochondria are the major energy-generating source of the cell. This energy is produced within tightly organized structures called cristae; the tighter and more densely packed the cristae, the greater the surface area on which the respiratory complexes of the electron transport chain (ETC) can assemble. Thus, for cells such as M2 macrophages, which rely on enhanced mitochondrial function, the number and size of available mitochondria, as well as the density of the cristae within these mitochondria, are imperative for supporting their immunological functions. Importantly, it has been suggested that PERK signaling regulates mitochondrial morphology 32 , and abnormal mitochondrial cristae ultrastructure is strongly correlated with dysfunction in mitochondrial respiratory capacity 33 .
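As context for the OCR measurements referenced above (Fig. 3a), basal respiration is conventionally extracted from a Mito Stress Test trace as the last baseline rate minus the non-mitochondrial rate measured after rotenone/antimycin A. The sketch below applies that standard arithmetic to hypothetical values; it is not the authors' analysis code.

import numpy as np

# Hypothetical OCR trace (pmol O2/min), three measurements per injection phase
ocr = np.array([82.0, 80.0, 81.0,       # baseline
                35.0, 33.0, 34.0,       # after oligomycin (ATP synthase inhibited)
                130.0, 128.0, 125.0,    # after FCCP (uncoupled, maximal)
                12.0, 11.0, 11.0])      # after rotenone/antimycin A (non-mitochondrial)

non_mito   = ocr[9:].min()              # non-mitochondrial respiration
basal      = ocr[2] - non_mito          # last baseline rate minus non-mito OCR -> 70.0
atp_linked = ocr[2] - ocr[3:6].min()    # drop caused by oligomycin -> 48.0
maximal    = ocr[6:9].max() - non_mito  # uncoupled ceiling -> 119.0

print(f"basal: {basal}, ATP-linked: {atp_linked}, maximal: {maximal}")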
Using transmission electron microscopy (TEM), we found that PERK cKO M2 macrophages displayed overall lower numbers of mitochondria that were also smaller compared with wild-type cells (Extended Data Fig. 3h ). In comparison to wild-type, PERK cKO macrophages also appeared to exhibit disorganized cristae (Fig. 3g ), having overall fewer numbers (Fig. 3h ) and wider structure (Fig. 3i ) within the intermembrane space. In addition, mitochondrial mass (Fig. 3j and Extended Data Fig. 3i ) and membrane potential (Fig. 3k and Extended Data Fig. 3j ) were both significantly reduced in PERK cKO macrophages or cells treated with GSK2656157. Given the altered cristae morphology, we further investigated whether the complexes of the ETC were affected by PERK deficiency. We found that PERK cKO M2 macrophages had significantly reduced levels of mitochondrial ETC gene and protein expression (Fig. 3l,m ). These data suggest that these cristae differences contribute to an altered metabolic phenotype between PERK wild-type and deficient macrophages. Fig. 3: PERK activity is essential for metabolic reprogramming in M2 macrophages. a , Basal OCR of IL-4-stimulated BMDMs (M2) from Eif2ak3 fl/fl or Eif2ak3 fl/fl × LysM Cre ( n = 5, mean ± s.e.m). Data were collected from five independent experiments. b , ATP levels of IL-4-stimulated BMDMs from Eif2ak3 fl/fl or Eif2ak3 fl/fl × LysM Cre mice ( n = 3, mean ± s.e.m). Data are from three independent experiments. c , Expression of genes encoding CD36, lysosomal acid lipase (LIPA), peroxisome proliferator-activated receptor γ (PPAR-γ), PPAR-γ coactivator-1β (PGC-1β) and acetyl-CoA acetyltransferase 1 (ACAT1) in M2 (IL-4) macrophages from Eif2ak3 fl/fl and Eif2ak3 fl/fl × LysM Cre mice, assessed by RNA-seq analysis. d , Expression of CD36, LIPA, PPAR-γ and PGC-1β in PERK wild-type and deficient macrophages stimulated with IL-4 by RT-qPCR analysis ( n = 4, mean ± s.e.m). Data represent two independent experiments. a.u., arbitrary units. e , Representative histogram (left) and quantitative plot (right) of BODIPY FL C 16 staining in BMDMs treated with IL-4 ( n = 4, mean ± s.e.m). Data represent three independent experiments. f , Representative histogram (left) and quantitative plot (right) of BODIPY (493/503) staining in BMDMs treated with IL-4 ( n = 4. mean ± s.e.m). Data represent three independent experiments. g , Representative images of mitochondrial and cristae (red arrows) from TEM of IL-4-stimulated BMDMs from Eif2ak3 fl/fl or Eif2ak3 fl/fl × LysM Cre mice. Scale bar, 500 nm. h , i , Measurements of cristae area ( h ) and cristae width ( i ) as determined using ImageJ. Each dot represents the average of all mitochondria from one cell ( n = 22. mean ± s.e.m). Data represent two biological replicates. j , Representative histogram (left) and quantitative plot (right) of MitoTracker Green + staining in BMDMs treated with IL-4 ( n = 4, mean ± s.e.m). Data represent three independent experiments. k , Representative histogram (left) and quantitative plot (right) of MitoTracker Orange + staining in BMDMs treated with IL-4 ( n = 4, mean ± s.e.m). Data represent three independent experiments. l , Expression of genes encoding molecules involved in the ETC reaction in M2 (IL-4) macrophages from Eif2ak3 fl/fl or Eif2ak3 fl/fl × LysM Cre mice, assessed by RNA-seq analysis. m , Immunoblots of mitochondrial proteins ATP5A, UQCRC2, MTCO1, SDHB, NUDFB8 and PHB1 from PERK-sufficient or -deficient BMDMs stimulated with IL-4. Data represent three independent experiments. 
n , Mitochondrial calcium flux (Rhod-2) from PERK-sufficient or -deficient BMDMs treated with IL-4 determined and normalized by wild-type naive M0 macrophages. The arrow represents stimulation using 10 μM ionomycin ( n = 6, mean ± s.e.m); data represent three independent experiments. All data were analyzed using a two-tailed, unpaired Student’s t -test ( a , b , d – f, h – k ) or a two-tailed, paired Student’s t- test. Source data Full size image Crosstalk between the ER and the mitochondria has been demonstrated to promote bioenergetics in cells 34 and is thought to be mediated by calcium signaling. We noted that the expression of genes encoding mitochondrial calcium transport was downregulated in PERK cKO M2 cells (Extended Data Fig. 3k ). Furthermore, we assessed the mitochondrial calcium flux in PERK intact or deficient M2 BMDMs and found that PERK ablation resulted in a profound reduction of calcium flux within the mitochondria (Fig. 3n and Extended Data Fig. 3l ). Collectively, these findings suggest that suppression of PERK signaling may differentially disrupt mitochondrial homeostasis by preventing adequate crosstalk between the ER and mitochondria. The result of this may then prevent M2 macrophages from being able to produce sufficient energy to fully sustain their immunosuppressive function. PERK induces serine biosynthesis via ATF-4 Proliferating cells, including M2 macrophages, require ATP to build biomass and thus require cellular building blocks such as nucleotides for genome replication, lipids for membrane integrity and amino acids for protein biosynthesis. Notably, however, the pentose phosphate pathway, a component of glycolysis that is crucial to the needs of rapidly dividing cells, is markedly lower in M2 compared with M1 macrophages 35 . It has been shown that the serine biosynthesis pathway (SBP) can provide one-carbon units for de novo nucleotide biosynthesis, supporting the expansion of T cells 36 and tumor cells 37 . We therefore hypothesized that M2 macrophages may also use SBP to support the biosynthesis of various macromolecules required for cellular proliferation and function. Intriguingly, we observed deletion of PERK to significantly reduce the metabolism of amino acids (that is, serine, glycine and threonine) and nucleotides (that is, histidine and pyrimidine) in macrophages responding to IL-4 (Fig. 2 ), suggesting a new role for PERK in tuning the SBP in M2 macrophages. Using GSEA, we compared serine/glycine one-carbon metabolism by IL-4-stimulated mouse peritoneal macrophages versus naive peritoneal macrophages 22 and TAMs versus nontumor macrophages from patients with lung cancer 23 . Our data revealed a significant enrichment of this pathway in both IL-4-stimulated macrophages and TAMs (Fig. 4a,b ). Next, we performed mass spectrometry (MS)-based metabolite profiling and found that the levels of intracellular 3-phosphoglycerate, serine and glycine were significantly elevated by IL-4-stimulated M2 cells compared with M1 macrophages (Fig. 4c ). We also used targeted metabolomics analysis and identified a number of metabolites associated with the serine/glycine one-carbon metabolic pathway to be significantly reduced in PERK cKO M2 macrophages compared with wild-type (Fig. 4d ). Deletion of PERK not only negatively impacted intrinsic serine levels, but also surprisingly downregulated the transcript levels of the key SBP genes ( Phgdh , Psat1 and Psph ) 38 induced by M2 macrophages (Fig. 4e,f and Extended Data Fig. 4a ). 
Similarly, inhibition of the PERK downstream target, eIF2α, could decrease the production of intracellular serine (Extended Data Fig. 2l ). It has been separately reported in other cells that activated PERK signaling induces the translational activation of ATF-4 (ref. 39 ) and that ATF-4 regulates a set of targets involved in serine/glycine metabolism and mitochondrial function 21 , 40 ; however, no studies have determined whether PERK signals through ATF-4 to support SBP in macrophages. We, therefore, investigated the interconnection between PERK–ATF-4 signaling and SBP in M2 macrophages by first analyzing a chromatin immunoprecipitation sequencing (ChIP-seq) dataset of ATF-4–DNA binding in M0 versus M2 macrophages obtained from a previously published study (see accession no. GSE140029 ) 41 . We found that ATF-4-bound DNA was enriched in M2 compared with M0 (histogram, Fig. 4g ); and, importantly, the enriched ATF-4 target gene loci were associated with downstream PERK signaling ( Nfe2l2 and Ddit3 ) and also serine one-carbon metabolism ( Psat1 , Shmt2 , Mtfhr , Mthfd1l and Mthfd2 ) in M2 macrophages (Fig. 4g ). We also found that ATF-4 could bind to genes involved in the regulation of mitochondrial respiration and lipid metabolism ( Uqcrq , Atp9b , Ndufa4l2 , Pparg and Abca1 ). Moreover, PERK ablation reduced the protein expression of ATF-4, phosphoglycerate dehydrogenase (PHGDH) and PSAT1 (Fig. 4h ). We next generated retrovirally mediated short hairpin (sh)RNA against ATF-4 to further understand the interrelationship of PERK, ATF-4 and M2 activation. Similar to the PERK cKO , ATF-4 knockdown reduced the expression of ATF-4, PHGDH and PSAT1 (Fig. 4i,j ), and also suppressed the IL-4-induced expression of CD206, CD301, PD-L2 and Relmα (Fig. 4k ). Overexpression of ATF-4 in PERK cKO could rescue the defective M2 activation and metabolic function (basal OCR and basal ECAR) in BMDMs stimulated with IL-4 (Fig. 4l-n ). Together, our results strongly imply that activated PERK regulates ATF-4 to reprogram cellular metabolic networks tailoring M2 activation. Fig. 4: PERK regulates intrinsic serine biosynthesis via ATF-4. a , Enrichment plot of serine glycine one-carbon metabolism (SGOC) genes in IL-4c-treated mouse peritoneal macrophages compared with naive (phosphate-buffered saline, PBS) macrophages by GSEA. FDR, false discovery rate. b , GSEA result comparing SGOC genes between TAMs and nontumor macrophages from patients with lung carcinoma. c , Abundance of 3-phosphoglycerate, serine and glycine in extracts of BMDMs cultured for 24 h in M1- (LPS + IFN-γ) or M2 (IL-4)-stimulating conditions, assessed by MS ( n = 4, mean ± s.e.m). Data are collected from four independent experiments. d , Targeted metabolomics profiling indicated metabolites from Eif2ak3 fl/fl or Eif2ak3 fl/fl × LysM Cre BMDMs stimulated with IL-4 ( n = 4 independent experiments). e , Intracellular serine levels in extracts of PERK wild-type or knockout BMDMs treated with IL-4 ( n = 3, mean ± s.e.m). The serine level from naive (M0) wild-type macrophages is indicated by a dotted horizontal line. Data represent two independent experiments. f , Expression of genes encoding PHGDH and PSAT1 in M2 (IL-4) macrophages from Eif2ak3 fl/fl or Eif2ak3 fl/fl × LysM Cre mice, assessed by RNA-seq analysis. g , Comprehensive heatmap of ATF-4-binding regions by ChIP-seq. TSS, transcription start site. h , Immunoblot analysis of PERK, ATF-4, PHGDH, PSAT1 and β-actin in macrophages from PERK wild-type or deficient BMDMs. 
Data represent three independent experiments. i , j , RT-qPCR ( i ; n = 4, mean ± s.e.m) and immunoblot ( j ) analysis of ATF-4, PHGDH, PSAT1 and β-actin in BMDMs transduced with retrovirus-expressing ATF-4 or luciferase (Luc) shRNA, and stimulated with IL-4. Data represent two independent experiments. a.u., arbitrary units. k , Expression of CD206, CD301, PD-L2 and Relmα in BMDMs transduced with retrovirus-expressing ATF-4 or Luc shRNA and stimulated with IL-4 ( n = 3, mean ± s.e.m). Data represent two independent experiments. l – n , PERK wild-type (WT) or deficient (KO) BMDMs were transduced with either retrovirus overexpressing a control reporter gene (EV) or a reporter gene plus the Atf4 sequence ( Atf4 O/E ), and stimulated with IL-4. Representative expression of CD206 and CD301 was determined by flow cytometry ( l ; n = 4, mean ± s.e.m), and basal OCR ( m ) and basal ECAR ( n ) were measured using Seahorse Flux analyzer ( n = 3, mean ± s.e.m). Data represent two independent experiments. All data were analyzed using a two-tailed, unpaired Student’s t -test ( c , e , I , k ) or a one-way ANOVA with Dunnett’s multiple comparisons test ( m and n ). Source data Full size image Serine biosynthesis contributes to M2 activation We next sought to determine whether serine metabolism was necessary for immunosuppressive M2 macrophages. We treated BMDMs with the pharmacological PHGDH inhibitor CBR-5884 (ref. 42 ) or retrovirally mediated shRNA targeting Phgdh (Extended Data Fig. 4b ) and Psat1 (Extended Data Fig. 4c ). Inhibition of SBP enzymes substantially decreased the expression of CD206, CD301, PD-L2 and resistin-like molecule α (Relmα; Fig. 5a and Extended Data Fig. 4d,e ). We also observed that inhibition of PHGDH via another selective inhibitor NCT-503 (ref. 43 ) strikingly suppressed Relmα + M2 activation (Fig. 5b ) and proliferation (Fig. 5c ) in macrophages on IL-4c administration in the mouse peritoneal cavity in vivo. To study the effect of serine metabolism in macrophages, we generated myeloid cell conditional Psat1 knockout animals by crossing Psat1 fl/fl with LysM Cre mice ( Psat1 fl/fl × LysM Cre; designated as PSAT1 cKO ). Similar to the effects found with pharmacological inhibition and genetic knockdown, lower expression of CD206, CD301, PD-L2 and Relmα was detected in PSAT1 cKO BMDMs stimulated with IL-4 compared with those from wild-type ( Psat1 fl/fl ) mice (Fig. 5d,e ). In addition, PSAT1 deficiency had no effect on M1 polarization toward an anti-inflammatory phenotype (Extended Data Fig. 4f,g ). In comparison with wild-type macrophages, PSAT1 cKO BMDMs cocultured with B16-F10 or LLCs exhibited a marked attenuation of the suppressive M2 phenotype (Fig. 5f ) and exhibited less capacity to restrain T cell proliferation in vitro (Fig. 5g ). To define the intrinsic role of PSAT1 in immunosuppressive macrophages during tumorigenesis, we transplanted Psat1 fl/fl × LysM Cre and wild-type mice with B16-F10 melanoma cells. Delayed growth and reduced tumor weight were observed in Psat1 fl/fl × LysM Cre mice compared with controls (Fig. 5h,i ). A significant reduction in cell number and M2-positive phenotype was detected in TAMs from Psat1 fl/fl × LysM Cre mice (Fig. 5j,k ). We also found that elimination of PSAT1 in macrophages could enhance anti-tumor T cell responses, resulting in increased percentages of TILs and IFN-γ-expressing CD8 + and CD4 + T cells from B16-F10 tumor-bearing mice (Fig. 5l–n ). Fig. 
Fig. 5: Serine biosynthesis promotes an immunosuppressive phenotype in macrophages. a, Expression of CD206 and CD301 in BMDMs transduced with retrovirus expressing PHGDH shRNA (middle) or PSAT1 shRNA (bottom) (n = 3, mean ± s.e.m). Data represent three independent experiments. b,c, Representative histogram (left) and quantitative plot (right) of Relmα⁺ (b) or Ki67⁺ (c) mouse peritoneal macrophages after treatment with IL-4c in the presence or absence of NCT-503 (n = 6 mice per group, mean ± s.e.m). Each data symbol represents one individual. d,e, Expression of CD206, CD301 (d), PD-L2 and Relmα (e) in IL-4-stimulated BMDMs from Psat1 fl/fl or Psat1 fl/fl × LysM Cre mice (n = 3, mean ± s.e.m). Data represent three independent experiments. f, Expression of CD206 and CD301 by BMDMs from Psat1 fl/fl or Psat1 fl/fl × LysM Cre mice cocultured with B16-F10 melanoma cells (top) or LLC cells (bottom) for 72 h (n = 2, mean ± s.e.m). Data were collected from two independent experiments. g, Proliferation of CTV-labeled OT-I CD8 T cells activated with anti-CD3 and anti-CD28 and cocultured with PSAT1 wild-type or knockout BMDMs treated with IL-4 (M2) or LPS + IFN-γ (M1) at a ratio of 1:10 for 72 h (n = 2, mean ± s.e.m). Data were collected from two independent experiments. h,i, Tumor growth (h) and tumor weight (i) of B16-F10 melanoma from Psat1 fl/fl or Psat1 fl/fl × LysM Cre mice (n = 5 mice per group, mean ± s.e.m). Data were collected from two independent experiments. j,k, Absolute number of TAMs (j) and frequency of CD206⁺ TAMs (k) in PSAT1 wild-type or knockout mice (n = 4 mice per group, mean ± s.e.m). l–n, Absolute number of TILs (l) and frequency of IFN-γ⁺ CD8 (m) or CD4 (n) T cells in PSAT1 wild-type or knockout mice (n = 4 mice per group, mean ± s.e.m). Data represent two independent experiments. All data were analyzed using a two-tailed, unpaired Student's t-test (b–n) or a one-way ANOVA with Dunnett's multiple comparisons test (a).

PSAT1 is required for mitochondrial fitness

Cellular serine is actively transported into the mitochondria 44 and its availability is known to maintain mitochondrial function and support mitochondrial fatty acid metabolism 45. We therefore asked whether the mitochondrial dysfunction present in PERK cKO macrophages was due to diminished SBP metabolism. We observed that the level of intracellular serine was markedly reduced in PSAT1 cKO M2 macrophages compared with wild-type controls (Fig. 6a) and, similar to PERK cKO, inhibition of PSAT1-mediated serine biosynthesis appeared to decrease fatty acid oxidation (FAO; OCR), glycolytic activity (ECAR) and ATP generation in macrophages under IL-4 stimulation (Fig. 6b,c and Extended Data Fig. 4h–j). Interestingly, this reduction in energy production was not due to dysregulation of mitochondrial mass or membrane potential (Fig. 6d–g). In addition, we found that loss of PSAT1 had no negative impact on ETC assembly compared with controls (Fig. 6h). However, calcium flux into the mitochondria was significantly compromised in PSAT1-null M2 macrophages (Fig. 6i). Together, these findings indicate that serine biosynthesis plays an important role in mitochondrial fitness by supporting FAO and mitochondrial calcium flux, but has no direct bearing on mitochondrial respiratory chain assembly.
Fig. 6: Serine biosynthesis contributes to mitochondrial fitness independent of respiratory chain assembly. a, Intracellular serine levels in extracts of IL-4-stimulated BMDMs from Psat1 fl/fl or Psat1 fl/fl × LysM Cre mice (n = 4, mean ± s.e.m). The serine levels from M0 are indicated by a dotted horizontal line. Data represent two independent experiments. b, Basal OCR (left) and basal ECAR (right) of PSAT1 wild-type or knockout BMDMs stimulated with IL-4 (n = 3, mean ± s.e.m). Data were collected from three independent experiments. c, ATP production of PSAT1 wild-type or knockout BMDMs stimulated with IL-4 (n = 4, mean ± s.e.m). Data represent two independent experiments. d, Representative histogram (left) and quantitative plot (right) of MitoTracker Green⁺ staining in BMDMs transduced with retrovirus expressing Luc, PHGDH or PSAT1 shRNA, and stimulated with IL-4 (n = 3, mean ± s.e.m). Data represent three independent experiments. ns, not significant. e, Representative histogram (left) and quantitative plot (right) of MitoTracker Orange⁺ staining in BMDMs transduced with retrovirus expressing Luc, PHGDH or PSAT1 shRNA, and stimulated with IL-4 (n = 3, mean ± s.e.m). Data represent three independent experiments. f, Representative histogram (left) and quantitative plot (right) of MitoTracker Green⁺ staining in PSAT1 wild-type or knockout BMDMs stimulated with IL-4 (n = 3, mean ± s.e.m). Data represent three independent experiments. g, Representative histogram (left) and quantitative plot (right) of MitoTracker Orange⁺ staining in PSAT1 wild-type or knockout BMDMs stimulated with IL-4 (n = 3, mean ± s.e.m). Data represent three independent experiments. h, Immunoblot analysis of mitochondrial ETC complexes from PSAT1 wild-type or knockout BMDMs stimulated with IL-4. Data represent two independent experiments. i, Mitochondrial calcium uptake (Rhod-2) of PERK wild-type or knockout BMDMs stimulated with IL-4 (n = 3, mean ± s.e.m). The arrow indicates stimulation with 10 μM ionomycin. Data were collected from three independent experiments. All data were analyzed using a two-tailed, unpaired Student's t-test (a, b, f and g), a two-tailed, paired Student's t-test (i) or an ordinary one-way ANOVA with Dunnett's multiple comparisons test (d and e).

PERK–PSAT1 signaling facilitates epigenetic regulation

In addition to assisting mitochondrial function, SBP plays an important role in amino acid homeostasis by regulating intracellular α-KG levels for the maintenance of epigenetic modifications 5,46. PSAT1 requires glutamine-derived glutamate as a substrate for a transamination reaction, resulting in the production of serine as well as α-KG. In our data, we found that loss of PERK resulted in decreased expression of glutamine and glutamate metabolism genes (Fig. 2b,e). In support of this, we quantified the levels of intracellular glutamine and α-KG and found that PERK cKO led to a notable reduction in glutamine consumption and α-KG production in IL-4-stimulated M2 macrophages (Fig. 7a,b). As expected, the cellular level of α-KG was also strongly dysregulated in PSAT1-ablated M2 macrophages compared with wild-type controls (Fig. 7c). These findings suggest that PERK fine-tunes PSAT1 activity to produce α-KG in M2 macrophages.

Fig. 7: Dysregulation of SBP suppresses JMJD3-mediated histone demethylation.
a,b, Glutamine consumption (a) and intracellular α-KG levels (b) from PERK wild-type or knockout BMDMs treated with IL-4 (n = 4, mean ± s.e.m). The levels from M0 are indicated by a dotted horizontal line. Data represent two independent experiments. c, Intracellular α-KG levels from PSAT1 wild-type or knockout BMDMs treated with IL-4 (n = 4, mean ± s.e.m). The levels from M0 are indicated by a dotted horizontal line. Data represent two independent experiments. d,e, Immunoblot analysis of the histone methyl mark H3K27me3, PSAT1, PERK and histone H3 from PERK (d) or PSAT1 (e) wild-type and knockout BMDMs treated with IL-4. Data represent three independent experiments. f, H3K27me3 histone modifications of Irf4, Pparg, Phgdh and Mgl2 from PERK wild-type and knockout BMDMs stimulated with IL-4. Data represent three independent experiments. g,h, RT-qPCR analysis of indicated M2 genes from PERK (g) or PSAT1 (h) wild-type and knockout BMDMs cultured with IL-4 for 6 h in the presence or absence of dmKG (1 mM) or GSK-J4 (25 μM) (n = 3, mean ± s.e.m). Data were collected from three independent experiments. All data were analyzed using either a two-tailed, unpaired Student's t-test (a–c) or an ordinary one-way ANOVA with Dunnett's multiple comparisons test (g and h).

α-KG is an essential cofactor for JMJD3-mediated histone demethylation, and JMJD3–α-KG signaling has previously been implicated in M2 activation in macrophages 29. We therefore reasoned that the decrease in immunosuppressive M2 properties caused by PSAT1 and PERK deficiencies may be due to reduced histone demethylation as a result of lower α-KG availability. Indeed, we found that histone methylation marks on H3K27 were elevated by PSAT1 cKO and PERK cKO (Fig. 7d,e). This hypermethylation was not due to decreased expression of Jmjd3 mRNA (Extended Data Fig. 5a,b) or protein (Extended Data Fig. 5c,d). To understand whether the hypermethylation of H3K27 was specifically occurring on M2-related genes, we performed an unbiased ChIP-seq analysis of PERK-sufficient and -deficient macrophages. In comparison to PERK wild-type macrophages, PERK cKO macrophages had significantly more regions with increased H3K27 methylation in the M2 condition than in the M1 condition (Extended Data Fig. 5e). As expected, we observed increased H3K27 methylation at the loci of M2 genes, including Irf4, Pparg and Mgl2, in PERK-deficient M2 cells (Fig. 7f). In contrast, the H3K27 methylation state of those M2 gene promoters was unaffected in both naive macrophages (M0) and macrophages stimulated with LPS + IFN-γ (Extended Data Fig. 5f). Moreover, the distributions of H3K27me3 at M1-related genes were not affected by PERK deficiency in M0, M1 or M2 conditions (Extended Data Fig. 5g). We then asked whether supplementation of α-KG could rescue the immunosuppressive phenotype in PERK- and PSAT1-deficient M2 macrophages. We found that the expression of M2-associated genes was restored in both PSAT1 cKO and PERK cKO M2 macrophages by supplementation with dimethyl α-KG (dmKG) (Fig. 7g,h), and this rescue could be reversed by inhibition of JMJD3 using the selective inhibitor GSK-J4. Collectively, our data strongly suggest that JMJD3-dependent histone modifications are important for M2 gene expression and are sensitive to the α-KG availability mediated by an unconventional PERK–PSAT1 metabolic pathway.
Inhibition of PERK signaling promotes anti-tumor immunity

To recapitulate our findings in therapeutic models, we next evaluated the anti-tumor effects of GSK2656157 (a PERK inhibitor) and NCT-503 (a PHGDH inhibitor) (Extended Data Fig. 6a). Delayed tumor progression and development was observed in B16-F10 tumor-bearing mice treated with either small-molecule inhibitor (Fig. 8a,b), corresponding with a profound reduction in the numbers and immunosuppressive activity of intratumoral TAMs (Fig. 8c,d). These treatments induced higher expansion of CD8⁺ and CD4⁺ T cells (TILs) expressing IFN-γ, but this increase was only statistically significant for the GSK2656157-treated mice (Fig. 8e–g). In addition, treatment with GSK2656157 or NCT-503 had no marked impact on other tumor-infiltrating immune cell populations (Extended Data Fig. 6b), but both significantly extended survival in tumor-bearing mice (Fig. 8h). We noted that, although NCT-503 could strikingly reduce tumor growth and weight, this treatment had much more variable effects on mouse body weight, suggesting poorer tolerance (Extended Data Fig. 6c). Given that GSK2656157 appeared to have better outcomes overall, we then tested whether it could work synergistically with anti-PD-1 immunotherapy to further suppress tumor progression. We observed that GSK2656157 potentiated the anti-tumor efficacy of an anti-PD-1 monoclonal antibody against B16-F10 melanoma (Extended Data Fig. 6d and Fig. 8i). Overall, our data demonstrate a crucial role for PERK signaling in macrophage suppressive activity and show that a PERK inhibitor provides significant anti-tumor efficacy. Moreover, PERK inhibition in combination with immune checkpoint blockade can reprogram the TME toward a more immunostimulatory environment, leading to better cancer treatment.

Fig. 8: Therapeutic PERK inhibition suppresses tumorigenesis. a,b, Tumor growth (a) or weight (b) from mice bearing B16-F10 melanoma treated with either GSK2656157 or NCT-503. Drug administration is indicated by arrows (n = 9 in vehicle, n = 10 in GSK and NCT treatment groups; mean ± s.e.m). c, Absolute number of TAMs (n = 7 in vehicle, n = 8 in GSK and NCT treatment groups; mean ± s.e.m). d, Frequency of CD206⁺ TAMs (n = 8, mean ± s.e.m). e–g, Absolute number of TILs (e), frequency of IFN-γ⁺ CD8 (f) and IFN-γ⁺ CD4 (g) T cells (n = 7 in vehicle, n = 10 in GSK and NCT treatment groups; mean ± s.e.m). h, Survival analysis of mice treated with vehicle, GSK2656157 or NCT-503 (n = 10 mice per group). i, Tumor growth from mice bearing B16-F10 melanoma and treated with IgG2a control, αPD-1, GSK2656157 or αPD-1 + GSK2656157 (n = 7 mice per group). All data were collected from two independent experiments. Data were analyzed by an ordinary one-way ANOVA with Dunnett's multiple comparisons test (a–g) or the Mantel–Cox test for survival (h).

Discussion

Classic TH2 immune responses, such as those elicited by helminth infection, are required not only to promote worm clearance but also to resolve inflammatory tissue damage 47. Similarly, tumor sites, which are considered a form of chronic stress, also induce sustained inflammation 48. An emerging body of research has suggested that macrophages, mediated by cellular metabolism and/or type 2 cytokines, play critical roles in orchestrating tissue inflammation during pathogenic insults 49,50.
Yet, the molecular drivers that contribute to the metabolic and energetic rewiring of macrophage effector function remain elusive. ER stress responses have recently emerged as crucial regulatory processes underlying multiple essential cellular functions in addition to proteostasis. In the present study, we found that the ER protein PERK acts as a metabolic nexus point that connects extracellular cues to intracellular reprogramming. Both IL-4 and the TME are initiation factors that stimulate the immunosuppressive phenotype, and these environmental cues are coordinated with and 'translated' by PERK signaling to control the processes necessary for promoting the immunosuppressive activity of M2 macrophages. The metabolic pathways mediated by PERK activity could effectively result in suppressed T cell effector functions, and inhibition of PERK was able to restore the cell expansion and IFN-γ production of T cells, evoking better anti-tumor immunity.

It has become evident that metabolism dictates immunity 6. The widespread metabolic derailment found in PERK cKO macrophages is evidence that PERK acts as a vital metabolic hub that conveys cellular signals and/or demands for effector function. Our results indicated that PERK signaling was required for mitochondrial bioenergetics, upregulating mitochondrial mass, cristae ETC activity and calcium exchange. We found that the mitochondrial respiration and FAO critical for meeting the energy demands of M2 macrophages were mediated by PERK signaling. Moreover, activated PERK signaling induced the downstream transcription factor ATF-4 to regulate PSAT1-mediated serine biosynthesis, which in turn generated serine and supported mitochondrial FAO and calcium signaling. We also found that the activity of PSAT1 produced α-KG, which could then support JMJD3 in epigenetic histone modifications to promote immunosuppressive gene expression in macrophages. These functions underlie the metabolic pathways required for macrophage M2 activation under harsh pathological insults. Together, these data uncover an unexpected molecular interconnection between PERK and other important organelles within the macrophage, and the relationship between PERK and PSAT1-mediated serine biosynthesis provides potential strategies to reprogram or edit M2 macrophages that could benefit the treatment of cancers or other inflammatory diseases.

Previous studies have illustrated that deletion of the transcription factor ATF-4 affects mitochondrial respiration, glycolysis and amino acid metabolism, leading to impaired CD4⁺ T cell proliferation and effector function 51. In addition, defective nonessential amino acid metabolism, such as serine synthesis, attenuates the proliferative capacity of T lymphocytes to modulate adaptive immunity 36. We have shown a previously undescribed role for activated PERK and downstream ATF-4 signaling in inducing the expression of the enzymes (PHGDH and PSAT1) that divert intermediary metabolites from glycolysis to de novo serine biosynthesis. Perturbations of PHGDH and PSAT1, or inhibition of PERK, all effectively lowered the cellular pool of serine. In addition, genetic ablation of PSAT1 significantly decreased mitochondrial FAO and mitochondrial calcium flux in M2 macrophages. Notably, the expansion of suppressive M2 macrophages seen in mice after IL-4c administration or tumor challenge did not occur when de novo serine biosynthesis was inhibited.
This finding may reflect reduced proliferation and function of these cells due to diminished essential macromolecule biosynthesis and mitochondrial FAO supported by the serine metabolic program 38,45. Glutamine-derived glutamate can contribute to the production of glycine via the serine synthesis pathway, through the transamination of 3-phosphohydroxypyruvate to phosphoserine by PSAT1. Reports have indicated that PSAT1 governs the intracellular levels of serine/glycine and α-KG and is a metabolic vulnerability in cancer cells 52,53,54. Our genetic PERK- and PSAT1-deficient models provide a unique avenue to explore these metabolic dynamics in macrophages. We found that M2 activation was associated with increased glutamine/glutamate metabolism, and glutamine utilization was markedly diminished by PERK ablation. Importantly, inhibition of PSAT1 or PERK significantly reduced the cellular concentrations of α-KG in M2 macrophages. α-KG is known not only to support the activity of the citrate cycle 55, but also to serve as an essential cofactor for the histone demethylase JMJD3. JMJD3 has previously been found to promote immune cell functionality 31,56; thus, loss of α-KG potentially prevented the activity of JMJD3, resulting in hypermethylation of histone H3K27. It is also likely that α-KG can mediate the activity of other epigenetic regulators, such as ten-eleven translocation (TET) enzymes, to control immune cell fate 5. It has been reported that the activity of TET2 is important to repress proinflammatory activity in macrophages 57 and that loss of TET2 fails to promote/sustain the immunosuppressive function of macrophages in the TME 58. Nevertheless, our results suggest that the cellular production of α-KG is necessary for the metabolic reprogramming and epigenetic modification of M2 macrophages and is, in part, mediated by PERK–PSAT1 signaling.

The relationship between PERK and PSAT1 suggests that this axis could be an effective therapeutic target against melanoma. We found that treatment of melanoma with GSK2656157 or NCT-503 markedly delayed tumor growth and suppressed the immunosuppressive activation of TAMs. However, only GSK2656157 could further enhance T cell anti-tumor immunity, as demonstrated by increased IFN-γ⁺ TILs. GSK2656157 was also able to boost the efficacy of anti-PD-1 immunoblockade and confer protection against melanoma. Collectively, our study identifies a new role for PERK in the regulation of the metabolic circuits and epigenetic modifications that support immunosuppressive M2 activation and function in macrophages. Our findings suggest that modulation of PERK signaling may offer therapeutic potential in diseases in which immunosuppressive M2 macrophages have deleterious effects.

Methods

Animals and in vivo experiments

C57BL/6J, LysM Cre and Gcn2⁻/⁻ mice were purchased from Jackson Laboratory. Psat1 tm1a(KOMP)Wtsi/Mmucd (Psat1 fl/fl) mice were purchased from the Mutant Mouse Resource and Research Center (MMRRC). Eif2ak3 fl/fl and OT-I mice were provided by S. Adoro and A. Huang, respectively. All mice were bred and maintained in specific pathogen-free conditions under protocols approved by institutional animal care at Case Western Reserve University School of Medicine, and both male and female mice were used at age 8–12 weeks.
For IL-4c experiments, age- and sex-matched C57BL/6J, PERK wild-type or PERK cKO mice were injected intraperitoneally with or without 30 mg kg⁻¹ of NCT-503 (Cayman Chemical) and with 300 μl of 3% thioglycolate (Sigma-Aldrich) immediately before IL-4 complexed to the monoclonal antibody anti-IL-4 (IL-4c; containing 5 μg of IL-4 (PeproTech) and 25 μg of anti-IL-4, clone 11B11, BioXcell) 59.

For all tumor experiments, 5 × 10⁵ B16-F10 melanoma cells were subcutaneously transplanted into age- and sex-matched mice as described. Tumors were measured every 2 d using calipers, starting at day 7 post-tumor injection until day 15, 16 or 18. Tumor volume was calculated using the formula (length × width × (length × width)^0.5) × π/6 (a worked example is sketched at the end of this section). For the therapeutic drug treatment, mice were given an intraperitoneal injection of 100 μl of dimethylsulfoxide (vehicle), 30 mg kg⁻¹ of GSK2656157 (Selleck Chem) or 30 mg kg⁻¹ of NCT-503 (Selleck Chem) on days 8, 10 and 12 post-tumor injection. Tumors were measured every 2 d starting at day 8 and until day 16. The maximal tumor size cut-off was determined to be 2 cm for non-endpoint studies. For the anti-PD-1 tumor experiment, mice were given 200 μg per mouse of control immunoglobulin (Ig)G2a (Leinco Technologies), 200 μg per mouse of anti-PD-1 (Leinco Technologies), 30 mg kg⁻¹ of GSK2656157 or combination therapy of anti-PD-1 + GSK2656157 on days 8, 10 and 12 post-tumor injection. Tumors were measured every 2 d starting on day 8 until day 18.

Cell lines

B16-F10 (CRL-6475) melanoma cells and LL/2 (LLC; CRL-1642) Lewis lung carcinoma cells were purchased from American Type Culture Collection and maintained in complete medium (RPMI 1640 containing 2 mM l-glutamine, 100 U ml⁻¹ of penicillin–streptomycin and 10% fetal bovine serum (FBS)). Cell lines were passaged for a maximum of 20 passages.

Tumor digestion and cell isolation

Tumors were minced in RPMI with 2% FBS, DNase I (1 μg ml⁻¹, Sigma-Aldrich) and collagenase (1 mg ml⁻¹, Sigma-Aldrich), followed by digestion at 37 °C for 1 h and filtration through a 45-μm cell strainer. Filtered cells were incubated with ACK Lysing Buffer (Thermo Fisher Scientific) to lyse red blood cells and then washed with FACS buffer (phosphate-buffered saline (PBS) + 1% FBS + 0.1% sodium azide). Leukocyte enrichment was performed by density gradient centrifugation (800g, 30 min) at 25 °C with 40% and 80% Percoll (GE Healthcare).

Preparation of macrophages from bone marrow and macrophage activation

Bone marrow cells were differentiated in the presence of recombinant mouse macrophage colony-stimulating factor (M-CSF; 20 ng ml⁻¹; PeproTech) in complete medium (RPMI 1640 containing 10 mM glucose, 2 mM l-glutamine, 100 U ml⁻¹ of penicillin–streptomycin and 10% FBS) for 7 d. Fresh complete medium with M-CSF was supplemented on day 6 before use. Day 7 macrophages were washed and stimulated with IL-4 (20 ng ml⁻¹; PeproTech) or LPS (20 ng ml⁻¹; Sigma-Aldrich) plus IFN-γ (50 ng ml⁻¹; PeproTech), in the absence or presence of 5 μM GSK2656157 (Cayman Chemical), 30 μM CBR-5884 (Cayman Chemical) or 5 μM ISRIB (Cayman Chemical). Macrophages were harvested after 24 h and analyzed by flow cytometry for markers of M2 or M1 activation. For tumor coculture experiments, macrophages and tumor cells were collected on day 7. Tumor cells were plated at a density of 5–6 × 10⁵ per well in a 12-well plate for at least 1 h before the addition of macrophages. Once tumor cells were attached, 2 × 10⁵ macrophages were added to each well for 72 h. For some experiments, macrophages were cultured with IL-4 in the presence or absence of 1 mM dmKG (Sigma-Aldrich) or 25 μM GSK-J4 (Selleck Chem) for 6 h, and cells were harvested for further experiments as indicated.
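As a worked example of the tumor-volume formula quoted above, here is a minimal Python sketch; the function name, units, and sample numbers are ours for illustration and are not from the paper.

```python
import math

def tumor_volume(length_mm: float, width_mm: float) -> float:
    """Tumor volume per the formula quoted above:
    V = (length * width * sqrt(length * width)) * pi / 6.
    Caliper inputs in mm give a volume in mm^3."""
    lw = length_mm * width_mm
    return lw * math.sqrt(lw) * math.pi / 6.0

# Illustrative caliper readings (mm), not experimental data:
print(f"{tumor_volume(10.0, 8.0):.1f} mm^3")  # -> 374.7 mm^3
```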
In vitro T cell proliferation assay

Mouse splenic CD8⁺ T cells from OT-I mice were isolated using the EasySep Mouse CD8α Positive Selection Kit (STEMCELL Technologies). Isolated OT-I T cells were labeled with the CellTrace Violet (CTV) Cell Proliferation Kit (Thermo Fisher Scientific) according to the manufacturer's instructions. The CTV-labeled CD8⁺ T cells (0.5 × 10⁵ per well) were cultured with plate-bound anti-CD3 (1 µg ml⁻¹; clone 145-2C11; Thermo Fisher Scientific) and anti-CD28 (5 µg ml⁻¹; clone 37.51; Thermo Fisher Scientific) in complete RPMI medium containing 55 µM β-mercaptoethanol and 10 ng ml⁻¹ of IL-2. IL-4- or LPS + IFN-γ-stimulated macrophages were then added to the T cell cultures at a ratio of 1:10 (macrophages:T cells) for 72 h. Cells were then harvested and the CTV-positive signal in the CD8⁺ gate was measured by flow cytometry.

Flow cytometry

For surface staining, cells were kept at 4 °C and blocked with 5 μg ml⁻¹ of anti-CD16/32 (clone 93; eBiosciences) before staining for CD45 (clone 30-F11, BioLegend), Gr1 (clone 8C5, BioLegend), CD3 (clone 17A2, eBiosciences), CD3 (clone 145-2C11, BioLegend), CD4 (clone GK1.5, BioLegend), CD8 (clone 53-6.7, BioLegend), CD19 (clone 6D5, BioLegend), F4/80 (clone BM8, BioLegend and eBiosciences), CD11b (clone M1/70, BioLegend and eBiosciences), CD206 (clone C068C2; BioLegend), CD301 (clone ER-MP23, BioRad) and PD-L2 (clone TY25, eBiosciences). For intracellular staining of Relmα (PeproTech), Ki67 (eBiosciences), NOS2 (clone C-11, Santa Cruz Biotechnology), TNF (clone MP6-XT22, BioLegend) or p-PERK(T980) (Bioss), cells were fixed with BD Cytofix/Cytoperm buffer (BD Biosciences) and stained with the appropriate primary antibody, followed by incubation with an appropriate fluorochrome-conjugated anti-rabbit IgG (Jackson ImmunoResearch) or anti-mouse IgG (clone Poly4053, BioLegend). For IFN-γ staining, cells were cultured in complete RPMI medium containing phorbol 12-myristate 13-acetate (50 ng ml⁻¹; Sigma-Aldrich), ionomycin (750 ng ml⁻¹; Sigma-Aldrich) and GolgiStop (1,000×; BD Biosciences) for 4 h at 37 °C. After surface staining, the cells were fixed using BD Cytofix/Cytoperm buffer and stained with an IFN-γ antibody (clone XMG1.2; BioLegend). Lipid uptake was assessed by staining with 1 μM BODIPY FL C16 (Invitrogen) and measured by flow cytometry. Intracellular neutral lipids were stained with 500 ng ml⁻¹ of BODIPY 493/503 (Invitrogen) and measured by flow cytometry. Mitochondrial mass and membrane potential were stained with 50 nM MitoTracker Green (Invitrogen) and 50 nM MitoTracker Orange (Invitrogen), respectively, and measured by flow cytometry.

Retroviral transduction

Retroviral transduction of macrophages was accomplished using protocols that we have used previously 24. Sequences for luciferase, Phgdh, Psat1 and Atf4 shRNAs were obtained from Open Biosystems and cloned into the MSCV-LTRmir30-PI8 retroviral vector, encoding human CD8 (huCD8) as a reporter. For overexpression, the Atf4 sequence was cloned into the MSCV–IRES retroviral vector, encoding huCD8 as a reporter. Day 3 BMDM cultures were spin-infected with retrovirus. At day 7 of culture, macrophages were harvested and transduced cells were identified by huCD8 expression.
Cell fractionation and immunoblot analysis

Cells were lysed in radioimmunoprecipitation assay buffer (Thermo Fisher Scientific) with protease and phosphatase inhibitors (Cell Signaling Technologies). Anti-PERK (C33E10), anti-PHB1, anti-XBP1s (E9V3E), anti-trimethyl-histone H3 Lys27 (C36B11) and anti-histone H3 (D1H2) were all purchased from Cell Signaling Technologies. The polyclonal antibody p-PERK(T980) was purchased from Bioss (catalog no. BS-3330R). Anti-JMJD3 was purchased from Abcepta. Anti-PHGDH and anti-PSAT1 antibodies were purchased from Protein Tech. The Total OXPHOS rodent antibody cocktail was purchased from Abcam. Anti-ATF-4 (C-20) was purchased from Santa Cruz Biotechnology. Anti-β-actin was purchased from Sigma-Aldrich. Primary antibody staining was followed by peroxidase-linked secondary antibody and ECL immunoblot detection (BioRad). Immunoblots for p-PERK(T980) and PERK were performed using a Phos-tag-based acrylamide gel (FUJIFILM Wako Chemicals).

RNA extraction and RT-qPCR

RNA was extracted using TRIzol reagent (Life Technologies). Complementary DNA was generated using the PrimeScript RT Reagent Kit with gDNA Eraser (Takara Bio) according to the manufacturer's instructions. The TaqMan or SYBR green method was used for RT-qPCR with primers from Applied Biosystems or IDT. The assay was performed on a BioRad CFX96 machine. Relative expression was normalized to β-actin in each sample. The following primer sequences were used. For SYBR green: β-actin (forward, 5′-TCCATCATGAAGTGTGACGT-3′; reverse, 5′-TACTCCTGCTTGCTGATCCAC-3′), Atf4 (forward, 5′-GCAAGGAGGATGCCTTTTC-3′; reverse, 5′-GTTTCCAGGTCATCCATTCG-3′), Phgdh (forward, 5′-TGGCCTCGGCAGAATTGGAAG-3′; reverse, 5′-TGTCATTCAGCAAGCCTGTGGT-3′) and Psat1 (forward, 5′-GATGAACATCCCATTTCGCATTGG-3′; reverse, 5′-GCGTTATACAGAGAGGCACGAATG-3′). For TaqMan: β-actin (mm00607939_s1), Arg1 (mm00475988_m1), Cd36 (mm00432399_m1), Chil3 (mm00657889_mH), Lipa (mm00498820_m1), Mrc1 (mm00485148_m1), Pparg (mm01184322_m1), Ppargc1b (mm00504723_m1) and Irf4 (mm00516431_m1).

Metabolic analysis

For real-time analysis of ECARs and OCRs, macrophages were analyzed using an XFe96 Extracellular Flux Analyzer (Agilent). Three or more consecutive measurements were taken under basal conditions, followed by the sequential addition of 1 μM oligomycin (to inhibit the mitochondrial ATP synthase), 3 μM fluorocarbonyl cyanide phenylhydrazone (FCCP; a protonophore that uncouples ATP synthesis from oxygen consumption by the ETC) and 100 nM rotenone plus 1 μM antimycin A (to inhibit the ETC; all drugs for this assay were purchased from Sigma-Aldrich). In this assay, basal oxygen consumption is established by measuring the OCR in the absence of drugs (a quantification sketch follows this section). ATP production was measured using the ATP Determination Kit (Invitrogen). Glutamine consumption was measured using the Glutamine Assay Kit (Abnova). Serine and α-KG levels were measured using the dl-Serine Assay Kit or the Alpha Ketoglutarate Assay Kit following the manufacturer's instructions (Abcam). Metabolite levels of cholesterol, 3-phosphoglycerate, serine and glycine in stimulated macrophages were measured by Metabolon.
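A minimal sketch of how basal rates can be pulled from such a trace. The paper defines basal OCR simply as the drug-free rate; subtracting the post-rotenone/antimycin A (non-mitochondrial) rate is a common refinement and is an assumption here, as are the helper name and the sample numbers.

```python
from statistics import mean
from typing import Optional, Sequence

def basal_rate(baseline: Sequence[float],
               rot_aa: Optional[Sequence[float]] = None) -> float:
    """Basal OCR (or ECAR) from a Seahorse trace.

    baseline: the three or more readings taken before any injection.
    rot_aa:   optional readings after rotenone + antimycin A; if given,
              this non-mitochondrial rate is subtracted.
    """
    rate = mean(baseline)
    if rot_aa is not None:
        rate -= mean(rot_aa)
    return rate

# Illustrative OCR values (pmol O2 per min), not experimental data:
print(basal_rate([185.0, 190.0, 188.0], rot_aa=[32.0, 30.0, 31.0]))  # ~156.7
```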
Calcium flux analysis

Cells were washed with calcium flux buffer (1× Hanks' balanced salt solution, 0.1% bovine serum albumin, 25 mM Hepes, pH 7.4, and 2.5 mM probenecid). Cells were incubated with 1 μM Rhod-2 (Thermo Fisher Scientific) in calcium flux buffer in the dark for 30 min at 37 °C. After incubation, cells were washed with calcium flux buffer and incubated for an additional 5 min in calcium flux buffer without Rhod-2 at 37 °C. Rhod-2 fluorescence was measured using a BioTek Gen5 plate reader. After 5 min of baseline reading, Rhod-2-loaded cells were stimulated with 10 μM ionomycin. Calcium flux measurements were taken every 30 s until a plateau was reached (approximately 20 min).

RNA-seq and bioinformatics analysis

The mRNA was extracted from lysates of cells that had been stimulated for 24 h. Random primers and the reverse transcriptase of the TruSeq Stranded kit were used to synthesize cDNA, and cDNA library sequencing was performed using an Illumina HiSeq. The differential expression test, GSEA and visualization were performed using R (v.5.2.0), ggplot2 (v.3.2.1), edgeR (v.3.32.1), pheatmap (v.1.0.12), clusterProfiler (v.3.10) and MSigDB (v.7.0).

Targeted metabolomics analysis

Sample preparation. Cell culture was extracted by the addition of 1 ml of MeOH:H₂O (4:1) 60,61. This solution containing scraped lysed cells was further homogenized in a Cryolys Precellys 24-sample homogenizer (2× 20 s at 10,000 r.p.m.; Bertin Technologies) with ceramic beads. Homogenized extracts were centrifuged for 15 min at 4,000g at 4 °C, and the resulting supernatant was collected and evaporated to dryness in a vacuum concentrator (LabConco). Dried sample extracts were resuspended in MeOH:H₂O (4:1, v:v) before liquid chromatography (LC)–tandem MS (MS/MS) analysis according to the total protein content, using 75 µl as the minimal reconstitution volume, corresponding to the sample with the lowest protein content.

LC–MS/MS. Cell lysates were analyzed by hydrophilic interaction liquid chromatography coupled to tandem mass spectrometry (HILIC–MS/MS) in both positive and negative ionization modes using a 6495 triple quadrupole system (QqQ) interfaced with a 1290 UHPLC system (Agilent Technologies) 62. In positive mode, the chromatographic separation was carried out on an Acquity BEH Amide column (1.7 μm, 100 mm, 2.1-mm inner diameter). The mobile phase was composed of A (20 mM ammonium formate and 0.1% formic acid in water) and B (0.1% formic acid in acetonitrile (ACN)). A linear gradient elution from 95% B (0–1.5 min) down to 45% B was applied (1.5−17 min), and these conditions were held for 2 min, followed by 5 min of column re-equilibration at the initial gradient conditions. The flow rate was 400 μl min⁻¹, the column temperature 25 °C and the sample injection volume 2 µl. In negative mode, a SeQuant ZIC-pHILIC column (100 mm, 2.1-mm inner diameter and 5-μm particle size; Merck) was used. The mobile phase was composed of A (20 mM ammonium acetate and 20 mM NH₄OH in water at pH 9.7) and B (100% ACN). A linear gradient elution from 90% B (0–1.5 min) to 50% B (8–11 min), down to 45% B (12–15 min), was applied, followed by a 9-min post-run for column re-equilibration. The flow rate was 300 μl min⁻¹, the column temperature 30 °C and the sample injection volume 2 µl. For both analyses, the electrospray ionization source conditions were set as follows: dry gas temperature 290 °C, nebulizer 35 p.s.i. (241.317 kPa) and flow 14 l min⁻¹, sheath gas temperature 350 °C and flow 12 l min⁻¹, nozzle voltage 0 V and capillary voltage ±2,000 V. Data were acquired in dynamic multiple reaction monitoring (MRM) mode with a total cycle time of 600 ms.
Pooled quality control (QC) samples (representative of the entire sample set) were analyzed periodically throughout the overall analytical run to assess the quality of the data, correct the signal-intensity drift and remove peaks with poor reproducibility (coefficient of variation (CV) > 30%) 63. In addition, a series of diluted QCs was prepared by dilution with methanol (100% QC, 50% QC, 25% QC, 12.5% QC and 6.25% QC) and analyzed at the beginning and end of the sample batch. This QC dilution series served as a linearity filter to remove features that do not respond linearly or whose correlation with the dilution factor is <0.65 (ref. 64).

Data processing and statistical analysis. Raw LC–MS/MS data were processed using the Agilent Quantitative Analysis software (v.B.07.00, MassHunter, Agilent Technologies). Relative quantification of metabolites was based on extracted ion chromatogram areas for the monitored MRM transitions. Data quality assessment was done in R. Signal-intensity drift correction was done with the LOWESS/spline algorithm 65, followed by filtering of 'not-well-behaving' peaks (CV (QC peaks) > 30% and R² (QC dilution curve) < 0.75); a sketch of these feature-level filters follows this section.
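A minimal numpy sketch of the two feature-level filters just described (the 30% CV reproducibility cutoff and the dilution-series linearity check). The array shapes, helper name, and keyword defaults are our assumptions, not the authors' code.

```python
import numpy as np

def qc_feature_mask(qc_areas, dilution_factors, dilution_areas,
                    cv_cutoff=0.30, r_cutoff=0.65):
    """Boolean mask of metabolite features passing both QC filters.

    qc_areas:         (n_qc_injections, n_features) pooled-QC peak areas
    dilution_factors: e.g. [1.0, 0.5, 0.25, 0.125, 0.0625]
    dilution_areas:   (n_dilutions, n_features) peak areas of diluted QCs
    """
    qc = np.asarray(qc_areas, dtype=float)
    dil = np.asarray(dilution_areas, dtype=float)
    d = np.asarray(dilution_factors, dtype=float)

    cv = qc.std(axis=0, ddof=1) / qc.mean(axis=0)   # reproducibility filter
    r = np.array([np.corrcoef(d, dil[:, j])[0, 1]   # linearity filter
                  for j in range(dil.shape[1])])
    return (cv <= cv_cutoff) & (r >= r_cutoff)
```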
TEM

For TEM analysis, cells were seeded onto 6-well plates with 12-mm-diameter inserts (Corning Snapwell inserts) at a density of 1–2 × 10⁵ cells per well. Cells were stimulated as described previously. After 24 h, the membrane filters with their attached cells were immersed in fixative. The initial fixative was 2.5% glutaraldehyde in cacodylate buffer, pH 7.3. The specimen was postfixed in ferrocyanide-reduced 1% osmium tetroxide. After a soak in acidified uranyl acetate, the specimen was dehydrated in ethanol, passed through propylene oxide and embedded in Embed-812 (Electron Microscopy Sciences). Thin sections (70 nm) were cut on an RMC MT6000-XL ultramicrotome. Sections were cut in a horizontal plane parallel to that of the membrane to provide panoramic views of the cells. These were mounted on Gilder square 300-mesh nickel grids (Electron Microscopy Sciences) and then sequentially stained with acidified methanolic uranyl acetate and stable lead-staining solution. Grids were coated on a Denton DV-401 carbon coater (Denton Vacuum LLC) and examined in an FEI Tecnai Spirit (T12) with a Gatan US4000 4k × 4k CCD.

ChIP-seq

Five million cells were fixed with 1% formaldehyde in medium at 1 × 10⁶ cells ml⁻¹ for 10 min at room temperature with constant agitation. Fixation was stopped and quenched with 125 mM glycine for 5 min on ice. After one wash with PBS, cell pellets were snap-frozen in liquid nitrogen and stored at −80 °C until nuclei preparation. Nuclei were isolated using lysis buffer (50 mM Hepes, pH 7.5, 140 mM NaCl, 1 mM EDTA, 10% glycerol, 0.5% NP40 and 0.25% Triton X-100) and were washed once with wash buffer (10 mM Tris-HCl, pH 8.0, 200 mM NaCl, 1 mM EDTA and 0.5 mM (ethylenebis(oxonitrilo))tetra-acetate). The pellets were resuspended in shearing buffer (10 mM Tris-HCl, pH 8.0, 1 mM EDTA and 0.1% sodium dodecylsulfate (SDS)) and the chromatin was sheared using a Bioruptor Pico (Diagenode) for 14 cycles (30 s on, 30 s off) at 4 °C. Then, 2.5 µg of chromatin was diluted to a final volume of 300 µl of RIPA buffer (50 mM Tris-HCl, pH 8.0, 150 mM NaCl, 1 mM EDTA, 1% NP40, 0.1% SDS and 0.5% sodium deoxycholate) and incubated with 2.5 µg of rabbit monoclonal anti-H3K27me3 antibody (Cell Signaling Technologies, clone C36B11, lot 19) overnight at 4 °C with constant rotation. For immunoprecipitation, the samples were incubated with 10 µl of precleared Protein A Magnetic Beads (Thermo Fisher Scientific) for 2 h at 4 °C. The beads were washed twice with RIPA buffer, once with high-salt buffer (50 mM Tris-HCl, pH 8.0, 500 mM NaCl, 1 mM EDTA, 1% NP40 and 0.1% SDS), once with LiCl buffer (50 mM Tris-HCl, pH 8.0, 250 mM LiCl, 1 mM EDTA, 1% NP40 and 1% sodium deoxycholate) and finally once with TE buffer (10 mM Tris-HCl, pH 8.0 and 1 mM EDTA). All washes were incubated for 5 min at 4 °C with constant rotation. After the last wash, beads were resuspended in 200 μl of elution buffer (100 mM NaHCO₃ and 1% SDS) with 0.5 mg ml⁻¹ of RNase A (QIAGEN) and incubated at 65 °C on a thermoshaker for 20 min at 1,200 r.p.m. For decrosslinking, proteinase K (0.5 mg ml⁻¹; Ambion) and NaCl (200 mM) were added to the eluted samples and incubated in a thermocycler at 65 °C overnight. The samples were purified using a Zymo PCR purification kit. Sequencing libraries were prepared using NEB Ultra II kits according to the standard protocol (New England Biolabs). Samples were sequenced on an Illumina NovaSeq 6000 (50-bp paired-end reads) with an SP flow cell (Nationwide Children's Hospital, Columbus, OH, USA).

Bioinformatics analysis of ChIP-seq

Paired-end reads were first analyzed using FastQC and then mapped to the mm10 mouse reference genome GRCm38 (December 2011) using Bowtie 2 (v.2.4.2). Samtools was used to remove unaligned and duplicated reads. Peaks were called using HOMER, and reads overlapping the mm10 blacklisted regions (ENCODE 2016) were removed. Bigwig files were generated using Deeptools (v.3.5.1) with the command bamCoverage (--normalizeUsing RPKM). The count matrix was generated using DiffBind (v.2.0.2) (PMID 22217937) dba.count with normalization DBA_SCORE_TMM_MINUS_EFFECTIVE. Differential H3K27me3-enriched regions between conditions were determined using edgeR in DiffBind with a cut-off of P ≤ 0.01. Heatmaps were generated using Deeptools.

Data analysis and statistics

Data were analyzed using GraphPad Prism (v.9). Comparisons for three or more groups were calculated using one-way analysis of variance (ANOVA) and, where indicated, unpaired or paired, two-tailed Student's t-tests (an equivalent open-source sketch is given at the end of this section). Differences were considered significant when P values were <0.05. Pilot in vivo studies were used for estimation of the sample size required to ensure adequate power. No statistical methods were used to predetermine sample size, but our sample sizes are similar to those reported in previous publications 66. Data distribution was assumed to be normal, but this was not formally tested. Age- and sex-matched animals were randomly assigned to experimental conditions. Data collection and analysis were not performed blind to the conditions of the experiments. No data were excluded.

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

RNA-seq and ChIP-seq results are available in the Gene Expression Omnibus database under accession nos. GSE165836 and GSE183287, respectively. Other data are available on request from the corresponding author. Source data are provided with this paper.
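The statistics above were run in GraphPad Prism; the following SciPy sketch reproduces the same comparisons in open-source form. scipy.stats.dunnett requires SciPy >= 1.11, and the numbers are illustrative only.

```python
from scipy import stats

# Illustrative relative-expression values, not experimental data:
control = [1.00, 0.92, 1.08, 0.97]
treat_1 = [0.55, 0.61, 0.48, 0.58]
treat_2 = [0.80, 0.74, 0.86, 0.79]

# Two groups: two-tailed, unpaired Student's t-test (the default);
# stats.ttest_rel would give the paired version used elsewhere above.
t_stat, p_two = stats.ttest_ind(control, treat_1)

# Three or more groups: one-way ANOVA, then Dunnett's multiple
# comparisons of each treatment against the control group.
f_stat, p_anova = stats.f_oneway(control, treat_1, treat_2)
dunnett_res = stats.dunnett(treat_1, treat_2, control=control)

print(p_two, p_anova, dunnett_res.pvalue)
```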
Lydia N. Raines et al., PERK is a critical metabolic hub for immunosuppressive function in macrophages, Nature Immunology (2022). DOI: 10.1038/s41590-022-01145-x

Researchers at Case Western Reserve University School of Medicine have made a breakthrough in cancer treatment by discovering that manipulating immune cells called macrophages can suppress the growth of solid tumors in research models. By altering the metabolism of macrophages and influencing their relationship with T cells, the team was able to significantly reduce the size of tumors in some mouse models. The study found that a protein called PERK, which is involved in metabolic signaling in macrophages, plays a key role in promoting tumor growth. By targeting PERK, the researchers were able to block tumor growth and even combine its inhibition with other treatments to achieve significant reductions in tumor size. The team hopes to identify a clinical drug that can inhibit PERK and is working toward developing new therapeutic treatment options for solid tumor cancers, which account for almost half of all new cancer cases in the United States.
Cancer researchers at Case Western Reserve University School of Medicine say they have successfully suppressed the growth of some solid tumors in research models by manipulating immune cells known as macrophages.

The researchers say this discovery is significant because many solid tumor cancers, such as lung cancer, are difficult to treat. According to the National Cancer Institute, breast, lung, prostate and colorectal cancers—all of which are solid tumor cancers—account for almost half of all new cancer cases in the United States.

In this new research, the scientists discovered that altering the macrophage metabolism—and, in doing so, influencing their relationship with T cells—suppressed the tumor's growth. The result was a significant reduction in overall tumor size in some mouse models.

"The race to find a cure for cancer never stops," said Stanley Huang, an assistant professor of immunology in the Department of Pathology at the School of Medicine, who led the research. "Our research creates a pathway to a [potential] new form of cancer treatment for those with solid tumor cancers."

The study appeared recently in the journal Nature Immunology.

T cells and macrophages

Generally, the body's immune response to disease involves mobilizing white blood cells that attack invaders like germs and bacteria. Macrophages are specialized white blood cells that consume invading cells to destroy pathogens. They are considered the "frontline soldiers" of the body's immune system and can activate T cells, another type of white blood cell. Yet, despite their typically protective role, macrophages can be co-opted by tumor cells to encourage tumor growth.

Targeting macrophages and the PERK protein

As tumors grow and macrophages interact with the tumor cells, the macrophages produce a response protein, which the study linked to tumor growth. Huang said the team believed it was possible to target macrophages and that particular protein—known to scientists by its shorthand, PERK ("protein kinase R" (PKR)-like endoplasmic reticulum kinase)—to block tumor growth.

"Knocking out PERK suppresses downstream metabolic signaling in tumor macrophages, resulting in more T cells to fight the cancer cells," said Huang.

Findings and future steps

The study's findings suggest that the PERK protein is involved in several key pathways of metabolism in macrophages—and when the gene is removed, macrophages can no longer promote tumor growth, meaning tumors become smaller. Follow-up experiments further revealed that combination treatment with a PERK inhibitor drug and an inhibitor called "anti-PD-1" could significantly reduce tumor growth.

Next, the researchers hope to identify a clinical drug that will act as an inhibitor of the PERK protein.

"There are several strategies to enhance anti-tumor immunity, like targeting or editing cell metabolism," Huang said. "We can target genes and their pathways to enhance immune function and work toward future therapeutic treatment options."
First-of-its-kind experimental evidence defies conventional theories about how plasmas emit or absorb radiation (DOI: 10.1038/s41467-022-34618-6)

Most people are familiar with solids, liquids, and gases as three states of matter. However, a fourth state of matter, called plasma, is the most abundant form of matter in the universe, found throughout our solar system in the sun and other planetary bodies. Because dense plasma—a hot soup of atoms with free-moving electrons and ions—typically only forms under extreme pressure and temperatures, scientists are still working to comprehend the fundamentals of this state of matter.

Understanding how atoms react under extreme pressure conditions—a field known as high-energy-density physics (HEDP)—gives scientists valuable insights into the fields of planetary science, astrophysics, and fusion energy. One important question in the field of HEDP is how plasmas emit or absorb radiation. Current models depicting radiation transport in dense plasmas are heavily based on theory rather than experimental evidence.

In a new paper published in Nature Communications, researchers at the University of Rochester Laboratory for Laser Energetics (LLE) used LLE's OMEGA laser to study how radiation travels through dense plasma. The research, led by Suxing Hu, a distinguished scientist and group leader of the High-Energy-Density Physics Theory Group at the LLE and an associate professor of mechanical engineering, and Philip Nilson, a senior scientist in the LLE's Laser-Plasma Interaction group, provides first-of-its-kind experimental data about the behavior of atoms at extreme conditions. The data will be used to improve plasma models, which allow scientists to better understand the evolution of stars and may aid in the realization of controlled nuclear fusion as an alternative energy source.

"Experiments using laser-driven implosions on OMEGA have created extreme matter at pressures several billion times the atmospheric pressure at Earth's surface for us to probe how atoms and molecules behave at such extreme conditions," Hu says. "These conditions correspond to the conditions inside the so-called envelope of white dwarf stars as well as inertial fusion targets."

[Image: A NASA image of plasma bursting from the sun. Plasma—a hot soup of atoms with free-moving electrons and ions—is the most abundant form of matter in the universe, found throughout our solar system in the sun and other planetary bodies. A new study from University of Rochester researchers provides experimental data about how radiation travels through dense plasmas, which will help scientists better understand planetary science and fusion energy. Credit: NASA]

Using X-ray spectroscopy

The researchers used X-ray spectroscopy to measure how radiation is transported through plasmas. X-ray spectroscopy involves aiming a beam of radiation in the form of X-rays at a plasma made of atoms—in this case, copper atoms—under extreme pressure and heat. The researchers used the OMEGA laser both to create the plasma and to create the X-rays aimed at the plasma.

When the plasma is bombarded with X-rays, electrons in the atoms "jump" from one energy level to another by either emitting or absorbing photons of light. A detector measures these changes, revealing the physical processes occurring inside the plasma, similar to taking an X-ray diagnostic of a broken bone.
A break from conventional theory

The researchers' experimental measurements indicate that, when radiation travels through a dense plasma, the changes in atomic energy levels do not follow conventional theories currently used in plasma physics models—so-called "continuum-lowering" models. The researchers instead found that the measurements observed in their experiments can only be explained using a self-consistent approach based on density-functional theory (DFT). DFT offers a quantum mechanical description of the bonds between atoms and molecules in complex systems. The DFT method was first described in the 1960s and was the subject of the 1998 Nobel Prize in Chemistry.

"This work reveals fundamental steps for rewriting current textbook descriptions of how radiation generation and transport occurs in dense plasmas," Hu says. "According to our experiments, using a self-consistent DFT approach more accurately describes the transport of radiation in a dense plasma."

Says Nilson, "Our approach could provide a reliable way for simulating radiation generation and transport in dense plasmas encountered in stars and inertial fusion targets. The experimental scheme reported here, based on a laser-driven implosion, can be readily extended to a wide range of materials, opening the way for far-reaching investigations of extreme atomic physics at tremendous pressures."

In summary, researchers at the University of Rochester Laboratory for Laser Energetics conducted an experiment to study how radiation travels through dense plasmas. Using the OMEGA laser, they created a plasma of copper atoms under extreme pressure and heat and then bombarded it with X-rays to measure how the radiation was transported. The results showed that the changes in atomic energy levels did not follow conventional theories and instead could only be explained using a self-consistent approach based on density-functional theory. This new understanding of radiation transport in dense plasmas has important implications for planetary science, astrophysics, and fusion energy, and could aid in the realization of controlled nuclear fusion as an alternative energy source.

Abstract

Spectroscopic measurements of dense plasmas at billions of atmospheres provide tests of our fundamental understanding of how matter behaves at extreme conditions. Developing reliable atomic physics models at these conditions, benchmarked by experimental data, is crucial to an improved understanding of radiation transport in both stars and inertial fusion targets. However, detailed spectroscopic measurements at these conditions are rare, and traditional collisional-radiative equilibrium models, based on isolated-atom calculations and ad hoc continuum-lowering models, have proved questionable at and beyond solid density. Here we report time-integrated and time-resolved x-ray spectroscopy measurements at several billion atmospheres using laser-driven implosions of Cu-doped targets. We use the imploding shell and its hot core at stagnation to probe the spectral changes of a Cu-doped witness layer. These measurements indicate the necessity and viability of modeling dense plasmas with self-consistent methods like density-functional theory, which impact the accuracy of radiation transport simulations used to describe stellar evolution and the design of inertial fusion targets.
Introduction

The physics of warm and hot dense matter can unravel the mysterious inner workings of planetary cores and stellar interiors 1. These conditions span a large range of densities and temperatures (ρ = 10⁰–10⁶ g cm⁻³ and T = 10³–10⁷ K), with pressures varying from ~1 Mbar (one million times Earth's atmospheric pressure; 1 Mbar = 10¹¹ Pa) to ~500 Gbar (1 gigabar = 10¹⁴ Pa). Understanding the physics of matter at such ultrahigh pressures can have many applications, including determining the age of the Universe through white dwarf cosmochronometry 2, interpreting astrophysical observations 3,4,5, and designing high-performance inertial fusion targets 6,7,8. Thanks to technological advances in high-power lasers (including x-ray free-electron lasers) and pulsed-power machines, this extreme state of matter can now be accessed in the laboratory 9,10,11, but only for a short period of time (picosecond to microsecond timescales), depending on the driver and experimental geometry. Nonetheless, these techniques provide a unique "window" for interrogating the physics of matter at extreme conditions. The implosion spectroscopy measurements and model development presented in this work aim to reveal a more-detailed picture of atomic physics in dense-plasma environments at billion-atmosphere (Gbar) pressures.

Spherically convergent techniques uniquely access the gigabar-pressure regime in experiments, providing the necessary data to test atomic physics models for warm and hot dense plasmas. X-ray spectroscopy, a common and sometimes the only means to diagnose and understand short-lived plasmas, measures x-ray emission and absorption with spatial, spectral, and/or temporal resolution 12,13,14,15,16. Observing atomic line positions and spectral widths can reveal the physical processes that are occurring inside the system. Reliable atomic and plasma physics models are required to interpret these spectral signatures and have generally proven to be adequate for spectroscopically diagnosing classical/ideal plasmas 17,18,19,20. In this regime, collisional-radiative equilibrium (CRE) models 21,22 are successfully used, which combine accurate atomic data from isolated-atom calculations with appropriate continuum-lowering models to describe dilute-plasma effects (e.g., ionization, screening, and broadening). This approach can provide guidance, for example, on the inference of plasma density and temperature 17,18,19,20.

However, with increasing energy density, experimental measurements over the last decade have revealed potential inconsistencies with traditional CRE treatments. For instance, experimental measurements 23,24 of the K-edge shift of solid-density aluminum plasmas (heated by x-ray free-electron lasers) favored the continuum-lowering model developed by Ecker and Kroll 25, while shock-compression experiments 26 on the same material gave better agreement with a different continuum-lowering model by Stewart and Pyatt 27. In addition, iron opacity measurements 28 at pressures below 1 Mbar showed very good agreement with traditional CRE-type opacity calculations, while significant disagreements 29,30 were found between measurements and theory at elevated densities and temperatures (for example, at around 10 Mbar for iron plasmas). It remains an "unsolved mystery" to this day, even though much effort has been applied to this open question from both theoretical and experimental perspectives 30,31,32.
Today, one can accurately compute the electronic energy levels of an isolated atom by solving the many-body Schrödinger or Dirac equations, for which the calculation precision can be improved systematically by varying the sophistication of the methods that are implemented, from the simplest Hartree–Fock method to advanced multi-configuration interactions. However, when atoms are put into a non-ideal (i.e., strongly coupled and/or degenerate) plasma environment, significant discrepancies appear between detailed spectroscopic measurements and calculations. One outstanding example is the inconsistency of hydrogen line broadening in the dilute but cold (n_e = 10¹⁵–10¹⁸ cm⁻³ and T = 10³–10⁵ K) photospheric plasmas of white dwarfs 33, in which plasma conditions inferred from the broadening of different lines in the same plasma can vary significantly, even amongst the best atomic physics models that are currently available. These variations can have significant implications for deducing the mass and age of white dwarfs by affecting the standard candle for cosmochronometry 2. A similar situation occurs in warm dense plasmas under high-energy-density (HED) conditions, in which high-density effects (many-body coupling) and quantum electron degeneracy can drastically alter atomic physics relative to the isolated case. Reconciling how atomic physics changes in such non-ideal plasmas demands progress in both experiment and theory, which must account for the plasma environment self-consistently.

Over the last few years, high-resolution absorption and fluorescence spectra have been used in magnetically driven inertial fusion (cylindrical liner) experiments to study the electronic structure of warm dense matter under extreme compression 16,34. These studies have shown that a self-consistent-field model based on density-functional theory (DFT) could reproduce K-edge and fluorescence line shifts at independently diagnosed, imploded plasma conditions (10 eV and n_e = 10²⁴ cm⁻³), but collisional-radiative models with ad hoc density effects could not reproduce the measured x-ray spectra 34. A purely compressional experiment without thermal or ionization effects measured density-induced shifts in the Kβ line of cobalt at 8 Mbar, in good agreement with a self-consistent DFT model, and found significant differences among the predictions of several CRE models 35. It is also noted that DFT-based modeling has been successfully applied to x-ray near-edge absorption spectroscopy (XANES) for warm dense matter 36,37,38. These earlier XANES experiments showed absorption features in good agreement with DFT calculations 36,37,38. Extension of these studies to gigabar pressures is very important because of their relevance to fundamental dense-plasma theory, inertial fusion energy, and laboratory astrophysics.

Here, we report x-ray spectroscopy measurements at gigabar pressures using laser-driven implosions. These measurements are used to test a DFT-based multi-band kinetic model (VERITAS), which is developed in this work. The VERITAS model uses DFT-derived band (atomic level) information to compute the radiative transition rates that can be coupled to the radiation-transfer equation to describe the radiation generation and transport processes in a dense plasma. With Cu (as a witness element) doped inside a 30-μm-thick plastic shell implosion, we performed time-integrated and time-resolved Cu Kα-emission (the 2p → 1s transition) and 1s–2p absorption measurements during shell stagnation.
Both of these inverse processes are observed in the same experiment: photoionization of 1s electrons enables Kα emission, and thermal ionization of 2p electrons enables 1s–2p absorption. These observations are directly connected to the time-dependent atomic ionization balance in the assembled dense plasma. The system is further constrained by integrated measurements of the compressed areal density (ρR), neutron yield and bang time, and ion temperature, allowing the spectroscopic data to differentiate the DFT-based kinetic model from traditional treatments based on isolated-atom calculations and ad hoc continuum-lowering models.

The paper is organized as follows: first, the necessity of a reliable atomic physics model for interpreting x-ray spectroscopic measurements is demonstrated using a surrogate dense-plasma object. The experimental results are then presented with a detailed spectral comparison between measurements and simulations based on traditional atomic physics models and the DFT-based approach that is developed in this work. Finally, the implications of these results for understanding dense-plasma environments are discussed.

Results

Surrogate dense-plasma object

To illustrate why a reliable atomic physics model is required a priori for interpreting dense-plasma spectroscopy measurements, we construct a surrogate dense-plasma object in spherical geometry and compute synthetic x-ray spectra based on different atomic physics treatments. The surrogate plasma object consists of a 20-μm-radius core of 1%-Ar-doped deuterium plasma, with a given mass density of ρ = 10 g cm⁻³ and temperature of kT = 1000 eV, surrounded by four concentric homogeneous shells of CH or Cu-doped CH with the densities, temperatures, and thicknesses shown in Fig. 1a. The Cu-doped CH plasma serves as a "witness" layer (denoted CHCu[2%]), which has a 2% atomic fraction of Cu uniformly mixed into the CH plasma.

Fig. 1: Illustration of the spectroscopic differences predicted for warm/hot dense plasmas by different atomic physics models. (a) Schematic of a surrogate dense-plasma object consisting of a Cu-doped CH plasma layer for spectroscopy. (b) The predicted Kα emission signal from the doped Cu layer at mass density ρ = 20 g cm⁻³ and kT = 200 eV, by three different models: VERITAS (blue solid), and collisional-radiative equilibrium (CRE) models with the continuum-lowering models of Stewart–Pyatt (green long dash) and Ecker–Kroll (black dash-dotted). (c) The predicted 1s–2p absorption feature from the doped Cu layer at mass density ρ = 20 g cm⁻³ and temperature kT = 300 eV.

Synthetic spectra, calculated with three different atomic physics models, are shown in Fig. 1b, c for the same CHCu[2%] density of ρ = 20 g cm⁻³ but different temperatures (kT = 200 eV and kT = 300 eV, respectively). The traditional CRE simulations used an atomic database (ATBASE), implemented in the Spect3D software package, based on a kinetic description of the atomic level populations, by which levels are populated and depopulated by radiative and collisional processes, coupled to nonlocal radiation transport. As discussed above, these CRE models need to invoke continuum-lowering models to "destroy" bound levels and account for plasma effects (pressure ionization and lowering of ionization thresholds). The remaining results in Fig. 1b, c come from VERITAS, a new DFT-based multi-band kinetic model for dense-plasma spectroscopy.
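To make the construction concrete, the radial layering of such a surrogate object can be captured in a few lines of configuration code. The sketch below is illustrative only: the `Layer` container and layer names are our own, the witness-layer values are taken from the two cases discussed in the text (ρ = 20 g cm⁻³ at kT = 200 or 300 eV), and the witness-layer thickness, which in the paper is read off Fig. 1a, is a placeholder.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str            # material label
    thickness_um: float  # radial extent in micrometers
    rho_g_cc: float      # mass density in g/cm^3
    kT_eV: float         # temperature in eV

# Core values are given in the text; the witness-layer values correspond to
# the two cases of Fig. 1b (kT = 200 eV) and Fig. 1c (kT = 300 eV). The
# remaining CH shells of Fig. 1a are omitted here.
surrogate = [
    Layer("D2 + 1% Ar core", 20.0, 10.0, 1000.0),  # 20-um-radius hot core
    Layer("CHCu[2%] witness", 5.0, 20.0, 200.0),   # thickness is a placeholder
]

for layer in surrogate:
    print(f"{layer.name}: rho = {layer.rho_g_cc} g/cc, kT = {layer.kT_eV} eV")
```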
The VERITAS code

The details of VERITAS can be found in Methods; here we briefly describe its essential components: (1) the electronic structure of the dense Cu-doped CH plasma is determined self-consistently by DFT through quantum molecular-dynamics simulations using all-electron potentials, for a given density and temperature grid; (2) certain electronic bands, such as 1s, 2p, 3p, and the continuum, are chosen to be included in the model, and the oscillator strengths among these bands are calculated for the considered radiative dipole transitions; and (3) a kinetic equation invoking the DFT-determined transition rates is used to describe the population change in these energy bands due to radiative transitions, which is coupled to the nonlocal radiation-transport equation to ensure that the local radiation field is consistent with the band populations.

In contrast to a traditional CRE treatment, our DFT-based kinetic code, VERITAS, explicitly accounts for the microphysical interactions among ions and the dense-plasma environment. Energy-band shifting and ionization balance are described self-consistently, without invoking an ad hoc continuum-lowering model. This model development builds on the preliminary success of treating warm dense plasmas as quantum many-body systems [39,40,41,42,43,44] with mean-field DFT.

The VERITAS predictions for the surrogate dense-plasma object prescribed by Fig. 1a are indicated by the blue solid curves in Fig. 1b, c. For the case of kT = 200 eV, Fig. 1b shows that when the hot-spot radiation streams out through the CHCu[2%] layer, high-energy photons excite or ionize the 1s core electrons of Cu, leading to Kα emission (due to the 2p → 1s transition). As the temperature of the Cu-doped layer increases to kT = 300 eV, the spectroscopic feature changes from Kα emission to dominant 1s–2p absorption, as shown in Fig. 1c. This change is caused by the appreciable depletion of the Cu 2p population at the higher temperature.

Compared to the DFT-based VERITAS model, the two CRE models, marked "ATBASE + Stewart–Pyatt" (green dashed curve) and "ATBASE + Ecker–Kroll" (black dash-dotted curve), give quite different spectroscopic predictions for the same plasma conditions. These differences are quantitatively distinguishable: (1) the Kα-emission peak shifts by ~20 eV in the two CRE models when compared to VERITAS for the low-temperature case shown in Fig. 1b; (2) the Kα-emission peak from VERITAS is more than two-fold stronger than in both CRE models, while the Ecker–Kroll model predicts a 1s–2p absorption feature even at kT = 200 eV; (3) at the higher temperature of kT = 300 eV, all models predict 1s–2p absorption, although the ATBASE + Ecker–Kroll model gives a wider and stronger absorption feature, as indicated by Fig. 1c; and (4) at this temperature the VERITAS and Stewart–Pyatt models give a similar absorption width, but the latter shows a "double-dip" feature.

To investigate what detailed atomic physics drives these different observations, we compare in Table 1 the free-electron density, average Cu ionization (\(Z^{*}_{\mathrm{Cu}}\)), and Cu 2p population predicted by the following spectral models: ATBASE + Stewart–Pyatt (Spect3D, a CRE code), DFT + QMD (VERITAS), DFT + AA (Muze), FAC + AA [the flexible atomic code with the plasma environment inferred by an average-atom (AA) model], and one other CRE code, SCRAM.
The FAC + AA model uses FAC-code calculations for the atomic structure of a Cu atom "embedded" in a CH plasma mixture, in which the plasma environment is described by an average-atom (AA)-type model. It embodies a similar "spirit" to DFT, with a self-consistent-field (SCF) calculation of plasma screening for an atom embedded in a plasma mixture.

Table 1: Comparisons of free-electron density (nₑ), average ionization \(Z^{*}_{\mathrm{Cu}}\), and 2p population (f₂ₚ) of Cu in warm/hot dense plasmas.

The comparison indicates that both \(Z^{*}_{\mathrm{Cu}}\) (which governs Kα shifts) and the depletion of 2p (which controls the Kα intensity) are similar among the three DFT-based models [VERITAS, Muze, and FAC + AA]. By contrast, the traditional CRE models with similar ad hoc continuum-lowering treatments differ from the self-consistent models and even from each other. These noticeable differences motivated us to design and perform experiments in a similar regime, aiming to inform the development of a more reliable HED atomic physics model for radiation generation and transport in dense plasmas.

Experimental setup and diagnostics

The experiment used a spherical, laser-driven implosion on the Omega Laser Facility. The target, shown schematically in Fig. 2a, consists of a 30-μm-thick polystyrene (CH) shell with a 10-μm-thick layer uniformly doped with 2% Cu (atomic fraction) and a 1%-Ar-doped deuterium (D₂Ar[1%]) core fill. The 10-μm-thick Cu-doped layer begins ~3 μm from the inner surface of the CH shell. The target was imploded by 60 laser beams on OMEGA with a 1-ns square pulse having a total energy of ~26 kJ. When the laser pulse irradiates the spherical capsule, laser ablation launches a strong shock wave that compresses the target. After the shock breaks out of the inner surface of the shell into the gas-filled core, the shell is accelerated inwards until it stagnates at a certain radius. At stagnation, the contained gas is compressed and heated to form a hot core, which emits x-rays that probe the stagnating shell and enable our spectroscopic measurements.

Fig. 2: Time-resolved x-ray spectroscopy experiment on warm/hot dense plasmas at white-dwarf envelope conditions of Gbar pressures. (a) Schematic targets for implosion spectroscopy on OMEGA. (b) Example of streaked spectra measured in experiments. (c) The pressure-density region probed by various HED experiments: GEKKO [47], OMEGA [48], Nova [45], NIF by Doeppner et al. [11], NIF by Kritcher et al. [51,52], and NIF by Fletcher et al. [49], as well as non-Hugoniot work by Doeppner et al. [50] on NIF. (d) The density-temperature conditions of a typical white dwarf of 0.6 M☉ (0.6 solar masses) as it cools from a hot and young state (right) to older and colder structures (left). Convective regions in the star are shown in red. The regime probed by the experiments is indicated by the green dashed circle. Inferred from DRACO simulations, the plasma temperature and density conditions of the imploding Cu-doped layer vary from kT ≈ 10–50 eV and ρ ≈ 2–10 g cm⁻³ (in-flight stage) to kT ≈ 200–500 eV and ρ ≈ 10–25 g cm⁻³ during stagnation.

Both time-integrated and time-resolved x-ray spectrometers were used to record the emergent radiation spectrum (see Methods for further details). Figure 2b shows a typical time-resolved x-ray spectrum in the photon-energy range of 7800 to 8600 eV, which clearly shows the Cu emission and absorption features of interest during the shell stagnation and core flash.
Implosion performance

These high-adiabat (α = 10) and low-velocity (~250 km s⁻¹) implosions are stable against laser imprint and other perturbations, as indicated by one- and two-dimensional radiation-hydrodynamic simulations using the LILAC and DRACO codes (see below), as well as by integrated experimental measurements of the implosion performance (see Table 2). Table 2 shows that the DD fusion neutron yield, neutron-averaged ion temperature ⟨Ti⟩n, neutron-averaged shell areal density ⟨ρR⟩n, and neutron bang time are in close agreement with the LILAC and DRACO simulations. Based on these observations, we can reasonably post-process the radiation-hydrodynamic simulations with atomic physics models to obtain synthetic x-ray spectra and compare them to experimental measurements.

Table 2: Comparisons of implosion performance between experiment and DRACO simulation.

Compared to other shocked-CH studies [45,46,47,48,49,50,51,52], carried out mainly along the principal Hugoniot, our experiment has extended the pressure and density conditions at which both time-integrated and time-resolved x-ray spectroscopic measurements have been conducted: gigabar (Gbar) pressures and ~15–20× solid density, as indicated by Fig. 2c. Inferred from DRACO simulations, the plasma temperature and density conditions in the imploding Cu-doped layer vary from kT ≈ 10–50 eV and ρ ≈ 2–10 g cm⁻³ (in-flight stage) to kT ≈ 200–500 eV and ρ ≈ 10–25 g cm⁻³ during stagnation. The corresponding pressure in the compressed shell changes from ~50 Mbar to a maximum value approaching ~5 Gbar. When these dense-plasma conditions achieved on OMEGA are cast onto the density-temperature conditions of a typical white dwarf (0.6 M☉) during its cooling phase, Fig. 2d shows that the experiment can potentially probe the equation of state and transport properties of the convective region of a white dwarf's envelope. Accurate knowledge of the ionization balance in such conditions could directly affect the modeling of conduction and the radiative cooling of white dwarfs.

Spectroscopic modeling

Using the DRACO-simulated dynamic plasma conditions, we investigated x-ray generation and transport through the target using two CRE models (ATBASE and FAC) and the DFT-based kinetic code VERITAS. The predicted time-integrated spectra are compared with the experimental measurements in Fig. 3, in which the x-ray signal is plotted as a function of photon energy [all normalized to the continuum signal level at 7800 eV]. The experimental spectra (Fig. 3b) show both the pronounced Kα emission peaked at ~8042 eV and the 1s–2p absorption of Cu in the higher photon-energy range of 8100–8250 eV. Both the location and amplitude of the emission and absorption features are appropriately captured by VERITAS (Fig. 3a).

Fig. 3: Comparison of time-integrated Kα-emission and 1s–2p absorption signals between experiment and models. (a) The DFT-based VERITAS calculations. (b) The experimental measurement. (c) The CRE model calculations with the atomic database (ATBASE) in combination with the Stewart–Pyatt and Ecker–Kroll continuum-lowering models. (d) The CRE model calculations using the flexible atomic code (FAC) with the two continuum-lowering models. The time integration in the calculations was done from t = 1.7 ns to t = 2.4 ns during the hot-spot flash, with snapshots at 20-ps intervals.
Figure 3c, d shows the Spect3D simulation results, in which either the atomic database (ATBASE) or the flexible atomic code (FAC) calculations are combined with the Ecker–Kroll and Stewart–Pyatt continuum-lowering models. When these CRE results are compared to experiments, they give a conflicting conclusion about the continuum-lowering model. Namely, the experimental emission and absorption features are qualitatively reproduced by the two CRE simulations "ATBASE + Stewart–Pyatt" and "FAC + Ecker–Kroll" in Fig. 3c, d (though the emission peaks are too high), while the other two combinations drastically disagree with experiments. This illustrates again the dilemma of the traditional spectroscopic treatment of warm dense plasmas: which ad hoc continuum-lowering model works better depends on the atomic physics model that is invoked. The resemblance between the FAC + Ecker–Kroll model (Fig. 3d) and experiments is likely coincidental, as other recent measurements [53] of ionization-potential depression have defied the Ecker–Kroll model. Overall, the DFT-based VERITAS model, without invoking an ad hoc continuum-lowering model, better reproduces the x-ray signal observed in the experiments. Nonetheless, one can see that the VERITAS-predicted continuum slope, Kα-emission amplitude, and 1s–2p absorption width are still slightly mismatched with respect to the experiment. This small spectroscopic discrepancy might be attributed to some unavoidable three-dimensional effects, even though the time-integrated implosion measurements overall agree with 2D DRACO simulations. For instance, the stalk perturbation could inject a small, localized portion of the Cu-doped layer closer to the hot spot, which, to some extent, could contribute to the measured spectra in ways that are not accounted for in the 2D model. These small differences are further discussed in the Supplementary Information.

Time-resolved x-ray spectrum

Experimental and synthetic time-resolved x-ray signals are presented in Fig. 4. The top three panels compare the measured streak-camera image of the x-ray emission and absorption over the core flash (Fig. 4b) with predictions from the ATBASE + Stewart–Pyatt model (Fig. 4a) and VERITAS (Fig. 4c). Figure 4d–f gives quantitative comparisons at t = 1.95 ns, t = 2.05 ns, and t = 2.15 ns. All cases are normalized to each other at the same continuum signal level. The experimental time-resolved spectrum shows pronounced Kα emission early in time (Fig. 4d), which changes to dominant 1s–2p absorption as time proceeds.

Fig. 4: Comparison of time-resolved x-ray signals between experiment and models during the core flash. (a) The streaked spectra predicted by the traditional CRE model (Spect3D) with the isolated-atom database plus continuum lowering (Stewart–Pyatt). (b) The experimental measurement. (c) The streaked spectra predicted by VERITAS (a DFT-based kinetic model). (d–f) Spectral comparisons among the three cases at three distinct time lineouts: t = 1.95 ns, t = 2.05 ns, and t = 2.15 ns. The experimental error bar of ±40% is mainly from the x-ray photon statistics of the streaked signal.

At early times, the DFT-based kinetic model (VERITAS) agrees well with the Kα-emission measurements, while the ATBASE + Stewart–Pyatt model over-predicts the emission peak and resolves the Kα1 and Kα2 spectral lines (due to less broadening), as shown in Fig. 4a. At t = 2.05 ns the heat wave reaches the Cu-doped layer, leading to a 1s–2p absorption "dip" in Fig. 4b. Again, the ATBASE + Stewart–Pyatt model gives a stronger absorption dip at lower photon energy in comparison to experiment, while VERITAS shows the same level of 1s–2p absorption depth. It is noted that the experimental 1s–2p absorption feature is somewhat wider than in the model predictions. This slight discrepancy might arise because regions of the Cu-doped CH were driven deeper towards the hot spot by the stalk or other 3D perturbations in the experiments. Finally, when the shock and heat wave have propagated through most of the CHCu[2%] layer, VERITAS still captures the experimental absorption level and width appropriately (Fig. 4f), while the ATBASE + Stewart–Pyatt model gives somewhat stronger absorption (green dashed curve).

Discussion

The spectroscopic evolution from Kα emission to 1s–2p absorption is directly related to the plasma conditions that are dynamically changing in the Cu-doped layer during stagnation. The density and temperature contours of the imploding shell at stagnation are depicted in the upper and lower panels of Fig. 5a, respectively, as predicted by DRACO simulations. They show the formation of a hot D₂Ar[1%] core, with a return shock reaching the Cu-doped CH layer, and a heat wave from the hot core propagating outward by thermal conduction.

Fig. 5: The radiation-hydrodynamics-predicted warm/hot dense plasma conditions during the core flash of Cu-doped CH target implosions on OMEGA. (a) The density (upper) and temperature (lower) contour plots of dense-plasma conditions at stagnation (t = 2.05 ns) from the 2D DRACO radiation-hydrodynamic simulation. The inner and outer dotted circles indicate the inner and outer boundaries of the Cu-doped layer, whose region is marked by the black arrow. (b) The time evolution of the plasma ρ/T conditions as well as the population of Cu's 2p state in the Cu-doped region (inferred by VERITAS) during the core flash, in which red symbols represent the situation at the inner interface and blue symbols the outer interface. The stagnation time (t = 2.05 ns) is marked by the vertical dashed line.

To further illustrate this process, we plot in Fig. 5b the angularly averaged plasma density and temperature at the inner and outer surfaces of the CHCu[2%] layer as a function of time. One sees that the return shock reaches the inner surface of the sample layer at t = 1.95 ns, causing a density jump and shock heating to a temperature of kT = 250 eV; a heat wave follows the return shock due to strong thermal conduction from the hot core as a result of the large temperature gradient, heating the CHCu[2%] layer to kT ≈ 540 eV (middle panel of Fig. 5b); finally, the return shock reaches the outer surface of the sample layer at a later time of t = 2.2 ns. Using these plasma conditions, we show the history of the Cu 2p-band population, as predicted by VERITAS, in the lower panel of Fig. 5b. For a fully occupied 2p band in Cu, there are six electrons in this band (energy level). The population of this 2p band starts to deplete significantly at t = 2.0 ns, when the heat wave raises the sample layer's temperature to over 300 eV, leading to the onset of 1s–2p absorption, which is observed in the time-resolved spectra (Fig. 4e). Before this time, the fully occupied 2p band does not allow 1s–2p absorption to occur, so that Kα emission is the dominant feature in the x-ray spectra measured at early times (Fig. 4d).
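The link between temperature and 2p occupancy can be illustrated with a toy Fermi–Dirac estimate: in a thermal-DFT picture, band occupations follow Fermi–Dirac statistics, so a deep, six-fold-degenerate 2p-like band begins to deplete once kT becomes a non-negligible fraction of its depth below the chemical potential. The band energy and chemical potential below are placeholders (in a real calculation μ(ρ, T) is fixed self-consistently by charge neutrality), so only the qualitative trend should be read from the numbers.

```python
import numpy as np

def band_population(E_band_eV, mu_eV, kT_eV, degeneracy):
    """Fermi-Dirac occupation of a band at temperature kT (all in eV)."""
    return degeneracy / (np.exp((E_band_eV - mu_eV) / kT_eV) + 1.0)

# Placeholder values: a deep 2p-like level at -1000 eV and mu pinned at 0 eV.
E_2p, mu, g_2p = -1000.0, 0.0, 6  # six electrons when the band is full

for kT in (100.0, 200.0, 300.0, 500.0):
    f_2p = band_population(E_2p, mu, kT, g_2p)
    print(f"kT = {kT:5.0f} eV -> 2p population = {f_2p:.2f} of 6")
```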
The plasma conditions change throughout the sample layer as the return shock and heat wave propagate through the CHCu[2%] layer. The observed spectrum represents this competition between Kα emission and shifted 1s–2p absorption from different radial locations. Namely, the unshocked and colder regions give pronounced Kα emission, while the heated parts contribute dominantly to the 1s–2p absorption feature. These processes compete in generating and transporting radiation and determine what is measured by the x-ray spectrometers. Overall, the DFT-based VERITAS model reasonably describes the dynamic change in the measured x-ray spectral features. The traditional CRE models might give the proper level of both Kα emission and 1s–2p absorption, but their predictions tend to be highly dependent on their underlying atomic-structure and continuum-lowering models, which can make it difficult to isolate the physical effects of interest. For these high-adiabat and relatively low-velocity implosion studies on OMEGA, it is noted that the x-ray spectroscopy data are reproducible (see Supplementary Information).

To summarize, we have performed a theoretical and experimental study of atomic physics in Cu-doped plastic at several billion atmospheres of pressure. Overall, a DFT-based approach reproduces many of the emission and absorption features that are observed in the experiment, while traditional plasma-spectroscopy treatments show sensitivity to the combination of atomic physics and continuum-lowering models that are implemented. This sensitivity contributes to the present open questions on the validity of ad hoc continuum-lowering models (see also ref. [54]). This work indicates the necessity of a self-consistent treatment of dense-plasma effects on altering atomic energy levels/bands and their populations at ultrahigh pressures. The DFT-based VERITAS approach, with potential future benchmarks using other buried metal and metal-alloy layers, could provide a reliable way to simulate radiation generation and transport in the dense plasmas encountered in stars and inertial fusion targets. The experimental scheme reported here, based on a laser-driven implosion, can be readily extended to a wide range of materials in single- and multiple-shell geometries, opening the way for far-reaching investigations of extreme atomic physics and DFT models at tremendous pressures.

Methods

Implosion experiment on OMEGA

The experiments were conducted with a symmetric laser drive using 60 OMEGA laser beams. Standard implosion diagnostics were used for these experiments, including a neutron yield detector [55], wedge range filters for areal-density measurements [56], and the neutron-timing diagnostic (NTD) for ion temperature and bang time [57].

Measurement of x-ray emission spectra

Bragg reflection crystal spectrometers recorded time-integrated and time-resolved x-ray spectra in the energy range of 7800–8600 eV. One spectrometer [58] was coupled to an x-ray streak camera to achieve 80-ps time resolution; the other was coupled to an x-ray-sensitive image plate.

Conversion to source emission

From pinhole-camera measurements of such implosions, the estimated x-ray source size is ~100 μm in diameter. With respect to the x-ray spectrometers, which are 13–19.3 cm away from target chamber center, the imploded capsule can be represented as a point source.
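As a quick back-of-the-envelope check of the point-source approximation, using only the numbers quoted above:

```python
# Angular size of a ~100-um emitter at the quoted spectrometer distances.
source_diameter_m = 100e-6
for distance_m in (0.13, 0.193):  # 13 cm and 19.3 cm
    mrad = source_diameter_m / distance_m * 1e3
    print(f"distance = {distance_m*100:4.1f} cm -> angular size = {mrad:.2f} mrad")
```

At well below a milliradian, the source is effectively point-like for these spectrometers.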
The measured spectra were converted to the source emission \(S_\nu\) incident on each resolution element:

$$S_\nu \left[\frac{\mathrm{ph}}{\mathrm{sr}\cdot \mathrm{eV}}\right] = \frac{I_\nu}{f(E)\,T(E)\,G(E)},$$

where \(I_\nu\) is the measured signal density (photo-stimulated luminescence (PSL) per pixel for the image plate (IP) and analog-to-digital units (ADU) per pixel for the streak camera), \(f(E)\) is the instrument sensitivity function (signal (ADU or PSL) per photon), \(T(E)\) is the filter transmission, and \(G(E) = R(E)\,\frac{dE}{d\theta}\,\frac{d\Omega}{dA}\) is the spectrometer crystal response [59]. \(f(E)\) is constructed from calibration measurements and detector models for both the IP [60] and the streak camera [61,62]. The integrated reflectivity \(R(E)\) is calculated with the x-ray optics software XOP [63]. \(\frac{dE}{d\theta}\) and \(\frac{d\Omega}{dA}\) are calculated from a geometric ray trace for each spectrometer.

Statistical uncertainty due to photon statistics in the time-integrated spectrometer is low, of order 0.5% after averaging over pixels in the non-dispersive dimension. In the time-resolved spectrometer, stochastic processes inherent to the streak-camera amplification dominate the statistical uncertainty, yielding ~30% fractional uncertainty after averaging over a resolution element of 80 ps. Systematic uncertainties include the calibration measurements, filter thicknesses, and the crystal integrated reflectivity. The overall resolving power of the x-ray spectrometers is calibrated to be E/ΔE ≈ 1100 and is primarily limited by source-size broadening and crystal rocking-curve effects [58].

Streak camera time base

The time base \(t(x)\) was measured on separate shots by irradiating a Au foil with a train of laser pulses of known timing. The integration time of each pixel ("dwell time"), \(\Delta t = \frac{dt(x)}{dx}\), is used to calculate the source emission rate

$$\frac{dS_\nu}{dt}\left[\frac{\mathrm{ph}}{\mathrm{sr}\cdot \mathrm{eV}\cdot \mathrm{s}}\right] = \frac{S_\nu}{\Delta t}.$$

X-rays at the high energies of this spectrometer predominantly originate from the core; assuming that the x-ray emission closely follows neutron production, the time base is shifted to align the time of peak x-ray emission with the time of peak neutron production.

Radiation-hydrodynamic simulations

The radiation-hydrodynamic simulations of the implosion experiments were performed with the 1-D LILAC [64] and 2-D DRACO [65] codes developed at the Laboratory for Laser Energetics. State-of-the-art physics models are employed in these simulations, including 3-D ray tracing for laser energy deposition with a cross-beam energy-transfer (CBET) model [66], the iSNB nonlocal thermal-transport model [67], and first-principles equation-of-state (FPEOS) [8,68,69] and opacity (FPOT) [70,71] tables for the constituent materials. For radiation energy transport, a multi-group diffusion model was used in DRACO with 48-group opacity tables. Cylindrical symmetry was enforced in the 2D DRACO simulations, in which r-z coordinates are employed with the azimuthal symmetry axis along the z-axis. The quasi-1D nature of such high-adiabat implosions should lead to small 2D or 3D effects in the x-ray emission calculations, justifying the use of 2D (as opposed to 3D) simulations.
Laser imprinting was simulated up to a maximum laser-speckle mode of ℓ = 150 in DRACO, even though these simulations showed little effect of laser perturbations on such high-adiabat implosions with 1-ns square pulses at a laser intensity of ~1.1 × 10¹⁵ W cm⁻². The DRACO simulation results were compared with experiments (e.g., see Table 2). The time-dependent 2-D density and temperature profiles predicted by the DRACO simulations were used for further processing by a variety of CRE models and by VERITAS, for x-ray spectral comparisons with the experimental measurements.

Collisional-radiative equilibrium (CRE) modeling

Taking the DRACO-predicted plasma density and temperature profiles, we applied the simulation package Spect3D [21] to perform the CRE modeling of x-ray spectra from these implosions. Spect3D uses atomic databases and continuum-lowering models to track energy-level populations, which are coupled to nonlocal radiation transport for x-ray generation (e.g., Kα emission of Cu), absorption, and propagation. The spectral resolving power (E/ΔE = 1100), temporal resolution (δt = 80 ps), and spatial resolution (δx = 10 μm) were applied to the synthetic x-ray spectra from Spect3D. In the Spect3D simulations, we processed 10 equally angle-spaced radial lineouts of the DRACO-predicted density and temperature profiles along different angular directions with kinetic models incorporating detailed atomic physics and radiation transport. We then averaged the resulting spectra among the radial lineouts. Given the largely 1D-like performance of such high-adiabat implosions (see Fig. 5a and Table 2), this quasi-2D treatment should be reasonable, avoiding the time-consuming computation of 2D radiation transport with detailed atomic kinetics on a relatively large 2D grid of 601 × 591 cells. Nevertheless, the use of 1-D radial lineouts in lieu of a full 2D or 3D analysis could be an additional cause of the small discrepancies observed in the VERITAS-experiment comparisons.

VERITAS: DFT-based multi-band kinetic modeling

The VERITAS code, developed in this work, is based on a density-functional theory (DFT) description of the energy bands in a dense plasma. The kinetic modeling of the multi-band populations (\(n_i\)) is coupled with radiation transfer, as described by the following coupled equations for the steady-state condition:

$$\frac{dn_i}{dt} = -n_i \sum_{j\neq i}^{N} W_{ij}(I,\nu) + \sum_{j\neq i}^{N} n_j W_{ji}(I,\nu) = 0, \quad \text{for band } i,$$

$$\mu \frac{\partial I(r,\nu)}{\partial r} + \frac{1-\mu^2}{r}\frac{\partial I(r,\nu)}{\partial \mu} = \eta(r,\nu) - \chi(r,\nu)\,I(r,\nu), \qquad (1)$$

with \(W_{ij}\) being the transition rates among the total of \(N\) bands considered, which may depend on the specific intensity \(I(r,\nu)\) of x-rays at radius r and frequency ν (e.g., for photoionization and stimulated radiative processes). Here, the line of sight is along the z-axis, which makes an angle θ with the 1-D spherical radial coordinate r, i.e., \(\mu = \cos(\theta)\). The rate equation describes the population change of each band at the steady-state condition (dn/dt = 0), due to radiative processes among the dipole-transition-allowed energy bands.
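The second line of Eq. (1) admits a standard zone-by-zone formal solution along a ray. The sketch below marches a single ray through a few concentric shells whose emissivity and opacity would, in VERITAS, be rebuilt from the band populations at each iteration of the coupled system; here the zone properties are frozen toy placeholders, so only the transfer step of the loop is shown.

```python
import numpy as np

# Toy 1-D ray march through concentric zones: each zone carries an
# emissivity eta and opacity chi that, in the full model, come from the
# band populations; all numbers below are placeholders, not VERITAS data.
zones = [
    # (path length dr, emissivity eta, opacity chi)
    (1.0, 5.0, 0.1),   # hot core: strong continuum emission
    (0.5, 0.2, 2.0),   # CHCu[2%] witness layer: strong 1s-2p absorption
    (1.5, 0.05, 0.3),  # outer CH shell
]

I = 0.0  # intensity entering the innermost zone
for dr, eta, chi in zones:
    S = eta / chi                 # source function of the zone
    dtau = chi * dr               # optical depth of the zone
    # Formal solution for a zone with constant source function:
    I = I * np.exp(-dtau) + S * (1.0 - np.exp(-dtau))
    print(f"dtau = {dtau:4.2f} -> emergent I = {I:.3f}")
```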
For example, the rate of population change of band i (due to radiative coupling to band j with \(E_i < E_j\)) can be defined as the sum of the depopulating term \(-n_i W_{ij} = -n_{ij} B_{ij} \bar{I}_{ij}\) (stimulated absorption from i to j) and the populating term \(n_j W_{ji} = n_{ji} A_{ji} + n_{ji} B_{ji} \bar{I}_{ij}\) (spontaneous and stimulated emission from j to i). Note that for the case \(E_i > E_j\), only the depopulating term appears in the rate equation for band i. Here, \(n_{ij}\) and \(n_{ji}\) are the maximum populations allowed for the corresponding radiative process. For instance, \(n_{ji}\) depends on the number of "holes" (depletion) in band i and the weighted population of band j: \(n_{ji} = \min[(n_{i,\mathrm{full}} - n_i),\ (n_j \times g_i/g_j)]\), with \(n_{i,\mathrm{full}}\) being the fully occupied population of band i and \(g_i\) (\(g_j\)) standing for the degeneracy of band i (j). The Einstein coefficients A and B are related to the oscillator strength between bands i and j, which can be calculated using the DFT-determined orbitals. The frequency-averaged mean radiation intensity is defined as \(\bar{I}_{ij} = \int I(\nu)\,\phi_{ij}(\nu - \nu_{ij})\,d\nu\), with the Voigt line profile \(\phi_{ij}(\nu - \nu_{ij})\) centered at the frequency \(\nu_{ij}\) corresponding to the energy gap between bands i and j, and with the line-broadening models discussed later. The emissivity \(\eta(r, n, \nu)\) and the absorption coefficient \(\chi(r, n, \nu)\) depend on the band populations (n) of the dense plasma at the local grid point r and on the radiation frequency ν.

The population changes of the multiple energy bands are kinetically modeled by the rate equation (top equation of Eq. (1)), in which the radiative transition coefficients among the different energy bands are calculated using the DFT-determined orbitals, in contrast to the use of isolated-atom atomic databases plus continuum lowering in traditional CRE models. The DFT-based rate equation is then coupled with the radiation-transfer equation (bottom equation of Eq. (1)) to simulate x-ray photoionization, emission, and band-band absorption processes throughout the density-temperature grid given by the radiation-hydrodynamic codes. The radiation field at each spatial grid point is determined by self-consistently solving the coupled rate and radiation-transfer equations until a steady-state solution for the populations is achieved, similar to the procedure employed in Spect3D. Since the DFT description of the energy bands in dense plasmas is self-consistent with the plasma environment, the band-energy shift is naturally included for a given plasma condition. Thus, there is no need to invoke a continuum-lowering model in VERITAS.
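The frequency-averaged mean intensity \(\bar{I}_{ij}\) defined above is straightforward to evaluate numerically once a line profile is chosen. A minimal sketch follows, with placeholder line-center and width parameters (SciPy's voigt_profile supplies a unit-area Voigt function):

```python
import numpy as np
from scipy.special import voigt_profile

# I_bar_ij = integral of I(nu) * phi(nu - nu_ij) d(nu), with a Voigt
# profile. All line parameters below are illustrative placeholders.
nu_ij = 8100.0            # line-center photon energy in eV
sigma, gamma = 3.0, 6.0   # Gaussian and Lorentzian width parameters, eV

nu = np.linspace(nu_ij - 100.0, nu_ij + 100.0, 4001)
phi = voigt_profile(nu - nu_ij, sigma, gamma)  # normalized to unit area

I_nu = 1.0 + 5e-4 * (nu - 7800.0)  # toy, slowly varying radiation field
I_bar = np.trapz(I_nu * phi, nu)   # profile-weighted mean intensity
print(f"profile norm = {np.trapz(phi, nu):.4f}, I_bar = {I_bar:.4f}")
```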
In principle, many energy bands can be included in VERITAS, without prescribing which are bound and which belong to the continuum, even though such a designation can be determined from density-of-states (DOS) calculations. Specifically, we have included the measurement-relevant energy bands [1s, 2p, 3p, and the continuum of copper (Cu)] for modeling the implosion spectroscopy experiments. The main radiative processes considered among Cu's energy bands are: 1s ↔ continuum (photoionization/radiative recombination), 1s ↔ 2p (band-band absorption/Kα emission), 1s ↔ 3p (band-band absorption/Kβ emission), 2p ↔ continuum (photoionization/radiative recombination), and 3p ↔ continuum (photoionization/radiative recombination). Even though we focus on the 1s ↔ 2p transition spectra, the bound 3p band of Cu is included to ensure that all relevant population and depletion channels of the 1s band are properly accounted for in the kinetic simulations. The exclusion of the 2s and 3s bands from the VERITAS modeling of the current experiments is based on the fact that their transitions to the 1s band are dipole-forbidden and their couplings to 2p and 3p lie outside the spectral range of interest. At the plasma conditions encountered here, the n = 4 bands of Cu have already merged into the continuum.

For the conditions studied here (20–500 eV and 2–20× solid density), the rates of electron collisional processes are so high that local thermodynamic equilibrium is well maintained. Thus, one can take the thermal-DFT-predicted band populations as the starting point in VERITAS and simulate the aforementioned radiative processes only, while the fast electron collisional processes are assumed to balance each other so that they can be omitted from the current VERITAS simulations.

To enable the VERITAS simulations of these x-ray spectroscopy experiments, we first built a DFT-based table storing the relevant transition frequencies and oscillator strengths among Cu's energy bands, for a density and temperature grid of CHCu[2%] spanning the mass-density and temperature ranges of ρ = 2–50 g cm⁻³ and kT = 10–500 eV. For each density and temperature condition, we performed quantum molecular-dynamics (QMD) simulations to sample a variety of ionic configurations of the dense CHCu[2%] plasma, based on the thermal-DFT formalism in either the Kohn-Sham orbital-based format or the orbital-free scheme. These DFT-based QMD calculations were performed using ABINIT [72] and our in-house OFMD code (DRAGON), with the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional [73]. For temperatures below ~50 eV, the QMD simulations were done using the orbital-based Kohn-Sham DFT implemented in ABINIT, while for higher temperatures (kT > 50 eV) we turned to orbital-free DFT for our QMD simulations. Taking snapshots from the QMD calculations, we performed the oscillator-strength calculations for the radiative transitions using the Kubo-Greenwood package KGEC@QUANTUM-ESPRESSO [74], with the all-active-electron projector-augmented-wave (PAW) potential [75]. These choices of DFT packages, exchange-correlation functional, and potentials follow standard practice in the warm-dense-matter physics community.
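At run time, such a pre-computed table reduces to an interpolation on the (ρ, kT) grid. A minimal sketch is shown below: only the table ranges (ρ = 2–50 g cm⁻³, kT = 10–500 eV) come from the text, while the grid spacing and the stored oscillator-strength values are random placeholders standing in for the QMD/Kubo–Greenwood output.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

rho_grid = np.linspace(2.0, 50.0, 25)    # g/cc (spacing is a placeholder)
kT_grid = np.linspace(10.0, 500.0, 50)   # eV  (spacing is a placeholder)
rng = np.random.default_rng(0)
f_1s2p = rng.uniform(0.1, 0.3, size=(rho_grid.size, kT_grid.size))

# Build a lookup over the (rho, kT) grid of stored oscillator strengths.
lookup = RegularGridInterpolator((rho_grid, kT_grid), f_1s2p)

# Query at conditions delivered by the rad-hydro (DRACO-style) grid:
conditions = np.array([[20.0, 200.0], [20.0, 300.0]])  # (rho, kT) pairs
print(lookup(conditions))  # interpolated oscillator strengths
```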
The band-energy deficiency from the DFT calculations was compensated with constant shifts derived from comparison with the experimental energy levels of Cu at ambient conditions. The band-energy "deficiency" from DFT refers to the small (~1–2%) difference between the DFT-calculated 1s-2p energy gap of Cu and the experimentally measured Kα energy at ambient conditions. This small band-gap difference is a known intrinsic feature of DFT with approximated exchange-correlation functionals (e.g., the PBE functional used here), which suffer from self-interaction error. These DFT calculations invoked 100–200 atoms of C, H, and Cu, according to their atomic fractions, in a supercell with periodic boundary conditions, and were converged with respect to the number of bands, k-points, and energy cutoff. With such a pre-calculated DFT table accessible to VERITAS, the radiative transition rates can be computed at any spatial grid point for the plasma conditions of CHCu[2%] given by the rad-hydro simulations. It is noted that the kinetic modeling was done only for the sample CHCu[2%] layer, while radiation transport in the D₂Ar and pure-CH plasmas was calculated using the emissivity and opacity tables from PrOpacEOS [76], the same as those used in the CRE modeling.

In principle, the band broadening of Cu can be determined from direct DFT-based QMD calculations. However, due to the limited number of Cu atoms involved in such demanding calculations, the resulting band width (broadening) is currently not reliable because of insufficient sampling of the charge-state distribution (CSD). Instead, we have adopted in VERITAS the temperature- and density-dependent broadening information from both SCRAM [22] and FAC [77] calculations for Stark (with an enhancement factor of ~5) and CSD broadening effects, as well as the Doppler shift due to fluid motion, in a Voigt line profile. Both the SCRAM and FAC codes consider traditional plasma-broadening mechanisms, including electron thermal-collision broadening [78], Stark broadening due to ion microfields [79], and broadening from the charge-state distribution [80]. While all of these broadening mechanisms can explain the line-shape observations in low-density, high-temperature classical plasmas, they appear unable to account for the enhanced broadening seen in the dense plasmas created and reported here. We speculate that the current treatment of microfield-induced Stark broadening might miss some of the density effects from coupled ions in such dense plasmas, hence the ad hoc five-fold increase in broadening applied to the VERITAS results. We hope that the experimental observations of enhanced broadening reported here will motivate future investigations of how density effects change line broadening in warm dense plasmas. Finally, since VERITAS is based on the DFT description of dense plasmas, we expect its low-density limit to be around or slightly below ambient solid density, below which DFT calculations are no longer practically feasible and traditional atomic physics models should work better.

Data availability

The experimental data, Spect3D simulation data, and VERITAS simulation data that support the findings of this study are available from the corresponding authors upon request.

Code availability

The VERITAS code that supports the findings of this study is available from the corresponding authors upon request.

More information: S. X. Hu et al., Probing atomic physics at ultrahigh pressure using laser-driven implosions, Nature Communications (2022). DOI: 10.1038/s41467-022-34618-6 (https://dx.doi.org/10.1038/s41467-022-34618-6)

Journal information: Nature Communications

News article: https://phys.org/news/2022-11-first-of-its-kind-experimental-evidence-defies-conventional.html
Most people are familiar with solids, liquids, and gases as three states of matter. However, a fourth state of matter, called plasma, is the most abundant form of matter in the universe, found throughout our solar system in the sun and other planetary bodies. Because dense plasma—a hot soup of atoms with free-moving electrons and ions—typically only forms under extreme pressure and temperatures, scientists are still working to comprehend the fundamentals of this state of matter.

Understanding how atoms react under extreme pressure conditions—a field known as high-energy-density physics (HEDP)—gives scientists valuable insights into the fields of planetary science, astrophysics, and fusion energy. One important question in the field of HEDP is how plasmas emit or absorb radiation. Current models depicting radiation transport in dense plasmas are heavily based on theory rather than experimental evidence.

In a new paper published in Nature Communications, researchers at the University of Rochester Laboratory for Laser Energetics (LLE) used LLE's OMEGA laser to study how radiation travels through dense plasma. The research, led by Suxing Hu, a distinguished scientist and group leader of the High-Energy-Density Physics Theory Group at the LLE and an associate professor of mechanical engineering, and Philip Nilson, a senior scientist in the LLE's Laser-Plasma Interaction group, provides first-of-its-kind experimental data about the behavior of atoms at extreme conditions. The data will be used to improve plasma models, which allow scientists to better understand the evolution of stars and may aid in the realization of controlled nuclear fusion as an alternative energy source.

"Experiments using laser-driven implosions on OMEGA have created extreme matter at pressures several billion times the atmospheric pressure at Earth's surface for us to probe how atoms and molecules behave at such extreme conditions," Hu says. "These conditions correspond to the conditions inside the so-called envelope of white dwarf stars as well as inertial fusion targets."

[Image: A NASA image of plasma bursting from the sun. Plasma—a hot soup of atoms with free-moving electrons and ions—is the most abundant form of matter in the universe, found throughout our solar system in the sun and other planetary bodies. Credit: NASA]

Using X-ray spectroscopy

The researchers used X-ray spectroscopy to measure how radiation is transported through plasmas. X-ray spectroscopy involves aiming a beam of radiation in the form of X-rays at a plasma made of atoms—in this case, copper atoms—under extreme pressure and heat. The researchers used the OMEGA laser both to create the plasma and to create the X-rays aimed at the plasma.

When the plasma is bombarded with X-rays, the electrons in the atoms "jump" from one energy level to another by either emitting or absorbing photons of light. A detector measures these changes, revealing the physical processes that are occurring inside the plasma, similar to taking an X-ray diagnostic of a broken bone.

A break from conventional theory

The researchers' experimental measurements indicate that, when radiation travels through a dense plasma, the changes in atomic energy levels do not follow conventional theories currently used in plasma physics models—so-called "continuum-lowering" models.
The researchers instead found that the measurements they observed in their experiments can only be explained using a self-consistent approach based on density-functional theory (DFT). DFT offers a quantum mechanical description of the bonds between atoms and molecules in complex systems. The DFT method was first described in the 1960s and was the subject of the 1998 Nobel Prize in Chemistry. "This work reveals fundamental steps for rewriting current textbook descriptions of how radiation generation and transport occurs in dense plasmas," Hu says. "According to our experiments, using a self-consistent DFT approach more accurately describes the transport of radiation in a dense plasma." Says Nilson, "Our approach could provide a reliable way for simulating radiation generation and transport in dense plasmas encountered in stars and inertial fusion targets. The experimental scheme reported here, based on a laser-driven implosion, can be readily extended to a wide range of materials, opening the way for far-reaching investigations of extreme atomic physics at tremendous pressures." |
Proteins and natural language: Artificial intelligence enables the design of novel proteins

Artificial intelligence (AI) has created new possibilities for designing tailor-made proteins to solve everything from medical to ecological problems. A research team at the University of Bayreuth led by Prof. Dr. Birte Höcker has now successfully applied a computer-based natural language processing model to protein research. Completely independently, the ProtGPT2 model designs new proteins that are capable of stable folding and could take over defined functions in larger molecular contexts. The model and its potential are detailed scientifically in Nature Communications (DOI: 10.1038/s41467-022-32007-7).

Natural languages and proteins are actually similar in structure. Amino acids arrange themselves in a multitude of combinations to form structures that have specific functions in the living organism—similar to the way words form sentences in different combinations that express certain facts.

In recent years, numerous approaches have therefore been developed to use principles and processes that control the computer-assisted processing of natural language in protein research. "Natural language processing has made extraordinary progress thanks to new AI technologies. Today, models of language processing enable machines not only to understand meaningful sentences but also to generate them themselves. Such a model was the starting point of our research. With detailed information concerning about 50 million sequences of natural proteins, my colleague Noelia Ferruz trained the model and enabled it to generate protein sequences on its own. It now understands the language of proteins and can use it creatively. We have found that these creative designs follow the basic principles of natural proteins," says Prof. Dr. Birte Höcker, Head of the Protein Design Group at the University of Bayreuth.

The language processing model transferred to protein evolution is called ProtGPT2. It can now be used to design proteins that adopt stable structures through folding and are permanently functional in this state. In addition, the Bayreuth biochemists have found out, through complex investigations, that the model can even create proteins that do not occur in nature and have possibly never existed in the history of evolution. These findings shed light on the immeasurable world of possible proteins and open a door to designing them in novel and unexplored ways.

There is a further advantage: most proteins that have been designed de novo so far have idealized structures. Before such structures can have a potential application, they usually must pass through an elaborate functionalization process—for example, by inserting extensions and cavities—so that they can interact with their environment and take on precisely defined functions in larger system contexts. ProtGPT2, on the other hand, generates proteins that have such differentiated structures innately and are thus already operational in their respective environments.

"Our new model is another impressive demonstration of the systemic affinity of protein design and natural language processing. Artificial intelligence opens up highly interesting and promising possibilities to use methods of language processing for the production of customized proteins. At the University of Bayreuth, we hope to contribute in this way to developing innovative solutions for biomedical, pharmaceutical, and ecological problems," says Prof. Dr. Birte Höcker.
A research team at the University of Bayreuth, led by Prof. Dr. Birte Höcker, has successfully applied a computer-based natural language processing model to protein research, creating a new model called ProtGPT2. This model can design new proteins that are capable of stable folding and can take on defined functions in larger molecular contexts, without the need for further functionalization. The model works by using principles and processes from natural language processing to generate protein sequences, allowing it to create proteins that adopt stable structures and are permanently functional. The team has found that the model can even create proteins that do not occur in nature and have possibly never existed in the history of evolution, opening up new possibilities for designing proteins in novel and unexplored ways. The potential applications of this technology are vast, with the possibility of developing innovative solutions for biomedical, pharmaceutical, and ecological problems.

Abstract

Protein design aims to build novel proteins customized for specific purposes, thereby holding the potential to tackle many environmental and biomedical problems. Recent progress in Transformer-based architectures has enabled the implementation of language models capable of generating text with human-like capabilities. Here, motivated by this success, we describe ProtGPT2, a language model trained on the protein space that generates de novo protein sequences following the principles of natural ones. The generated proteins display natural amino acid propensities, while disorder predictions indicate that 88% of ProtGPT2-generated proteins are globular, in line with natural sequences. Sensitive sequence searches in protein databases show that ProtGPT2 sequences are distantly related to natural ones, and similarity networks further demonstrate that ProtGPT2 is sampling unexplored regions of protein space. AlphaFold prediction of ProtGPT2 sequences yields well-folded, non-idealized structures with embodiments and large loops and reveals topologies not captured in current structure databases. ProtGPT2 generates sequences in a matter of seconds and is freely available.

Introduction

Natural language processing (NLP) has seen extraordinary advances in recent years. Large pre-trained language models have drastically transformed the NLP field and, with it, many of the tools we use in our daily lives, such as chatbots, smart assistants, and translation machines. Analogies between protein sequences and human languages have long been noted by us and others [1,2]. Protein sequences can be described as a concatenation of letters from a chemically defined alphabet, the natural amino acids, and like human languages, these letters arrange to form secondary structural elements ("words"), which assemble to form domains ("sentences") that undertake a function ("meaning"). One of the most attractive similarities is that protein sequences, like natural languages, are information-complete: they store structure and function entirely in their amino acid order with extreme efficiency. With the extraordinary advances in the NLP field in understanding and generating language with near-human capabilities, we hypothesized that these methods open a new door to approach protein-related problems from sequence alone, such as protein design. Although protein sequences and human languages are not without dissimilarities, their analogies have stimulated the application of NLP methods to protein research problems for decades [2].
Supervised NLP methods, where the input sequences are trained jointly with their labels to produce predictive models, have been applied to various tasks, such as detecting structural similarity or predicting stability [3,4]. A remarkable collection of supervised language models applied to biomolecules is available in the BioSeq-BLM platform [5,6]. Nevertheless, since the inception of the Transformer [7], unsupervised learning, where the training occurs on unlabeled data, has emerged as a versatile tool for language modeling. Several Transformer-based models, such as TCR-BERT [8], epiBERTope [9], ESM [10], ProtTrans [11], and ProteinBERT [12], have shown to be very competitive with other methods [13,14]. Most of these models use BERT-like [15] architectures and denoising-autoencoding training objectives, i.e., they are pre-trained by corrupting the input tokens in some way and trying to reconstruct the original sentence [2]. Although these models could be adjusted for generation [16], their most direct application is sequence embedding.

Another important branch of language models benefits from autoregressive training, i.e., models are trained to predict subsequent words given a context. These models, the most well-known of which are possibly the GPT-x series [17], excel at generating long, coherent text—sometimes to the extent that much debate has been raised about their potential misuse [18]. Protein autoregressive language models, such as ProGen [19,20,21], RITA [22], and DARK [23], have also been studied and show the potential of autoregressive Transformers for protein design. Motivated by these works and the ever-increasing capabilities of English-speaking models such as the GPT-x series, we wondered whether we could train a generative model to (i) effectively learn the protein language, (ii) generate fit, stable proteins, and (iii) understand how these sequences relate to natural ones, including whether they sample unseen regions of the protein space.

Here, we introduce ProtGPT2, an autoregressive Transformer model with 738 million parameters capable of generating de novo protein sequences in a high-throughput fashion. ProtGPT2 has effectively learned the protein language upon being trained on about 50 million non-annotated sequences spanning the entire protein space. ProtGPT2 generates protein sequences with amino acid and disorder propensities on par with natural ones while being "evolutionarily" distant from the current protein space. Secondary-structure prediction calculates 88% of the sequences to be globular, in line with natural proteins. Representation of the protein space using similarity networks reveals that ProtGPT2 sequences explore 'dark' areas of the protein space by expanding natural superfamilies. The generated sequences show predicted stabilities and dynamic properties akin to their natural counterparts. Since ProtGPT2 has already been pre-trained, it can be used to generate sequences on standard workstations in a matter of seconds or be further fine-tuned on sequence sets of a user's choice to augment specific protein families. The model and datasets are available in the HuggingFace repository [24] at ( ). Since protein design has an enormous potential to solve problems in fields ranging from biomedical to environmental sciences [25,26], we believe that ProtGPT2 is a timely advance towards efficient high-throughput protein engineering and design.
Results

Learning the protein language

The major advances in the NLP field can be partially attributed to the scale-up of unsupervised language models. Unlike supervised learning, which requires the labeling of each data point, self-supervised (often named unsupervised) methods do not require annotated data, thus promoting the use of ever-growing datasets such as Wikipedia or the C4 Corpus 27. Given both the growth of protein sequence databases and the lack of annotation for a significant part of the protein space, protein sequences have become great candidates for unsupervised training 4,10,11 and now offer the opportunity to encode and generate protein sequences. To achieve this goal, we trained a Transformer 7 to produce a model that generates protein sequences. Language models are statistical models that assign probabilities to words and sentences. We are interested in a model that assigns high probability to sentences (W) that are semantically and syntactically correct or, in the case of proteins, fit and functional. Because we are interested in a generative language model, we trained the model using an autoregressive strategy. In autoregressive models, the probability of a particular token or word (w_i) in a sequence depends solely on its context, namely the previous tokens in the sequence. The total probability of a sentence (W) is the product of the individual probabilities for each word (w_i):

$$p(W)=\prod_{i=1}^{n}p\left(w_{i}\mid w_{<i}\right)$$ (1)

We trained the Transformer by minimizing the negative log-likelihood over the entire dataset. More intuitively, the model must learn the relationships between a word w_i, or amino acid, and all the previous ones in the sequence, and must do so for each sequence k in the dataset (D):

$$\mathcal{L}_{\mathrm{CLM}}=-\sum_{k=1}^{D}\sum_{i}\log p_{\theta}\left(w_{i}^{k}\mid w_{<i}^{k}\right)$$ (2)

To learn the protein language, we used UniRef50 (UR50) (version 2021_04), a clustering of UniProt at 50% identity. We chose this dataset versus larger versions of UniParc (such as UR100) as it was previously shown to improve generalization and performance for the ESM Transformers 10. UniRef50's sequences populate the entire protein space, including the dark proteome, regions of the protein space whose structure is not accessible via experimental methods or homology modeling 28,29. For evaluation, we randomly excluded 10% of the dataset sequences; these sequences are not seen by ProtGPT2 during the training process. The final datasets contained 44.9 and 4.9 million sequences for training and evaluation, respectively. We tokenized our dataset using the BPE algorithm 30. The final model is a decoder-only architecture of 36 layers and 738 million parameters. Analogous to the GLUE benchmark 31, a collection of tools that computational linguists use to evaluate language models on different tasks such as question answering or translation, we also developed a series of extrinsic tests to assess the quality of ProtGPT2-generated sequences. The following sections elaborate on how ProtGPT2 generates de novo sequences with properties that resemble the modern protein space.
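To make Eqs. (1) and (2) concrete, the following sketch (our own illustration in plain PyTorch, not the training code released with the paper) scores one tokenized sequence under an autoregressive model by accumulating the conditional log-probabilities of each token given its prefix; summing this quantity over all sequences in a dataset gives the training loss of Eq. (2).

```python
import torch
import torch.nn.functional as F

def sequence_nll(model, token_ids):
    """Negative log-likelihood of one tokenized sequence (one term of Eq. (2)).

    token_ids: LongTensor of shape (1, n); model: any autoregressive LM whose
    forward pass returns logits of shape (1, n, vocab_size), e.g. a HuggingFace
    GPT2LMHeadModel. Both are placeholders for whatever the reader has at hand.
    """
    with torch.no_grad():
        logits = model(token_ids).logits               # unnormalized p(w_i | w_<i)
    log_probs = F.log_softmax(logits[:, :-1], dim=-1)  # predictions for tokens 2..n
    targets = token_ids[:, 1:]                         # the tokens actually observed
    token_ll = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return -token_ll.sum()                             # -sum_i log p(w_i | w_<i)
```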
Statistical sampling of natural amino acid propensities

Autoregressive language generation is based on the assumption that the probability distribution of a sequence can be decomposed into the product of conditional next-word distributions (Eq. 1). However, there is still considerable debate about the best decoding strategy to emit sequences from a model 32. It is not uncommon for well-trained generic language models that perform well in GLUE tasks to generate incoherent gibberish or repetitive text, depending on the sampling procedure 32. We briefly summarize here the most used sampling strategies for language generation that we applied in this study. The greedy search strategy selects the word with the highest probability at each timestep. Although algorithmically simple, the generated sequences are deterministic and soon become repetitive (Fig. 1a). Beam search tries to alleviate this problem by retaining the most probable candidates, although the resulting texts still suffer from repetitiveness and are not as surprising as those from humans, which tend to alternate low and high probability tokens 32 (Fig. 1b). Lastly, random sampling moves away from deterministic sampling by randomly picking a word out of the top-k most probable ones (Fig. 1c, d).

Fig. 1: Examples with different sampling parameters for GPT2-large after the context input 'ten best things to do in Lisbon' (a–d) and ProtGPT2 without context (e–h). While greedy and beam search produce repetitive sentences (a, b) and protein sequences (e, f), sampling generates creative texts, which, however, can be degenerate (c) or fail to sample natural sequence propensities (g) for small values of k. Larger values of k produce quality text (d) and sequences whose propensities match natural ones. Repetitive and degenerate text are shown in blue and orange, respectively.

In a recent study, Holtzman et al. 32 investigated several sampling strategies to find the best parameters for text generation. Inspired by this work, we systematically generated sequences following different sampling strategies and parameters (Fig. 1). To assess which sampling procedure generates the most natural-like sequences, we compared the amino acid propensities of the generated set to those found in natural protein sequences (Methods). As stated by Holtzman et al., we also observe greedy and beam search to produce repetitive, deterministic sequences, while random sampling dramatically improves the generated propensities (Fig. 1). Moreover, we also observe that high values of k are needed to generate sequences that resemble natural ones, i.e., our best results occur in the range of k > 800, and we specifically chose k = 950 in this work (Fig. 1h). As observed with other generative models 33,34, our sampling improves when applying a repetition penalty of 1.2. Consequently, we used these sampling parameters for the rest of this work.

ProtGPT2 sequences encode globular proteins

In order to evaluate ProtGPT2's generated sequences in the context of sequence and structural properties, we created two datasets, one with sequences generated from ProtGPT2 using the previously described inference parameters, and the other with randomly chosen sequences from UR50. Each dataset consists of 10,000 sequences. Since ProtGPT2 was trained in an unsupervised manner, i.e., without including functional annotations, our analyses focus on validating the structural and biochemical properties of ProtGPT2 sequences. We first studied disordered and secondary structural content in the datasets. It has been previously shown that approximately 14% of the proteins found in bacteria and archaea are disordered 28.
To this end, we ran IUPred3 35 to analyze whether the ProtGPT2-generated sequences are more prone to be disordered than a set of natural sequences. Interestingly, our analysis shows a similar number of globular domains among the ProtGPT2-generated sequences (87.59%) and natural sequences (88.40%). Several methods have been reported that detect short intrinsically disordered regions 36. Since our goal is to provide high-level comparisons of globularity and prevalent disorder across datasets, we further performed an analysis of the protein sequences at the amino acid level using IUPred3. Remarkably, our results show a similar distribution of ordered/disordered regions for the two datasets, with 79.71 and 82.59% of ordered amino acids in the ProtGPT2 and natural datasets, respectively (Table 1).

Table 1: Disorder and secondary structure predictions of the natural and ProtGPT2 datasets.

We next investigated whether the similarities in disorder are a consequence of equivalent secondary structure element content. To this end, we computed PSIPRED 37 predictions for the ProtGPT2 and natural sequence datasets. The natural sequences display alpha-helical, beta-sheet, and coil contents of 45.19, 41.87, and 12.93%, respectively. The ProtGPT2 dataset presented percentages of 48.64, 39.70, and 11.66%, respectively. These results indicate that ProtGPT2 generates sequences that resemble globular domains whose secondary structure contents are comparable to those found in the natural space.

ProtGPT2 sequences are similar yet distant to natural ones

Proteins have diversified immensely in the course of evolution via point mutations as well as duplication and recombination. Using sequence comparisons, it is, however, possible to detect similarities between two proteins even when their sequences have significantly diverged. We wondered how related ProtGPT2 sequences are to natural ones. To this end, we utilized HHblits, a sensitive remote homology detection tool that uses profile hidden Markov models to search query sequences against a database 38. We searched for homologs of the 10,000 sequences in ProtGPT2's dataset against the Uniclust30 database 39. For comparison purposes, we also performed the same search with the natural dataset using the same settings. In addition, to analyze how completely random sequences would compare against ProtGPT2 ones, we also crafted a third dataset by randomly concatenating the 25 letters in the vocabulary. Because we want to provide a quantitative comparison of the datasets' relatedness to the modern protein space, we produced identity vs sequence length plots (Fig. 2). In detail, for each sequence, we depict the alignment found in Uniclust30 with the highest identity and length. As a reference point in this sequence identity-length space, we use the HSSP curve 40, a boundary set to define the confidence of protein sequence relatedness. Proteins whose identity falls below this curve, an area known as the "twilight zone", do not necessarily have similar 3D structures and are not necessarily homologous. Since the sequences in the ProtGPT2 and random datasets are not the consequence of protein evolution, we use the curve as a well-known threshold to compare the datasets.

Fig. 2: Pairwise sequence identities vs. alignment length for each of the datasets (a: natural (yellow), b: ProtGPT2 (green), and c: random (red)) as computed with HHblits against the Uniclust30 database.
The lines depicted in red on each plot represent the HSSP curve, which we use as a reference to compare the three datasets 40. Each plot shows a hexbin compartmentalization of the best-scoring identities and their distributions. While the natural (a) and ProtGPT2 (b) sequences show similar percentages below the curve, 93% of the sequences in the random dataset (c) do not have significantly similar sequences in the Uniclust30 database. Natural and ProtGPT2 datasets show significant differences in the high-identity range (n = 10,000 independent sequences/dataset).

When looking at the distribution of hits above and below the curve, we observe that HHblits finds many hits in the Uniclust30 database that are related to the dataset of natural sequences (Fig. 2a). Specifically, out of the 10,000 dataset sequences, 9621 (96.2%) showed identities above the HSSP curve. Similarly, 9295 ProtGPT2-generated sequences (93%) also have counterparts in the Uniclust30 database that align above the HSSP curve (Fig. 2b). Conversely, 93% of the randomly generated sequences fall below this threshold (Fig. 2c). Despite these similar patterns for the natural and ProtGPT2 datasets, the two datasets show differences in their distribution of hits. With a one-standard-deviation range of 31.5–69.7%, the natural dataset has a higher mean identity than the ProtGPT2 set, with a range of 32.9–64.1% (Fig. 2a, b). The differences between the natural and ProtGPT2 sequence distributions are not statistically significant (Kolmogorov–Smirnov test). However, substantial differences between the natural and ProtGPT2 datasets occur in the high-identity range (>90%). Although 365 sequences in the ProtGPT2 dataset have high-identity sequences in Uniclust30, they correspond in all cases to alignments below 15 amino acids, whereas the natural dataset displays 760 sequences over 90% identity with an alignment length in the one-standard-deviation range of 14.8–77.3 amino acids. These results suggest that ProtGPT2 effectively generates sequences that are distantly related to natural ones but are not a consequence of memorization and repetition.

ProtGPT2 generates ordered structures

One of the most important features when designing de novo sequences is their ability to fold into stable ordered structures. We have evaluated the potential fitness of ProtGPT2 sequences in comparison to natural and random sequences in the context of AlphaFold predictions, Rosetta Relax scores, and molecular dynamics (MD) simulations. AlphaFold 41,42 produces a per-residue estimate of its confidence on a scale from 0–100 (pLDDT). This score has been shown to correlate with order 43: low scores (pLDDT < 50) tend to appear in disordered regions, while excellent scores (pLDDT > 90) appear in ordered ones 43. Here we produced five structure predictions per sequence. The mean pLDDT of the ProtGPT2 dataset is 63.2 when taking the best-scoring structure per sequence and 59.6 when averaging across all five predictions per sequence. Moreover, 37% of sequences show pLDDT values over 70, in agreement with other recent studies 23. A representation of all data points is shown in Supplementary Fig. 2a.
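As an aside for readers reproducing this kind of analysis: AlphaFold writes the per-residue pLDDT into the B-factor column of its output models, so dataset-level statistics like those above can be gathered with a short script. The sketch below is our own illustration using Biopython; the file path is a placeholder.

```python
from Bio.PDB import PDBParser
import numpy as np

def mean_plddt(pdb_path):
    # AlphaFold stores per-residue pLDDT in the B-factor column of its PDB output.
    structure = PDBParser(QUIET=True).get_structure("model", pdb_path)
    plddts = [atom.bfactor for atom in structure.get_atoms()
              if atom.get_name() == "CA"]   # one value per residue via C-alpha atoms
    return float(np.mean(plddts))

# e.g. fraction of a dataset above the pLDDT = 70 threshold used in the text:
# scores = [mean_plddt(p) for p in model_paths]; np.mean(np.array(scores) > 70)
```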
Since pLDDT scores are a proxy for structural order, we turned to the natural and random datasets to see how they compare to ProtGPT2 sequences. In agreement with previous works, 66% of the sequences in the natural dataset were predicted with pLDDT values greater than 70 43, giving an average value of 75.3 for the whole dataset (Supplementary Fig. 2b). In contrast, the predictions in the random dataset revealed a mean pLDDT value of 44, with only 7.4% of sequences with pLDDT values over 70 (Supplementary Fig. 2c). To further validate the quality of the model, we performed Rosetta-RelaxBB runs on the three datasets 44. Rosetta Relax performs a Monte Carlo optimization over the Rosetta energy function, which results in different backbone and rotamer conformations. Lower-energy conformers correlate with more relaxed structures 45. The most recent Rosetta Energy Forcefield (REF2015) strongly correlates with experimental variables such as heat capacity, density, and enthalpy 46. This scoring function reflects the thermodynamic stability of one static protein conformation. Here we have performed Rosetta Relax experiments for the 30,000 sequences of the three datasets (Fig. 3a). A broad rule of thumb is that the total score (Rosetta Energy Units, REU) should lie between −1 and −3 per residue 47. We observe such a distribution in the natural and ProtGPT2 datasets, with averages of −1.90 and −1.73 REU/residue, respectively. As expected, the dataset of random sequences showed an average value of 0.13 REU/residue.

Fig. 3: Comparison of Rosetta and molecular dynamics calculations among the three datasets. a Average Rosetta energy units per residue for the three datasets. AlphaFold prediction structures were used as input for the Rosetta RelaxBB protocol. 10,000 structures were run per dataset, one replica per system. b Root mean square deviation (RMSD) distribution for each MD dataset as computed by averaging RMSDs independently for each trajectory, represented as a boxplot. Twelve structures were simulated per dataset, three replicas per system. In both plots, the median is indicated as a black line; boxes depict the interquartile range (IQR), and whiskers represent 1.5 × IQR. Points outside this range are displayed as individual data points.

We further tested whether ProtGPT2 sequences show similar dynamic properties as natural sequences. Proteins are dynamic entities; without their inherent flexibility, they would not be capable of interacting with other biomolecules and performing their functions in the cell 48. To evaluate whether ProtGPT2 sequences show flexibility patterns in the same range as natural proteins, we randomly selected 12 sequences per dataset and ran three replicas of molecular dynamics (MD) simulations of 100 ns each, totaling 108 trajectories and an aggregate time of 10.8 microseconds (Methods). To ensure that the dynamics observed during the simulations were not an artifact of different pLDDT values (and hence possibly different disorder predictions), we made sure that the differences among the dataset pLDDT mean values were not statistically significant (Supplementary Fig. 3). The mean root mean square deviations (RMSDs) of the trajectories in the natural and ProtGPT2 datasets resulted in average values of 2.93 and 3.12 Å, respectively (Fig. 3b). As expected, the random sequences showed significant deviations during the trajectories, with an average of 9.41 Å. While ProtGPT2 sequences showed higher values than the natural ones, the distributions are not significantly different (Mann–Whitney U-test, p value 0.39). These results indicate that ProtGPT2 sequences might have dynamic properties similar to those of proteins found in nature. The complete list of the trajectories' RMSDs is presented in Supplementary Figs. 4, 5.
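The per-trajectory RMSD averaging used above can be sketched as follows. This is our own illustration with MDAnalysis; the authors built and ran the simulations with HTMD and ACEMD (Methods), so the library choice and the topology/trajectory file names here are placeholders.

```python
import MDAnalysis as mda
from MDAnalysis.analysis.rms import RMSD
import numpy as np

def mean_backbone_rmsd(topology, trajectory):
    u = mda.Universe(topology, trajectory)
    analysis = RMSD(u, select="backbone")  # superimposes each frame on the first by default
    analysis.run()
    # results.rmsd has one row per frame: [frame index, time, RMSD in Angstrom]
    return float(np.mean(analysis.results.rmsd[:, 2]))

# One value per replica; the dataset-level numbers quoted above are averages of
# such per-trajectory means over all 36 trajectories of a dataset.
print(mean_backbone_rmsd("protein.pdb", "replica1.xtc"))
```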
ProtGPT2 transcends the boundaries of the current protein space

Several studies have tried to reduce the large dimensionality of protein sequences into a few discernible dimensions for their analysis. Most representation methods consist of (i) hierarchical classifications of protein structures such as the ECOD and CATH databases 49,50, (ii) Cartesian representations 51, and (iii) similarity networks 52,53. We recently represented the structural space in a network that showed proteins as nodes, linked when they have a homologous and structurally similar fragment in common 54, and made the results available in the Fuzzle database 55. The network represented 25,000 domains from the seven major SCOP classes and showed that the modern known protein space has both connected and "island-like" regions. It is implausible that evolution has explored all possible protein sequences 56. Therefore, the challenge has been posed whether we can design proteins that populate unexplored, or dark, regions of the protein space and whether, by doing so, we can design novel topologies and functions 56. Here, we integrated the ProtGPT2 sequences into our network representation of the protein space. To this end, we generated an HMM profile for each SCOPe2.07 and ProtGPT2 sequence, compared them in an all-against-all fashion using HHsearch, and represented the networks with Protlego 57. To avoid specific sequences with several alignments being represented by the same node in the network, we duplicated entries with two non-overlapping alignments, as previously described 54. The network contains 59,612 vertices and 427,378 edges, comprising 1847 components or 'island-like' clusters (Fig. 4). The major component accumulates more than half of the nodes (30,690), a number significantly higher than that observed in a network produced with the same settings but excluding ProtGPT2 sequences (Supplementary Fig. 6), strongly suggesting that ProtGPT2 generates sequences that bridge separate islands in protein space. We select six examples across different areas of the network from topologically different SCOPe classes to showcase ProtGPT2 sequences at the structural level (Fig. 4). In particular, we report an all-β structure (751), two α/β structures (4266, 1068), one membrane protein (4307), an α + β structure (486), and an all-α structure (785). These structures illustrate ProtGPT2's versatility at generating de novo structures. For each case, we searched for the most similar protein structure in the PDB database using FoldSeek 58. ProtGPT2 generates well-folded all-β structures (751, 4307), which, despite recent impressive advances 59, have long remained very challenging 60. ProtGPT2 also produces membrane proteins (4307), which pose a difficult target for protein design due to the challenges of specifying structure within the membrane and the laborious experimental characterizations 61. Besides the generation of natural fold representatives, ProtGPT2 also produces previously unreported topologies. For example, we report protein 4266, whose topology does not match any of the currently reported structures in the PDB, with a low DALI Z-score of 5.4 and an RMSD of 3.0 Å to PDB 5B48 over 67 residues (identity 9%).

Fig. 4: An overview of the protein space and examples of proteins generated by ProtGPT2. Each node represents a sequence. Two nodes are linked when they have an alignment of at least 20 amino acids and 70% HHsearch probability. Colors depict the different SCOPe classes, and ProtGPT2 sequences are shown in white.
As examples, we select proteins of each of the five major SCOP classes: all-β structures (751), α/β (4266 and 1068), membrane protein (4307), α+β (486), and all-α (785). The selected structures are colored according to the class of their most similar hit. The structures were predicted with AlphaFold, and we indicate the code of the most similar structure in the PDB as found by FoldSeek 58, except for protein 4266, where no similar structures were found.

Nevertheless, possibly the most remarkable property of ProtGPT2 sequences is their significant deviation from all previously designed de novo structures, which often feature idealized topologies with loops and minimal structural elements. De novo proteins have the advantage of not carrying any evolutionary history and are thus amenable as a scaffold for virtually any function, but in practice, the lack of embodiments and longer loops hampers the design of crevices, surfaces, and cavities, which are necessary for the interaction with other molecules and the realization of function. ProtGPT2 sequences resemble the complexity of natural proteins, with multifaceted surfaces capable of accommodating interacting molecules and substrates, thus paving the way for functionalization. In Fig. 4, we show structures 486 and 1068, two examples of such complex structures. In particular, 1068 shows a TIM-barrel fold, a topology which to date has met impressive success in de novo design 62,63,64, but whose idealized structure has nevertheless proven challenging to extend via additional secondary elements and longer loops 65,66.

Preserved functional hotspots

Visual inspection of the structural superimposition of the best hits found with FoldSeek revealed several instances where the sidechains of ligand-interacting residues are conserved. Two examples are shown in Fig. 5. The natural structure most similar to sequence 357 (Fig. 5a) corresponds to PDB code 1X0P (chain A), a blue-light sensor domain that binds FAD. When superimposing the structures, we observe that 357 has retained the sidechain binding hotspots, with three residues identical (D169, Q150, and N131) and two different but capable of forming the same interactions, a lysine at position R165 and a histidine at position K127. Sequence 475 (Fig. 5b) is most similar to PDB code 5M1T (chain A), a phosphodiesterase that folds into a TIM-barrel and binds the bacterial second messenger cyclic di-3′,5′-guanosine monophosphate (PDB three-letter code C2E). Out of the five sidechain-interacting residues, the ProtGPT2 sequence preserves three residues (Q455, R473, and E469) and includes one substitution for another residue capable of hydrogen-bonding (aspartic acid for Q513). It is remarkable that ProtGPT2 has generated these sequences in a zero-shot fashion, i.e., without further finetuning on these two particular folds. These results have impactful consequences for protein engineering because ProtGPT2 appears to preserve binding positions in the generated sequences, despite the low identities (31.1 and 29.2% for 357 and 475, respectively), and can be used to augment the repertoires of specific folds and families.

Fig. 5: Superimposition of the predicted structures for sequences 357 and 475 and the respective top-scoring proteins in FoldSeek. a Structural alignment of 357 with PDB 1X0P (chain A, blue). Shown are five residues in 1X0P that interact via their sidechains with the ligand FAD.
Of these, three are identical in 357, and another two correspond to substitutions to the same amino acid type (R165 to lysine and Q150 to histidine). b Structural alignment of 475 with PDB 5M1T (chain A) depicting five sidechain-interacting residues with the ligand C2E. All amino acids in 475 are conserved except for residue R614, which was substituted by a glycine. The PDB structures are shown in color with their sidechains in a thinner representation.

Discussion

The design of de novo proteins harnessing artificial intelligence methods has met with incredible success in the last two years 10,67,68. Motivated by the unprecedented advances in NLP, we have implemented a generative language model, ProtGPT2, which has effectively learned the protein language. ProtGPT2 can generate sequences that are distantly related to natural ones and whose structures resemble the known structural space, with non-idealized complex structures. Since ProtGPT2 has been trained on the entire sequence space, the sequences produced by the model can sample any region, including the dark proteome and areas traditionally regarded as very challenging in the protein design field, such as all-β structures and membrane proteins. Visual superimposition of ProtGPT2 proteins with distantly related natural protein structures reveals that ProtGPT2 has also captured functional determinants, preserving ligand-binding interactions. As the design of artificial proteins can solve many biomedical and environmental problems, we see extraordinary potential in our protein language model. ProtGPT2 designs fit globular proteins in a matter of seconds on a standard workstation without requiring further training. ProtGPT2 can be conditioned towards a particular family, function, or fold by finetuning the model on a set of sequences of a user's choice. In this context, ProtGPT2 will enable screening for proteins with similarities to natural proteins in order to improve, fine-tune, or alter a specific biochemical function of a natural protein. Large-scale screening of ProtGPT2-designed protein libraries might identify proteins with folds not captured in structural databases and functions that have no related counterpart in the natural space. ProtGPT2 constitutes a big step forward towards efficient protein design and generation, and lays the groundwork for future experimental studies exploring the structural and functional parameters of designed proteins, and their subsequent real-world applications. Future efforts include the inclusion of conditional tags, which will enable the controlled generation of specific functions.

Methods

Vocabulary encoding

We use a BPE 30 tokenizer to train the vocabulary of our dataset. BPE is a sub-word tokenization algorithm that finds the most frequently used word roots, ensuring better performance than one-hot tokenization and avoiding the out-of-vocabulary problem. Given the size of Uniref50, we used Swiss-Prot (2021_04), containing >0.5 M sequences, to train our tokenizer. Following the training strategy of GPT2 17, our final vocabulary contained 50,256 tokens that correspond to the most widely reused oligomers in protein space, with an average size of four amino acids per token (Supplementary Fig. 1). Learned positional embeddings were used as in the original GPT2. A sketch of this tokenizer-training step is given below.
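The vocabulary training just described can be reproduced, at least in outline, with the HuggingFace tokenizers library. The snippet below is our own sketch under stated assumptions: the input file of Swiss-Prot sequences (one per line) and the special-token choice are placeholders, while the vocabulary size of 50,256 follows the paper.

```python
from tokenizers import Tokenizer, models, trainers

# Byte-pair encoding over raw amino acid strings; no pre-tokenizer is needed
# because protein sequences contain no whitespace.
tokenizer = Tokenizer(models.BPE())
trainer = trainers.BpeTrainer(
    vocab_size=50256,                     # vocabulary size reported in the paper
    special_tokens=["<|endoftext|>"],     # assumed GPT2-style end-of-text token
)
tokenizer.train(files=["swissprot_2021_04.txt"], trainer=trainer)  # placeholder file
tokenizer.save("protein_bpe.json")
```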
Dataset preparation

We took Uniref50 version 2021_04 as the dataset for training, containing 49,874,565 sequences. Ten percent of the sequences were randomly selected to produce the validation dataset. The final training and validation datasets contained 44.88 and 4.99 million sequences, respectively. We produced two datasets, one using a block size of 512 tokens, and another one with 1024 tokens. The results shown in this work correspond to a model trained with a block size of 512 tokens.

Model pre-training

We use a Transformer decoder model as the architecture for our training, which processes input sequences tokenized with a BPE strategy. During training, the model uses the original scaled dot-product self-attention as introduced by ref. 7. The model consists of 36 layers with a model dimensionality of 1280. The architecture matches that of the previously released GPT2-large Transformer 17, which was downloaded from HuggingFace 24. Model weights were reinitialized prior to training. The model was optimized using Adam (β1 = 0.9, β2 = 0.999) with a learning rate of 1e-03. For our main model, we trained 65,536 tokens per batch (128 GPUs × 512 tokens). A batch size of 8 per device was used, totaling 1024. The model trained on 128 NVIDIA A100s in 4 days. Parallelism of the model was handled with DeepSpeed 69.

Model inference

We systematically sampled sequences from our main model using different inference parameters. In particular, we varied the repetition penalty over the range 1.1 to 3.0 in steps of 0.1, top_k from 250 to 1000 in steps of 50, and top_p from 0.7 to 1.0 in steps of 0.05. 100 sequences were produced for each sampling parameter set, and the frequencies of their amino acids were compared to those of natural sequences. We observed which parameters produced the smallest differences in the set of the seven most common amino acids in natural sequences. We also explored the beam search algorithm for beams in the range 50 to 100 in steps of 1, but it produced worse matches in all cases. To determine amino acid frequencies in natural sequences for comparison to ProtGPT2 samples, we randomly picked 1 million sequences from the Uniref50 dataset. The best-matching parameters were further sampled with finer windows and their frequencies compared with radar plots, as shown in Fig. 1 in the main text. The best performing parameters in our dataset were top_k 950, a repetition penalty of 1.2, and default temperature and top_p values of 1.

Sequence dataset generation

Three sequence datasets were produced to compare their properties. The ProtGPT2 dataset was generated by sampling 1000 batches of 100 sequences, each with the selected inference parameters and a window context of 250 tokens. This step produced 100,000 sequences. We filtered from this set those sequences whose length had been cut due to the window context, giving a total of 29,876 sequences. From this set, we randomly selected 10,000 sequences. Their average length is 149.2 ± 50.9 amino acids. The natural dataset was created by randomly sampling 100,000 sequences from Uniref50. 10,000 of these sequences were further chosen to ensure their average and standard deviation lengths matched those of the ProtGPT2 dataset sequences. The random dataset was created from the 25 amino acid letters that appear in UniRef50, which include the 20 standard amino acids and other IUPAC codes such as "X", "B", "U", "O", and "Z", by randomly concatenating them into sequences with lengths taken from a normal distribution between 5 and 267 amino acids (see the sketch below).
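That random-control recipe can be written down in a few lines. The sketch below is our own reading of it: the 5–267 range is stated in the text, but the mean and spread of the length distribution are illustrative assumptions, since the exact parameters are not given.

```python
import numpy as np

# The 20 standard amino acids plus the extra IUPAC codes present in UniRef50.
ALPHABET = list("ACDEFGHIKLMNPQRSTVWY") + ["X", "B", "U", "O", "Z"]
rng = np.random.default_rng(0)

def random_sequence():
    # Length from a normal distribution clipped to the stated 5-267 range;
    # loc and scale are placeholders chosen to roughly span that interval.
    length = int(np.clip(rng.normal(loc=136, scale=44), 5, 267))
    return "".join(rng.choice(ALPHABET, size=length))

random_dataset = [random_sequence() for _ in range(10_000)]
```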
Homology detection

Each sequence in the three 10k datasets was searched for similarity against the PDB70 and Uniclust30 databases using HHblits 70. We used the Uniclust30 database version 2018_08 and the PDB70 version 2021_04. As HHblits produces a list of alignments, we selected all those over the HSSP curve as possible matches, and from these, selected the largest alignment. Thus, for each sequence in each dataset, the longest and highest-identity-scoring alignment was selected and represented in Fig. 2.

Disorder prediction

IUPred3 was run on the ProtGPT2 and natural datasets using all three possible options to detect shorter ("short") or longer ("long") unstructured regions, as well as structured regions ("glob") 35. Ordered content was determined with the "short" option. The output of the "glob" analysis also reports whether any structured, globular domain was found, as shown in Table 1. We ran secondary structure prediction using PSIPRED v4.0 for each sequence in the natural and ProtGPT2 datasets 37. The alignments of the abovementioned HHblits searches were used as multiple sequence alignments. We computed the percentages for each secondary structure element by dividing the number of amino acids with a certain prediction by the total number of amino acids with a confidence value of 5 or more.

AlphaFold2 structure prediction

We predicted five structures for each sequence in the ProtGPT2 dataset using AlphaFold ColabFold batch v1.2 41.

Network construction

Sequences in the ProtGPT2 dataset and the SCOP 2.07 set filtered at 95% identity were joined. For each sequence, we produced a multiple sequence alignment (MSA) using HHblits against the database Uniclust 2018_08. Hidden Markov model profiles were produced for each MSA using HHblits 70, and an all-against-all search for each profile was performed using HHsearch 38. The network was constructed by representing every sequence as a node and linking two nodes whenever they have an alignment of at least 20 amino acids with 70% HHsearch probability. Extensive details on the all-against-all comparison and network construction, and tools to generate the networks, can be found in our previous works Fuzzle 54,55 and Protlego 57. Detection of similar topologies was determined with FoldSeek 58.

Molecular dynamics simulations

Simulation systems were built and run with the software HTMD 71. In all cases, systems comprised solvated all-atom cubic boxes. Simulation boxes consisted of a protein centered at the origin of coordinates; explicit solvent molecules and neutralizing NaCl ions were added to each box. The Amber 19SB forcefield was used 72. Three replicas were constructed per sequence. All systems were minimized, equilibrated, and run with ACEMD 73 using default parameters: each system was minimized and relaxed under NPT conditions for 1 ns at 1 atm and 300 K using a time-step of 4 fs, rigid bonds, a cutoff of 9 Å, and PME for long-range electrostatics. Heavy protein and ligand atoms were constrained by a 10 kcal/mol/Å² spring constant. Production simulations were run in the NVT ensemble using a Langevin thermostat with a damping of 0.1 ps⁻¹ and a hydrogen mass repartitioning scheme to achieve timesteps of 4 fs 74.

Rosetta calculations

Rosetta Relax runs were produced with the Rosetta Software Suite v3.12 44 using as input structure the best-scoring prediction from AlphaFold.

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

The model weights are publicly available in the HuggingFace repository: and Zenodo: [ ]. The dataset for training is available at: .
The three sequence datasets in this work are available at: . The AlphaFold predictions for the three datasets are available at . The Uniref50 original database version 21_04 is available at . The Uniclust30 database version 2018_08 is available at .

Code availability

The model was trained with the HuggingFace transformers Trainer version 4.14.1. The code and documentation are available here: .

SciNews | Biology | Noelia Ferruz et al, ProtGPT2 is a deep unsupervised language model for protein design, Nature Communications (2022). DOI: 10.1038/s41467-022-32007-7; Noelia Ferruz et al, Controllable protein design with language models, Nature Machine Intelligence (2022). DOI: 10.1038/s42256-022-00499-z. Journal information: Nature Machine Intelligence, Nature Communications | https://dx.doi.org/10.1038/s41467-022-32007-7 | https://phys.org/news/2022-08-proteins-natural-language-artificial-intelligence.html
Artificial intelligence (AI) has created new possibilities for designing tailor-made proteins to solve everything from medical to ecological problems. A research team at the University of Bayreuth led by Prof. Dr. Birte Höcker has now successfully applied a computer-based natural language processing model to protein research. Completely independently, the ProtGPT2 model designs new proteins that are capable of stable folding and could take over defined functions in larger molecular contexts. The model and its potential are detailed scientifically in Nature Communications. Natural languages and proteins are actually similar in structure. Amino acids arrange themselves in a multitude of combinations to form structures that have specific functions in the living organism—similar to the way words form sentences in different combinations that express certain facts. In recent years, numerous approaches have therefore been developed to use principles and processes that control the computer-assisted processing of natural language in protein research. "Natural language processing has made extraordinary progress thanks to new AI technologies. Today, models of language processing enable machines not only to understand meaningful sentences but also to generate them themselves. Such a model was the starting point of our research. With detailed information concerning about 50 million sequences of natural proteins, my colleague Noelia Ferruz trained the model and enabled it to generate protein sequences on its own. It now understands the language of proteins and can use it creatively. We have found that these creative designs follow the basic principles of natural proteins," says Prof. Dr. Birte Höcker, Head of the Protein Design Group at the University of Bayreuth. The language processing model transferred to protein evolution is called ProtGPT2. It can now be used to design proteins that adopt stable structures through folding and are permanently functional in this state. In addition, the Bayreuth biochemists have found out, through complex investigations, that the model can even create proteins that do not occur in nature and have possibly never existed in the history of evolution. These findings shed light on the immeasurable world of possible proteins and open a door to designing them in novel and unexplored ways. There is a further advantage: Most proteins that have been designed de novo so far have idealized structures. Before such structures can have a potential application, they usually must pass through an elaborate functionalization process—for example by inserting extensions and cavities—so that they can interact with their environment and take on precisely defined functions in larger system contexts. ProtGPT2, on the other hand, generates proteins that have such differentiated structures innately, and are thus already operational in their respective environments. "Our new model is another impressive demonstration of the systemic affinity of protein design and natural language processing. Artificial intelligence opens up highly interesting and promising possibilities to use methods of language processing for the production of customized proteins. At the University of Bayreuth, we hope to contribute in this way to developing innovative solutions for biomedical, pharmaceutical, and ecological problems," says Prof. Dr. Birte Höcker. |
10.1038/s41467-022-28783-x | How the mechanism of photoionization can provide insights into complex molecular potentials | How can researchers use the mechanism of photoionization to gain insight into complex molecular potential? This question has now been answered by a team led by Prof. Dr. Giuseppe Sansone from the Institute of Physics at the University of Freiburg. The researchers from Freiburg, the Max Planck Institute for Nuclear Physics in Heidelberg and groups at the Universidad Autonoma in Madrid/Spain and the University of Trieste/Italy have published their results in the journal Nature Communications. In the origin of photoionization, also called the photoelectric effect, an atom or molecule absorbs one quantum of light, usually indicated as photon, from an external field. The energy absorbed in this process is transferred to an electron, which is freed, leaving behind a singly charged ion. In several aspects and for several applications, the effect can be regarded as instantaneous, meaning that there is no significant time delay between the absorption of the photon and the instant when the electron is emitted. However, several experiments conducted in the last years have evidenced that tiny, but measurable delays lying in the attosecond range (1 as = 10-18 s) occur between these two processes. Generation of attosecond pulses "Thanks to the advanced laser sources and specially designed spectrometers available in our laboratory, we can generate the shortest bursts of light, lasting only few hundreds of attoseconds," Sansone explains. "Moreover, we can reconstruct the orientation of simple molecules when they absorb a photon from an external laser pulse. We have used such pulses to investigate the motion of the electrons after the absorption of a photon." Electrons experience paths with potential peaks and valleys The researchers found that on its way out from the molecule, the electron experiences a complex landscape characterized by potential peaks and valleys. These are determined by the spatial distribution of the atoms composing the system. The path followed by the electron during its motion can affect the time it takes to be freed. Extension to more complex molecular systems possible In the experiment, the team measured the time delays accumulated by the electrons emitted from CF4 molecules in different spatial directions were measured using an attosecond pulse train combined with an ultrashort infrared field. "Combining this information with the characterization of the spatial orientation of the molecule, we can understand how the potential landscape and, in particular, potential peaks affect the time delay," says the Freiburg physicist. The work can be extended to more complex molecular systems and to potentials changing on ultrashort timescales. In general, Sansone emphasizes, this approach could give the possibility to map complex potential landscapes from within, with unprecedented temporal resolution. | Researchers led by Prof. Dr. Giuseppe Sansone from the University of Freiburg have used the mechanism of photoionization to gain insight into complex molecular potential. By generating attosecond pulses and reconstructing the orientation of simple molecules, the team found that electrons experience a complex landscape of potential peaks and valleys as they move out of the molecule. 
The path followed by the electron affects the time it takes to be freed, and by measuring the time delays accumulated by electrons emitted from CF4 molecules in different spatial directions, the researchers can understand how the potential landscape and peaks affect the time delay. This approach can be extended to more complex molecular systems and could provide a way to map complex potential landscapes with unprecedented temporal resolution. | None | Abstract Photoionisation time delays carry structural and dynamical information on the target system, including electronic correlation effects in atoms and molecules and electron transport properties at interfaces. In molecules, the electrostatic potential experienced by an outgoing electron depends on the emission direction, which should thus lead to anisotropic time delays. To isolate this effect, information on the orientation of the molecule at the photoionisation instant is required. Here we show how attosecond time delays reflect the anisotropic molecular potential landscape in CF 4 molecules. The variations in the measured delays can be directly related to the different heights of the potential barriers that the outgoing electrons see in the vicinity of shape resonances. Our results indicate the possibility to investigate the spatial characteristics of the molecular potential by mapping attosecond photoionisation time delays in the recoil-frame. Introduction Molecular systems are characterised by complex potential landscapes determined by their chemical composition and by the spatial arrangement of their constituents. In general, the electronic potential presents a non-spherical shape, which plays a key role in the stereo-dynamics of atom-molecule collisions 1 and molecule–molecule interactions 2 . As explained in textbooks, the effect and the spatial gradient of a potential can be unveiled by monitoring the motion of a probe charge immersed in that potential 3 . In atoms and molecules, this charge can be one of the electrons contained in the system, which must absorb enough energy from an external source to overcome the ionisation potential and to acquire the necessary kinetic energy to explore the potential landscape, while staying long enough in the molecular surroundings to sample its relevant features. This is ideally possible by using ultraviolet radiation, i.e. photon energies of a few tens eV. In a classical picture, an electron with 10 eV energy takes about 53 as to travel through the typical molecular extension of 1 Å. The extremely short timescale of this motion calls for the application of attosecond pulses, which can efficiently generate photoelectron wave packets and provide the necessary time resolution 4 , 5 , 6 , 7 . The dynamics of photoionising wave packets is usually investigated by means of pump-probe experiments, in which an isolated or a train of attosecond pulses in the extreme ultraviolet (XUV) range set the photoelectron wave packet free and a synchronised infra-red (IR) pulse probes the instant at which the electron enters the continuum 8 . Using this approach, the role of electronic correlation effects in the photoionisation of atoms has been investigated in real-time 9 , 10 , 11 . Attosecond time delays have also been reported in photoionisation in molecular systems, showing the relevance of nuclear motion in hydrogen 12 and the role played by shape resonances in N 2 O 13 and nitrogen 14 , 15 . 
Moreover, the role of the localisation of the initial wave function 16 and of a functional molecular group 17 has also been demonstrated. In atomic systems, the photoionisation time delays are usually decomposed into a term specific of the atomic potential (usually indicated as Eisenbud–Wigner–Smith delay 18 ) and a measurement-induced contribution due to the action of the IR probe pulse on the photoelectron wave packet moving in the long-range Coulomb potential 19 , 20 . While in atoms the influence of the latter term can be usually quantified through simple formulas independent of the specific target 20 and its angular dependence has been characterised 21 , in molecules the effect of the IR field on the measured time delays has not been characterised yet. In general, in the case of molecular systems, the contributions of the two terms cannot be disentangled 22 , which requires a more involved analysis. A fundamental prerequisite for the characterisation of the combined effect of the anisotropic molecular landscape and of the IR field is to have access to the orientation of the molecule at the photoionisation instant. This can be done by measuring the emission direction(s) of ionic fragment(s) after the interaction with the XUV radiation, which defines the recoil frame. Symmetric molecules consisting of only a few atoms are ideal to test these effects in this frame. On the one hand, small molecules present a limited number of photoionisation and photofragmentation pathways, making feasible the identification of the electronic level of the outgoing photoelectron and, under suitable conditions, the determination of the molecular orientation during the interaction with the ionising radiation. On the other hand, the symmetry of the molecule gives the opportunity to identify specific privileged directions in the recoil frame and to characterise the effect of the molecular potential along them. In this work we investigate the photoionisation dynamics induced by a train of attosecond XUV pulses on CF 4 molecules by means of photoelectron-photoion coincidence spectroscopy 23 . The advantage of this approach is the possibility to derive information on the molecular orientation at the instant of photoionisation by measuring in coincidence the momenta of the emitted electron and the fragment ions resulting from the ulterior dissociation of the molecular cation 24 . In this way, we have been able to unambiguously identify individual ionisation channels and obtain time-resolved recoil-frame photoelectron angular distributions (RFPADs) from which the variations of photoionisation delays with the electron emission direction have been extracted. The measured delays are in very good agreement with those obtained from calculated time-resolved RFPADs where all the transitions induced by the attosecond pulse train and the IR probe, in particular those induced in the continuum by the IR pulse, are taken into account in a time-dependent formalism. The agreement confirms the validity of our experimental approach and opens the route to orientation-specific exploration and understanding of molecular photoionisation delays. Results Recoil frame and XUV spectroscopy of CF 4 We focus on photoelectrons emitted from the Highest-Occupied Molecular Orbital of CF 4 , which is triply degenerate and belongs to the irreducible representation T 1 of the point group T d (see Fig. 1 a). 
The xyz system represents the molecular frame, where one fluorine atom (1) is positioned along the negative z-axis and a second fluorine atom (2) is contained in the xz plane (the carbon atom occupies the centre). In the molecular frame, the common direction of the polarisation vectors of the collinear XUV and IR fields (indicated as a magenta arrow in Fig. 1a) is identified by the angles β (polar angle) and α (azimuthal angle).

Fig. 1: Molecular and recoil frames, and averaged RABBITT traces for the parallel and perpendicular cases. a CF4 molecule and definition of the different quantities in the molecular frame. The direction of emission of the CF3+ ion defines the z-axis (blue arrow), which identifies the recoil frame in this experiment. The x-axis is contained in the plane identified by two fluorine atoms (1 and 2 in the figure). The orientation of the polarisation vector of the electric field (magenta arrow) is defined in the molecular frame by the angles α and β. The emission direction of the photoelectron in the molecular frame (cyan arrow) is defined by the angles θ and φ. b, c RABBITT traces obtained for the parallel (0° ≤ β ≤ 45° and 135° ≤ β ≤ 180°) (b) and perpendicular (60° ≤ β ≤ 120°) (c) configurations. The angles α and φ cannot be determined in our measurements.

Photoelectrons and photoions emitted after the interaction with an attosecond pulse train, generated in argon and krypton, and a synchronised IR field were measured using a Reaction Microscope 25. This approach, usually called RABBITT (reconstruction of attosecond beating by two-photon transitions) 8, allows one to combine temporal and spectral resolution, which turns out to be advantageous for identifying the different fragmentation channels or yields involved in the experiment. The setup used in the measurements is described in ref. 26. For XUV photon energies in the range 20–46 eV, photoionisation of neutral CF4 into five different cationic states (X 2T1, A 2T2, B 2E, C 2T2, D 2A1) is energetically possible 27. While the first three predominantly lead to the formation of CF3+ ions, the last two lead to the formation of CF2+ ions (see Supplementary Figs. 1 and 2 and Supplementary Table 1). The parent molecular ion CF4+ was not observed, in agreement with previous spectroscopic measurements 28. Additional correlation between the kinetic energy release (KER) of the ions and the photoelectrons makes it possible to isolate the contribution of the photoelectrons associated with the X 2T1 state from those associated with the A 2T2 and B 2E states (see Supplementary Figs. 3 and 4). Photoionisation from the ground state (leading to cations in the X 2T1 state) results in fast dissociation and emission of CF3+ fragments with 100% probability 29, giving access to the relative orientation of the polarisation direction of the external fields with respect to the polar angle β (the azimuthal angle α cannot be determined in our measurements) at the instant of photoionisation using the recoil approximation 30. The validity of the approximation is further confirmed by the good agreement between the measured photoelectron angular distributions (PADs) for this state and those calculated by assuming a fixed-nuclei configuration (see below). Because the position of the fluorine atoms in the tripod is not determined in the experiment, the measured PADs are plotted as a function of the θ angle, i.e.
with respect to the z-axis (see Fig. 1), which defines the recoil frame, and are therefore integrated over the φ angle.

RABBITT measurements

The photoelectron spectra measured as a function of the delay between the XUV and IR pulses for specific molecular orientations with respect to the light polarisation vector of the collinear fields are presented in Fig. 1b, c. The spectra are thus retrieved by capturing only photoelectrons in coincidence with the momentum of the dissociating CF3+ ion being parallel (0° ≤ β ≤ 45° and 135° ≤ β ≤ 180°) or perpendicular (60° ≤ β ≤ 120°) with respect to the light polarisation vector. The signal is therefore obtained by integrating over all possible photoelectron emission directions, but only for the specific molecular orientations with respect to the light indicated in the right-side insets of Fig. 1. The excellent quality of the traces offers the opportunity to investigate the oscillation of the sidebands for different photoemission directions θ in the recoil frame. The attosecond time delays determined for different recoil-frame emission angles θ are reported in Fig. 2 for the parallel (panels a–c) and perpendicular (panels d–f) cases, respectively. The photoemission time delays averaged over the angle θ for the parallel and perpendicular cases were estimated from Fig. 1b, c and subtracted from the angular-resolved delays to remove the effect of the attosecond chirp (see also Supplementary Fig. 5).

Fig. 2: Experimental and simulated attosecond time delays for the parallel and perpendicular cases. Experimental (black points and black line) and theoretical (red lines) attosecond time delays as a function of the photoelectron emission angle θ along the recoil axis for the parallel (a–c) and perpendicular (d–f) configurations for the sidebands 16 (a, d), 18 (b, e), and 20 (c, f). The shaded areas in a–c indicate the three angle intervals θA, θB, and θC. The error bars were obtained by weighting the photoionisation delays with the root-mean-square phase noise over an integrated electron kinetic-energy range of 1.2 eV centred around the maximum of each sideband 13. The complete data set for all sidebands is presented in Supplementary Figs. 9 and 10.

Theoretical modelling

The theoretical results obtained by solving the TDSE within the static-exchange DFT method described in refs. 15,31, with laser parameters that reproduce the experimental conditions (see details in the SI), are also shown in Fig. 2. We checked that the simulations reproduce the photoelectron spectrum and the main features of the RFPADs generated by the XUV pulses for the parallel and perpendicular cases (see Supplementary Figs. 6–8). Considering the same fitting procedure as in the experiment, we have retrieved the time delays from the calculated RABBITT spectra for the parallel and perpendicular molecular orientations.
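For readers unfamiliar with this fitting procedure: in a RABBITT measurement each sideband yield oscillates at twice the IR frequency as a function of the XUV-IR delay, and the delay is extracted from the fitted phase. The snippet below is a generic sketch of such a fit on synthetic data (our own illustration, not the authors' analysis code; the IR frequency assumes an 800 nm driving field).

```python
import numpy as np
from scipy.optimize import curve_fit

OMEGA_IR = 2.36e15  # angular frequency of an 800 nm field, rad/s (assumed driver)

def sideband(t, A, B, phi):
    # Sideband yield vs XUV-IR delay: S(t) = A + B*cos(2*omega_IR*t - phi)
    return A + B * np.cos(2 * OMEGA_IR * t - phi)

def fitted_delay(delays, yields):
    p0 = [yields.mean(), np.ptp(yields) / 2, 0.0]
    (A, B, phi), _ = curve_fit(sideband, delays, yields, p0=p0)
    return phi / (2 * OMEGA_IR)  # phase-to-time conversion, tau = phi / (2*omega_IR)

# Synthetic sideband carrying a 100-as delay, plus noise:
t = np.linspace(0.0, 8e-15, 160)
y = sideband(t, 1.0, 0.3, 2 * OMEGA_IR * 100e-18) + 0.01 * np.random.randn(t.size)
print(f"retrieved delay: {fitted_delay(t, y) / 1e-18:.1f} as")
```

Angle-resolved delays like those in Fig. 2 come from repeating such a fit for photoelectrons binned by emission angle θ, and differences of fitted phases between two angular bins directly give relative delays such as the ΔτA−C discussed below.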
The theoretical data reproduce these trends, with the minimum at 90° for the parallel configuration and a smooth evolution as a function of the photoelectron emission angle for the perpendicular configuration. For the parallel case (left panels in Fig. 2), the minimum observed around θ = 90° can be attributed to the interaction with the IR field (see Supplementary Figs. 11–13). In Fig. 2a–c, three different regions are highlighted, corresponding to the intervals θ_A = [0°–29°], θ_B = [60°–75.5°] and θ_C = [151°–180°], centred around the directions A (θ = 15°), B (θ = 68°) and C (θ = 165°), respectively. Photoelectrons leaving the molecule along these three directions experience significant differences in the molecular landscape, as shown in Fig. 3a, which reports the molecular potential resulting from the simulations and the three escape directions A, B and C (indicated with dot-dashed lines). For better visualisation, two-dimensional cuts of the potential are plotted in each plane. The nearly opposite directions A and C correspond to photoelectrons emitted (almost) parallel or antiparallel to the ion emission direction. For these directions, the molecular potentials exhibit barriers with quite different heights, as shown in Fig. 3b. In contrast, the direction B corresponds to photoelectrons escaping the molecule along a direction characterised by a barrier whose height is close to that along the C direction. The photoionisation cross-sections along the three directions are also reported in Fig. 3c, indicating the existence of shape resonances in the photon-energy range ≈28–33 eV for all three directions.

Fig. 3: Molecular potential and photoionisation cross-sections. (a) Two-dimensional cuts of the molecular potential along the xy, xz and yz planes. (b) Molecular potential along the three directions indicated in a. (c) Photoionisation cross-sections along the three directions indicated in a.

The anisotropic molecular potential influences the photoionisation dynamics, as demonstrated in Fig. 4a,b, which present the differences of the time delays measured along the directions A and C (Δτ_A−C = τ_A − τ_C) and B and C (Δτ_B−C = τ_B − τ_C), respectively. The experimental values (red squares) were determined by integrating the photoelectrons emitted in the parallel configuration over the emission angles θ in the intervals θ_A = [0°–29°], θ_B = [60°–75.5°] and θ_C = [151°–180°] along the directions A, B and C, respectively. The theoretical values (black circles) were obtained from the calculated RABBITT spectra in the same way. The delay difference Δτ_A−C presents a minimum, in the experimental as well as the theoretical data, around 34 eV photon energy, which approximately matches the maximum of the shape resonance for the direction C observed in Fig. 3c. For photon energies beyond the shape-resonance regions of both directions, the absolute value of the difference Δτ_A−C becomes smaller.

Fig. 4: Influence of the anisotropic molecular potential on attosecond time delays. Difference of attosecond time delays between the emission directions A (θ = 15°) and C (θ = 165°) (a) and B (θ = 68°) and C (b) for the experiment (red squares) and extracted from the simulations (black circles). The RABBITT spectra were previously averaged over the intervals of emission angles θ_A, θ_B and θ_C for the emission directions A, B and C, respectively.
The blue triangles correspond to the differences of the stereo Wigner time delays between the directions A and C (a) and B and C (b), averaged over the corresponding emission-angle intervals. The error bars were derived by error propagation from those of the single directions.

The experimental points are in good agreement with the theoretical curve (except for the point corresponding to sideband 20). The stereo Wigner time delay [22, 32, 33], obtained from the one-photon (XUV) dipole matrix elements and integrated over the corresponding angle intervals, is also reported (blue triangles) and indicates a minimum in the same energy region. The deviations between the differences of the stereo Wigner time delays and the corresponding time delays obtained from the two-colour simulations support the conclusion that the delay in photoionisation of molecules cannot, in general, be decomposed into the sum of a contribution due to the Wigner delay and one associated with the continuum–continuum delay [22]. The latter, indeed, should cancel out when subtracting the delays estimated along the directions A and C. In any case, the overall evolution of the delay is qualitatively similar, suggesting the relevance of the differences in the position of the shape resonances along the directions A and C in the photoionisation process. Unlike the previous case, the experimental data for the delay difference Δτ_B−C (red squares) do not show any significant variation of the delay as a function of the photon energy, as expected. The theoretical values (black circles) are in good agreement with the experimental data, indicating only a moderate linear increase of the delay. The absence of any remarkable variation of the delay across the sidebands for these two directions can be attributed to the similar heights of the trapping potentials, as shown in Fig. 3b. Under this condition, the evolution of the stereo Wigner time delays (blue triangles) is close to the experimental one.

We have demonstrated that the anisotropic molecular landscape affects the stereo-photoemission time delays. In particular, different heights of the trapping potentials introduce different delays in the emission of the photoelectron wave packet into the continuum. Our results indicate the importance of coincidence spectroscopy for disentangling the information on the photoemission process from different electronic states and for different molecular orientations. Extension of this approach would also be beneficial for larger and more complex molecular systems.

Methods

Experimental setup

The experiment was performed using a 10-kHz Ti:sapphire laser system providing 30-fs pulses centred at 800 nm with a pulse energy of 1 mJ. The input pulse first passes through a 1-mm-thick glass plate with a 3-mm-diameter hole at the centre, splitting the beam into two parts (annular and central), which are spatially separated and temporally delayed. The input beam is then focused by a 25-cm-focal-length parabolic mirror into a high-harmonic gas cell filled with argon (Ar) or krypton (Kr) to generate XUV photons (20–46 eV) at odd multiples of the input pulse frequency. In our work, the XUV photons were generated by the annular beam. By using an iris before the gas cell and blocking the annular beam, we verified that the central beam, after being focused into the gas cell, does not contribute to the XUV emission.
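As a quick consistency check on the quoted numbers, the photon energies of the odd harmonics of an 800-nm driver can be tabulated directly, since each sideband q sits at q times the IR photon energy, between harmonics q − 1 and q + 1. The 1.55-eV photon energy used below is the usual approximation for 800 nm and is an illustrative assumption, not a value stated in the paper.

```python
PHOTON_EV = 1.55  # approximate photon energy of an 800-nm IR field

# Odd harmonics H13-H29 span the stated 20-46 eV XUV range
harmonics = {n: round(n * PHOTON_EV, 1) for n in range(13, 31, 2)}
print(harmonics)   # from ~20 eV (H13) up to ~45 eV (H29)

# Sidebands analysed in Fig. 2 appear at even multiples of the photon energy
sidebands = {q: round(q * PHOTON_EV, 1) for q in (16, 18, 20)}
print(sidebands)   # {16: 24.8, 18: 27.9, 20: 31.0}
```

On this estimate, sideband 20 falls inside (and sideband 18 just below) the ≈28–33 eV window where the calculated cross-sections of Fig. 3c show shape resonances, consistent with the strong angle dependence observed there.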
After the gas cell, a second 1-mm-thick glass plate with a 1-mm-diameter hole, centred with respect to the first plate, is placed to synchronise the XUV and IR probe pulses. The IR probe pulse is the part of the input beam that passes through the hole of the first plate and through the glass of the second. The delay between the XUV and IR probe pulses is varied precisely by tilting the second, drilled plate [26]. For the XUV-only measurements, an aluminium filter is introduced along the beam propagation direction after the high-order harmonic generation cell in order to block the co-propagating IR pulse. The XUV and probe pulses are then refocused by a toroidal mirror into the interaction region inside a 3D momentum imaging spectrometer, leading to dissociative photoionisation of CF₄ at an ion count rate of about 0.1 per XUV pulse. The resulting ions and electrons are guided by a homogeneous electric field (|E| ≈ 313 V/m) and a weak magnetic field (|B| ≈ 9.4 G) towards time- and position-sensitive detectors located at opposite ends of the spectrometer.

Theoretical methods

In brief, theoretical RABBITT spectra were obtained by solving the time-dependent Schrödinger equation (TDSE) within the static-exchange approximation [15, 31]. In this method, the time-dependent wave function is expanded in a basis set of N-electron stationary wave functions, built from antisymmetrised products of an (N − 1)-electron wave function for the bound electrons and a one-electron continuum wave function for the electron ejected into the continuum. The (N − 1)-electron wave functions are built from B-spline representations of the lowest Kohn–Sham (KS) orbitals resulting from the diagonalisation of the KS Hamiltonian of density functional theory, excluding the orbital from which the electron is ejected to the continuum, and the one-electron continuum wave functions are obtained from an inverse iterative procedure in the former KS orbital basis for each photoelectron energy. All dipole matrix elements describing the transitions between all bound and continuum states, as well as between continuum states, have been included in the solution of the TDSE, which is essential to describe the bound–continuum and continuum–continuum transitions leading to the RABBITT spectra. All simulations were restricted to the equilibrium geometry (fixed-nuclei approximation). More details can be found in the Supplementary Information.

Data availability

The data that support the findings of this study are available on reasonable request from the corresponding author G.S. ([email protected]). Requests for data will be processed within one week. The data are not publicly available because further analysis of the findings is being conducted by the authors of this manuscript.

Code availability

Analysis codes used in this study are available on reasonable request from the corresponding author G.S. ([email protected]). Requests for codes will be processed within one week.

Citation: H. Ahmadi et al., Attosecond photoionisation time delays reveal the anisotropy of the molecular potential in the recoil frame, Nature Communications (2022). DOI: 10.1038/s41467-022-28783-x
Paper: http://dx.doi.org/10.1038/s41467-022-28783-x
News: https://phys.org/news/2022-03-mechanism-photoionization-insights-complex-molecular.html
Diabetes patients at higher risk of deadly liver disease, finds study of 18 million people (DOI: 10.1186/s12916-019-1321-x)

Many patients with potentially deadly liver cirrhosis and liver cancer are being diagnosed at late, advanced stages of disease, according to a study led by Queen Mary University of London and the University of Glasgow. The study of 18 million people across Europe also suggests that people living with type 2 diabetes are at particular risk of this 'silent disease' and should be monitored closely to prevent life-threatening disease progression.

Non-alcoholic fatty liver disease (NAFLD) affects up to a quarter of people in the West and is the most common cause of liver disease around the world. It is closely associated with obesity and type 2 diabetes, and its rise mirrors the social problems of poor diets and sedentary lifestyles. GPs are often unaware of the condition and patients often go undiagnosed. For the majority, NAFLD is a benign condition, but one in six people will go on to develop the aggressive form of the disease, called non-alcoholic steatohepatitis (NASH), leading to liver injury, scarring and eventually, in some, to cirrhosis, liver failure and even liver cancer. By identifying which patients might go on to develop the more aggressive disease, interventions and treatments could be targeted to those at greatest need.

In the largest study of its kind, published in the journal BMC Medicine, the team combined the healthcare records of 18 million European adults from the UK, the Netherlands, Italy and Spain. They matched each NAFLD patient to 100 patients who did not have a recorded diagnosis and looked to see who developed liver cirrhosis and liver cancer over time.

Lead researcher Dr. William Alazawi from Queen Mary University of London said: "We were surprised that the number of patients with recorded diagnoses of non-alcoholic fatty liver was much less than expected, meaning that many patients are actually undiagnosed in primary care. Even over the short time frame of the study, some patients progressed to more advanced, life-threatening stages of disease, suggesting that they are being diagnosed very late.

"The public, doctors and policy makers need to be aware of this silent disease, and strategies need to be put in place to tackle the root causes and avoid progression to life-threatening stages.

"People living with diabetes are at increased risk of more advanced, life-threatening stages of disease, suggesting that we should be focusing our efforts on educating and preventing liver disease in diabetes patients."

Naveed Sattar from the University of Glasgow added: "Doctors treating patients with diabetes already have a lot to check on—eyes, kidneys, heart risks—but these results remind us that we should not neglect the liver, nor forget to consider the possibility of NASH. They also remind us that perhaps more efforts are needed to help our patients with diabetes lose weight and cut alcohol."

More than 136,000 patients were identified with NAFLD/NASH; they were more likely to have type 2 diabetes, hypertension and obesity than matched controls. The strongest association was observed in NAFLD/NASH patients who had a diagnosis of type 2 diabetes—they were more than twice as likely to develop aggressive liver disease. This suggests that diabetes could be a good predictor of liver disease progression.
Looking at particular types of advanced liver disease, NAFLD/NASH patients were almost five times more likely to be diagnosed with cirrhosis and more than three and a half times more likely to be diagnosed with liver cancer. The study also found that NAFLD/NASH patients acquired diagnoses of life-threatening liver disease within a relatively short time (around 3.3 years). The researchers say that it is not feasible that this reflects true rates of disease progression: the acquisition of a new diagnosis in the healthcare record does not necessarily mean that disease progression has occurred at that time, nor that the advanced disease did not exist at the time of the initial diagnosis. This suggests that patients in Europe are being diagnosed at the later stages of disease, which are associated with greater risk of liver-related mortality. The results also suggest that primary care records under-estimate disease severity and that some patients with NAFLD diagnoses actually have advanced cirrhosis already. The research was funded by the European Union's Innovative Medicines Initiative, and Dr. William Alazawi was funded by the Medical Research Council.

Summary: A study of 18 million European adults found that many patients with non-alcoholic fatty liver disease (NAFLD) and liver cancer are being diagnosed at late, advanced stages of disease. The study, led by Queen Mary University of London and the University of Glasgow, suggests that people living with type 2 diabetes are at particular risk of developing NAFLD, which affects up to a quarter of people in the West and is closely associated with obesity and type 2 diabetes. The researchers found that NAFLD patients are more likely to develop aggressive liver disease, cirrhosis and liver cancer, and that patients with type 2 diabetes are more than twice as likely to develop aggressive liver disease. The study also found that patients are being diagnosed at later stages of disease, which are associated with greater risk of liver-related mortality, and that primary care records underestimate disease severity. The researchers are calling for increased awareness and monitoring of NAFLD, particularly in patients with type 2 diabetes, to prevent life-threatening disease progression.

Abstract

Background: Non-alcoholic fatty liver disease (NAFLD) is a common condition that progresses in some patients to steatohepatitis (NASH), cirrhosis and hepatocellular carcinoma (HCC). Here we used the healthcare records of 18 million adults to estimate the risk of acquiring advanced liver disease diagnoses in patients with NAFLD or NASH compared to individually matched controls.

Methods: Data were extracted from four European primary care databases representing the UK, the Netherlands, Italy and Spain. Patients with a recorded diagnosis of NAFLD or NASH (NAFLD/NASH) were followed up for incident cirrhosis and HCC diagnoses. Each coded NAFLD/NASH patient was matched to up to 100 'non-NAFLD' patients by practice site, gender, age ± 5 years and a visit recorded within ± 6 months. Hazard ratios (HR) were estimated using Cox models adjusted for age and smoking status and pooled across databases by random-effects meta-analysis.

Results: Out of 18,782,281 adults, we identified 136,703 patients with coded NAFLD/NASH. Coded NAFLD/NASH patients were more likely to have diabetes, hypertension and obesity than matched controls. The HR for cirrhosis in patients compared to controls was 4.73 (95% CI 2.43–9.19) and for HCC, 3.51 (95% CI 1.72–7.16).
The HR for either outcome was higher in patients with NASH and in those with high-risk Fib-4 scores. The strongest independent predictor of a diagnosis of HCC or cirrhosis was a baseline diagnosis of diabetes.

Conclusions: Real-world population data show that a recorded diagnosis of NAFLD/NASH increases the risk of life-threatening liver outcomes. Diabetes is an independent predictor of advanced liver disease diagnosis, emphasising the need to identify specific groups of patients at highest risk.

Background

Non-alcoholic fatty liver disease (NAFLD) is the most common cause of liver disease worldwide. NAFLD represents a spectrum of disease that includes simple steatosis, non-alcoholic steatohepatitis (NASH) and fibrosis [1]. The numbers of individuals presenting with end-stage complications of NASH, namely decompensated cirrhosis and hepatocellular carcinoma (HCC), are rising [2, 3], and NASH is rapidly becoming the most common indication for liver transplantation [4]. Yet not all patients within the NAFLD spectrum progress, and for the majority, NAFLD is a benign condition [1]. A key clinical challenge is to identify the proportion of patients who are at high risk of developing advanced liver disease, so that interventions, including the many novel therapies in development, can be targeted to those at greatest need.

Our current understanding of NAFLD epidemiology and progression largely derives from single-centre studies of small- or medium-sized cohorts and meta-analyses of these [5, 6, 7]. These studies, together with emerging data from the placebo arms of therapeutic trials [8], have taught us that patients with existing evidence of progressive disease (e.g., fibrosis) are at risk of further progression to HCC and decompensated cirrhosis, albeit this may reflect a degree of lead-time bias. Such studies often involve formal assessment of well-phenotyped patients at inclusion but are, by design, selective and may not represent the 'real-world' situation for the majority of patients with NAFLD. Paired biopsy data have been reported, although the second biopsy is often performed because of clinical suspicion and not per study protocol, which may bias estimates of progression [9]. Real-world patients are socially and ethnically diverse, have comorbidities and concomitant medications, or simply cannot commit to long-term studies or trials, and therefore may not be represented by any of these study designs.

Increasingly, real-world data derived from the primary care electronic health records (EHR) of a sizeable proportion of the general population [10, 11] are being used to address these issues. In many European countries, where healthcare is largely state-funded and there are low or absent primary care co-payments, the population has unrestricted access to healthcare via primary care physicians, who act as gatekeepers for referral to secondary care [12]. People register with primary care centres at birth or when they move to an area in order to access healthcare; therefore, primary care EHR represent data that are as close to the 'general' population as possible. If a practice joins a database, all the patients at that practice are registered in the database and, although there is an option for individual patients to opt out, uptake of this is minimal (<1%).

In order to gain insights into the NAFLD spectrum of diseases in real-world patients, we extracted data from four large European primary care databases and identified a cohort of patients with a diagnosis of NAFLD or of NASH.
Our aim in this study was to estimate the risk for patients with diagnoses of NAFLD or NASH of acquiring a new diagnosis of cirrhosis or HCC, and to understand the main predictors of this.

Methods

Databases

Databases were accessed via the European Medical Information Framework (EMIF) network: the Health Search Database (HSD) in Italy [13], the Integrated Primary Care Information (IPCI) database in the Netherlands [14], the Information System for the Development of Research in Primary Care (SIDIAP) in Spain [15] and The Health Improvement Network (THIN) in the UK [16] (Additional file 1: Table S1). HSD collects electronic medical record data from a network of over 800 Italian GPs who are members of the Italian College of General Practitioners. IPCI is a longitudinal collection of electronic patient records from over 750 Dutch general practitioners, containing data from over 2 million patients. SIDIAP collects data from 274 primary care practices comprising 3414 basic care units [17], and THIN contains the electronic medical records of 11.1 million patients from 562 general practices in the UK, covering 6.2% of the UK population [18]. The data custodians for each database provided approval that the protocol of the study complied with local privacy laws. Anonymised data were extracted locally by each data custodian liaising with the EMIF Platform and using a data transformation tool called Jerboa Reloaded [10]. The data were then uploaded onto a secure remote server maintained by an independent academic centre (Erasmus Medical Centre Private Research Environment, Netherlands) and analysed centrally.

Study design

We conducted a matched cohort study. All patients with a diagnosis of NAFLD or NASH (termed NAFLD/NASH) prior to 01/01/2016 were identified in the four databases using harmonisation methods previously described [10]. Patients were included in the analysis if they were aged ≥18 at diagnosis and had medical records available for ≥12 months from registration with the practice. Exclusion criteria were missing information on age or sex, a record of alcohol abuse at any time prior to diagnosis and a history of liver morbidity within the 12 months prior to diagnosis [10] (see Additional file 1: Supplementary Methods for exclusion diagnoses). Each NAFLD/NASH patient was matched with up to 100 'non-exposed' controls who did not have a NAFLD or NASH diagnosis at or prior to the index date (defined as the date of diagnosis of the matched NAFLD/NASH patient). Matching was done by practice site, age at index date ± 5 years, sex and a visit at the practice within ± 6 months of the index date (a schematic sketch of this step is given below).

In the THIN and SIDIAP databases, the terminology of the database (Read codes and International Classification of Diseases version 10, ICD-10, respectively) allowed NAFLD and NASH diagnoses to be distinguished from each other. Therefore, in these databases, a matched control cohort was constructed for each of the diagnoses: NAFLD, NASH and, to enable comparison between all databases, NAFLD/NASH. If a patient had both NAFLD and NASH diagnoses recorded, the earliest event was used to define the index date of the NAFLD/NASH diagnosis, and the NASH diagnosis was deemed an incident event. In HSD (ICD-9) and IPCI (IPCI Dutch), where NAFLD and NASH could not be distinguished, only one cohort (NAFLD/NASH) was defined and controls were matched to this.
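A minimal sketch of the matching step referenced above follows. The DataFrame layout and column names (practice, sex, birth_date, visit_dates, index_date, patient_id) are hypothetical, and the candidate pool is assumed to be pre-filtered so that no candidate carries a NAFLD/NASH code at or before the case's index date; this illustrates the matching criteria, not the actual EMIF/Jerboa implementation.

```python
import pandas as pd

def has_visit_near(visit_dates, index_date, window_days=183):
    # True if any recorded practice visit falls within ~6 months of the index date
    return any(abs((v - index_date).days) <= window_days for v in visit_dates)

def match_controls(cases, candidates, n_max=100, seed=0):
    """For each coded NAFLD/NASH case, sample up to n_max controls from the same
    practice and of the same sex, born within +/-5 years, with a visit within
    +/-6 months of the case's index date."""
    matched = []
    for case in cases.itertuples():
        pool = candidates[
            (candidates["practice"] == case.practice)
            & (candidates["sex"] == case.sex)
            & ((candidates["birth_date"] - case.birth_date).abs() <= pd.Timedelta(days=5 * 365))
            & candidates["visit_dates"].apply(has_visit_near, args=(case.index_date,))
        ]
        take = pool.sample(n=min(n_max, len(pool)), random_state=seed)
        matched.append(take.assign(case_id=case.patient_id, index_date=case.index_date))
    return pd.concat(matched, ignore_index=True)
```

In the study itself, follow-up then runs from the shared index date to the earliest of the outcome, death, exit from the database or the end of the study period, as described next.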
Patients were followed up from the index date until the earliest of: occurrence of cirrhosis, hepatocellular carcinoma or NASH (where this could be identified); end of the study period (31/12/2015); and loss to follow-up due to exit from the database or death. Events of interest were incident diagnoses of cirrhosis, hepatocellular carcinoma or NASH, where this could be identified. See Additional file 1: Supplementary Methods for variable extraction and data analysis.

Results

Out of 18,782,281 eligible individuals in the four databases, we identified 136,703 (0.7%) who had a recorded diagnosis of either NAFLD or NASH (coded NAFLD/NASH) and who met the inclusion criteria (Additional file 1: Table S1). The Spanish (SIDIAP) and UK (THIN) databases contributed 71% of all cases; the remaining 29% of coded NAFLD/NASH cases were from the Dutch (IPCI) and Italian (HSD) databases. In SIDIAP, 2.5% of all coded NAFLD/NASH patients (n = 1880) had NASH, and in THIN, this was 4.7% (n = 1212). Due to the coding, NAFLD and NASH could not be distinguished in IPCI and HSD. Therefore, in the initial phase of analysis, we combined all NAFLD and NASH codes from all four databases as coded NAFLD/NASH.

Comparing coded NAFLD/NASH patients across the four databases, there were minor differences between databases in mean age, BMI and the proportion with diabetes (Table 1 and Additional file 1: Table S2). BMI data were available for 64.6% of patients with coded NAFLD/NASH and for 45.9% of matched controls (Additional file 1: Table S3). In the subset of patients for whom data were available, ALT and AST values were highest in THIN, and the proportion of obese patients was highest in SIDIAP. Sufficient data were available to calculate the non-invasive fibrosis Fib-4 score (age, AST, ALT and platelets) in 46.7% of patients (range 12.6–62.6%, Table 2). THIN (UK) had the smallest proportion of patients with Fib-4 data (12.6%), among whom the proportion with high-risk scores was 10.5%, the highest among the four databases.

Table 1: Descriptive characteristics of coded NAFLD/NASH patients and matched unexposed cohorts.
Table 2: Distribution of Fib-4 scores in coded NAFLD/NASH patients, shown for each country database.

Patients with a coded diagnosis of NAFLD/NASH had comparable age and sex distributions, smoking rates and duration of follow-up to matched controls (Table 1). As expected, however, controls had lower BMI; lower rates of obesity, hypertension and diabetes; and lower serum levels of ALT and AST.

Risk of incident cirrhosis and HCC is higher in NAFLD/NASH patients compared to controls

Combining all four databases, the median duration of follow-up was 3.3 years (IQR 1.8–5.3), totalling 531,452 person-years for patients with coded NAFLD/NASH and 43,385,495 person-years for controls. Among all coded NAFLD/NASH patients, the incidence of a cirrhosis diagnosis was 0.76 per 1000 person-years (95% confidence interval (CI) 0.46 to 2.32), and the incidence of a hepatocellular carcinoma diagnosis was 0.3 per 1000 person-years (0.26 to 0.60; Additional file 1: Table S4). Patients with coded NAFLD/NASH were at significantly higher risk of acquiring a new diagnosis of cirrhosis compared to controls, with a pooled HR of 4.73 (95% CI 2.43–9.19) after adjustment for age, smoking status and BMI (Fig. 1).

Fig. 1: Association of coded NAFLD/NASH, NAFLD and NASH with cirrhosis.
Hazard ratios and 95% confidence intervals for acquiring a new diagnosis of cirrhosis in each database and combined across databases (subtotal).

Similarly, the risk of an incident HCC diagnosis was significantly higher in coded NAFLD/NASH patients compared to controls. The pooled HR across the four databases for an incident diagnosis of HCC was 3.51 (95% CI 1.72–7.16; Fig. 2). There were no significant differences in the HRs when categorising patients by obesity, smoking, diabetes or hypertension status, male sex or older age (Additional file 1: Figure S1). There were also no significant differences in the HRs for cirrhosis and HCC diagnoses following adjustment for age and smoking alone in all coded NAFLD/NASH patients compared to patients with available BMI data (Additional file 1: Figures S2 and S3). This is despite the fact that patients with BMI data were more likely to be smokers (19.5% vs 11.2%), diabetic (26.9% vs 7.0%) and hypertensive (50.1% vs 27.9%; Additional file 1: Table S5).

Fig. 2: Association of coded NAFLD/NASH, NAFLD and NASH with hepatocellular carcinoma (HCC). Hazard ratios and 95% confidence intervals for acquiring a new diagnosis of HCC in each database and combined across databases (subtotal).

Fib-4 predicts disease progression in patients with NAFLD/NASH

In the subset of coded NAFLD/NASH patients in whom we could calculate Fib-4 (n = 63,971; Additional file 1: Table S3), the incidence of a new diagnosis of cirrhosis was significantly higher for the high-risk compared to the low-risk category (HR 33.24, 95% CI 8.82–125.34), adjusting for age and smoking status, and more modest, albeit still significant, for the intermediate compared to the low-risk group (HR 5.04, 95% CI 2.30–11.04; Additional file 1: Figure S4A). Similarly, compared to patients with low-risk scores, the incidence of an HCC diagnosis was higher in patients with indeterminate (HR 3.74, 95% CI 1.76–7.96) or high-risk scores (HR 25.2, 95% CI 7.83–80.66; Additional file 1: Figure S4B).

Distinguishing NAFLD from NASH diagnoses when estimating risk of cirrhosis and HCC

The pooled HR for an incident NASH diagnosis in patients with a coded diagnosis of NAFLD compared to controls was 7.75 (95% CI 2.56–23.51, p = 0.008), although this estimate is based on a very small number of individuals (n = 130, of whom only seven were in SIDIAP; Additional file 1: Figure S5). In the subset of patients with a coded diagnosis of NASH, the incidence of diagnoses of liver outcomes was higher than in those with NAFLD, albeit with overlapping confidence intervals: 3.25 per 1000 person-years (95% CI 2.41–4.10) for cirrhosis and 1.16 per 1000 person-years (95% CI 0.67–1.65) for HCC (Figs. 1 and 2).

Short time interval to cirrhosis diagnosis in patients with NAFLD and NASH

In SIDIAP, 174 out of 75,415 patients with coded NAFLD were coded as having cirrhosis (incidence rate 0.66 per 1000 person-years, 95% CI 0.56–0.76), with a median time to the new diagnosis of 2.9 years, whereas 38 out of 1880 patients with NASH acquired a diagnosis of cirrhosis (incidence rate 2.83 per 1000 person-years, 95% CI 2.0–3.88; Additional file 1: Table S4), with a similar median time to diagnosis of 3.0 years (Additional file 1: Table S6). In THIN, the incidence of cirrhosis was higher and the interval between diagnoses shorter for both stages of disease.
One hundred and three out of 24,743 patients with coded NAFLD acquired a cirrhosis diagnostic code (incidence rate 2.17 per 1000 person-years, 95% CI 1.86–2.51), with a median time to diagnosis of 2.0 years, compared to 26 out of 1212 patients with coded NASH (incidence rate 5.81 per 1000 person-years, 95% CI 3.8–8.52), with a median time to diagnosis of 0.5 years.

Diabetes predicts disease progression

In coded NAFLD/NASH patients, the strongest association with incident liver outcomes was observed in patients who also had a diagnosis of diabetes at baseline (HR 2.3, 95% CI 1.9–2.78). In matched controls without coded NAFLD/NASH, smoking was also associated with liver outcomes (HR 1.5, 95% CI 1.41–1.6), in addition to the independent risk attributed to diabetes, which was higher than in patients with coded NAFLD/NASH (HR 2.92, 95% CI 2.76–3.08; Table 3).

Table 3: Association between covariates and risk of liver outcomes (cirrhosis or hepatocellular carcinoma), using a one-step Cox model stratified by database.

Discussion

To our knowledge, this is the largest study to date to use EHR data to investigate rates of new diagnoses of advanced liver disease in patients with NAFLD. Our patients were well matched to a very large number of controls according to sex, age, GP practice and most recent visit, thus limiting bias due to geographical and socioeconomic diversity and behaviours relating to health service utilisation. Patients with coded NAFLD/NASH are at significantly increased risk of acquiring a diagnosis of cirrhosis or HCC compared to matched controls. The risk is greater in patients with a coded diagnosis of NASH than of NAFLD, and in those with high-risk Fib-4 fibrosis scores compared to indeterminate or low-risk scores. Diabetes is an independent risk factor for progression to either an HCC or cirrhosis diagnosis in both coded NAFLD/NASH patients and matched controls.

We applied minimal selection criteria and were therefore able to include over 78% of all adults registered in the databases, hence the 'real-world' nature of the study. The overall proportion of people with coded NAFLD/NASH diagnoses is lower than expected, as reported previously [10]; this is in keeping with other primary care work [19] and may reflect levels of awareness of NAFLD/NASH in primary care [20, 21]. Hence our data, by definition, can only represent the visible part of the clinical iceberg. Despite this, we find that patients with coded NAFLD/NASH acquire diagnoses of life-threatening liver disease within a relatively short follow-up period (median 3.3 years). It is not feasible that the short time intervals between a coded diagnosis of NAFLD/NASH and advanced liver disease reflect true rates of disease progression, estimated to be one fibrosis stage per 7 years [22]. The acquisition of a new code in the healthcare record does not necessarily mean that pathological progression has occurred at that time, nor that the stage did not exist at baseline. Our interpretation of these data is that patients in Europe are being diagnosed at the later stages of disease, which are associated with greater risk of liver-related mortality [23, 24, 25].

Less than 50% of patients had sufficient data to calculate Fib-4, the components of which are also needed to calculate many other non-invasive fibrosis scores [26]. There was marked national variation in fibrosis assessment; 73.1% of the patients in whom we could calculate Fib-4 were from the Spanish database.
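For reference, Fib-4 is a simple arithmetic score built from the four variables named above. The sketch below uses the published Sterling formula; the risk bands shown are thresholds commonly applied in NAFLD (low < 1.30, high > 2.67) and are illustrative, since the exact cut-offs used in this study are not restated here.

```python
import math

def fib4(age_years, ast_u_l, alt_u_l, platelets_10e9_l):
    # Fib-4 = (age x AST) / (platelets x sqrt(ALT))
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

def fib4_band(score, low=1.30, high=2.67):
    # Commonly used NAFLD bands; treat these thresholds as illustrative
    if score < low:
        return "low"
    return "high" if score > high else "indeterminate"

score = fib4(age_years=58, ast_u_l=44, alt_u_l=52, platelets_10e9_l=210)
print(f"Fib-4 = {score:.2f} ({fib4_band(score)})")  # Fib-4 = 1.69 (indeterminate)
```

The score's dependence on routine laboratory values also explains the patchy availability noted above: a missing platelet count or transaminase measurement makes it incomputable.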
We have no way of determining whether these scores were actually calculated by clinicians or whether they influenced decision-making. This is despite the fact that such risk stratification is central to most guidelines [27, 28, 29] and is used to determine clinical management, select patients for clinical trials and, probably, triage patients for future therapy. In the databases where NAFLD and NASH codes could not be distinguished (HSD and IPCI), even those with low-risk Fib-4 scores were at increased risk of cirrhosis and HCC compared to controls. This further suggests that primary care records under-estimate disease severity and that some patients with NAFLD/NASH diagnoses already have advanced fibrosis or cirrhosis.

Apart from a diagnosis of NAFLD/NASH, diabetes was the strongest independent risk factor for acquiring a diagnosis of cirrhosis or HCC. In the matched control population, the HR for diabetes was even higher than in the coded NAFLD/NASH cohort, which may reflect a significant number of individuals with undiagnosed NAFLD/NASH among the controls. The importance of diabetes is consistent with a review of patients who had undergone more than one biopsy in the course of their routine clinical care in the UK, which showed that diabetes was a risk factor for progression of fibrosis [9]. Obesity is an important risk factor for many cancers, including HCC [30], but we did not observe this association in our study. If patients are diagnosed late in the disease spectrum, they are unlikely to have undergone surveillance, and HCC may be diagnosed at late stages when symptoms, including weight loss, are already manifest. Taken together, these findings emphasise the need to recognise risk factors for progressive disease and to detect disease at early stages, when interventions can be more effective.

This study is subject to limitations. The nature of real-world data is such that we cannot ascertain the origin of codes nor the motivation for adding diagnoses to the patient record. Although the study is based in primary care, it is likely that a large proportion of diagnoses were made with some involvement of secondary care. It would be inaccurate to assume that all patients who carry the code 'NASH' have had a liver biopsy and histological assessment; the diagnosis may have been assumed and recorded based on, for example, ultrasound evidence of fatty liver and elevated serum transaminases, or increased stiffness on transient elastography. Similarly, it was not possible to confirm that the matched controls did not have NAFLD/NASH. However, the clinical features of patients with coded NAFLD/NASH are consistent with the diagnostic codes, and if patients with NAFLD/NASH do exist in the control group, then the effect sizes reported here are underestimates of the real risk. This means that there are individuals living with diabetes in primary care who have not been diagnosed with NAFLD/NASH but who are at significantly increased risk of developing liver cirrhosis and cancer.

The estimated size of the NAFLD problem has raised fears of large, unmanageable patient numbers who are not at immediate threat of disease. Notwithstanding our expectation that many cases were not identified in this study, we have shown that 0.6% of patients with an existing coded diagnosis of NAFLD/NASH acquire a diagnosis of cirrhosis and/or HCC within a 3-year follow-up period.
This gives us insight into the rate at which advanced disease is discovered, even if this is not the natural history in the general population. The clinical impact of our data is that they highlight the large gaps in the diagnosis and risk assessment of NAFLD and NASH, with variable rates of risk stratification and staging of disease, and seemingly late diagnosis.

Conclusions

Our knowledge of NAFLD/NASH has been based on small, highly selected cohort studies. These have been accurate in telling us the potential scale of the prevalence and progression of disease, but the reality for many in the general population is some way from that. In order to affect population health and make an impact on the overall health burden of advanced liver disease, we cannot simply rely on introducing effective therapies to the small number of people with established diagnoses. The current approach of opportunistically investigating those in whom abnormalities in liver tests arise is clearly not working. While better biomarkers are needed to identify those at risk more precisely, the current tools are not being used, leaving many patients unclear as to the stage of their disease and its significance for their health. Therefore, making an impact on advanced liver disease will need co-ordinated efforts to identify those with NAFLD, to stage their disease and to target those at risk of progression.

Abbreviations
ALT: alanine transaminase
AST: aspartate transaminase
BMI: body mass index
CI: confidence interval
EHR: electronic health record
EMIF: European Medical Information Framework
GP: general practitioner
HCC: hepatocellular carcinoma
HSD: Health Search Database
IPCI: Integrated Primary Care Information
LFT: liver function tests
NAFLD: non-alcoholic fatty liver disease
NASH: non-alcoholic steatohepatitis
SIDIAP: Information System for the Development of Research in Primary Care
THIN: The Health Improvement Network
UK: United Kingdom
US: United States

Citation: Myriam Alexander et al., Risks and clinical predictors of cirrhosis and hepatocellular carcinoma diagnoses in adults with diagnosed NAFLD: real-world study of 18 million patients in four European cohorts, BMC Medicine (2019). DOI: 10.1186/s12916-019-1321-x
Paper: http://dx.doi.org/10.1186/s12916-019-1321-x
News: https://medicalxpress.com/news/2019-05-diabetes-patients-higher-deadly-liver.html
One size does not fit all when it comes to marrow fat, scientists say (DOI: 10.1038/ncomms8808)

While most of us worry about the fat cells building up on the fleshy parts of our bodies, scientists have started to pay serious attention to another kind of fat cell deep inside our bones, in what's called the marrow. Today, they've published important new clues about this little-understood kind of fatty tissue, including the discovery that there are two different types. Their results pave the way for more research on how marrow fat influences the rest of the body, and on its role in a range of diseases including osteoporosis.

In a paper published in Nature Communications, the team from the University of Michigan and other institutions describes research in rodents, and in a small group of women, that led them to conclude that there are two kinds of fat cells in what scientists call marrow adipose tissue, or MAT. The findings deepen understanding of MAT, which makes up about 70 percent of the marrow in the adult human skeleton. They also make it clear that researchers need to take the different MAT types into account when studying its role in disease.

Why MAT matters

Scientists have come to realize that MAT plays a key role in our body's metabolism. MAT levels rise in many different diseases, from anorexia to type 1 diabetes, and also go up as we age and as bones get brittle and break down in osteoporosis. "Reducing marrow fat has been mentioned as a target for osteoporosis therapy, but before such approaches go further we need to get a more targeted understanding of MAT and the effects of potential intervention," says Erica Scheller, Ph.D., DDS, who is transitioning from a postdoctoral fellowship at the U-M Medical School to a faculty position at Washington University in St. Louis.

Scheller worked with senior author and U-M physiology professor Ormond MacDougald, Ph.D., and others to determine that MAT actually exists in two forms: regulated and constitutive. Their detailed analysis shows that the two kinds of cells store different types of fat molecules, that their genetic profiles differ in very specific ways, and that they develop at different times in the life cycle and interact in different ways with the blood cell formation process that also happens in the marrow.

Though the researchers can't yet see whether what they saw in mice holds completely true for humans, their study includes data from five women who agreed to let the researchers study the fat composition of their leg bone marrow using special scanners. Just as in the mice, the further down the leg bone, the more unsaturated fat there was inside the marrow. This is the first evidence in humans that two types of MAT exist, and the team will continue to study human bones.

"We're definitely finding that MAT is more complex than anyone originally thought, and that we have a long way to go in understanding it," says MacDougald, who is the John A. Faulkner Collegiate Professor of Physiology in the Department of Molecular and Integrative Physiology, and a professor of Internal Medicine in the Metabolism, Endocrinology & Diabetes division. "We have a lot of it, and we need to do more to understand why it's there and what it's doing, and how it changes in different diseases."

From here to tomorrow

MacDougald, Scheller and their colleagues will continue to study the two forms of MAT in further studies in mice, and in bones removed from patients having hip replacement surgery and limb amputations.
Getting healthy bone samples is harder, but over time they hope to flesh out the full picture of how the two forms of MAT form and act. The techniques they developed in their lab, which enable scientists to detect the characteristics of MAT, should be useful to scientists around the world studying bone marrow. And the findings they've made should make MAT composition a key marker for scientists who study blood cell formation, bone biology and metabolism.

Summary: Scientists have discovered that there are two types of fat cells in the bone marrow, known as marrow adipose tissue (MAT), which makes up about 70% of the marrow in the adult human skeleton. The two types, regulated and constitutive, store different types of fat molecules and have distinct genetic profiles. The discovery was made through research in rodents and a small group of women, and the findings suggest that MAT plays a key role in metabolism and is involved in various diseases, including anorexia, type 1 diabetes and osteoporosis. The researchers believe that understanding the two types of MAT is crucial for developing targeted therapies for these diseases, and they plan to continue studying the composition of MAT in further studies.

Abstract

Marrow adipose tissue (MAT) accumulates in diverse clinical conditions but remains poorly understood. Here we show region-specific variation in MAT adipocyte development, regulation, size, lipid composition, gene expression and genetic determinants. Early MAT formation in mice is conserved, whereas later development is strain dependent. Proximal, but not distal, tibial MAT is lost with 21-day cold exposure. Rat MAT adipocytes from distal sites have an increased proportion of monounsaturated fatty acids and expression of Scd1/Scd2, Cebpa and Cebpb. Humans also have increased distal marrow fat unsaturation. We define proximal 'regulated' MAT (rMAT) as single adipocytes interspersed with active haematopoiesis, whereas distal 'constitutive' MAT (cMAT) has low haematopoiesis, contains larger adipocytes, develops earlier and remains preserved upon systemic challenges. Loss of rMAT occurs in mice with congenital generalized lipodystrophy type 4, whereas both rMAT and cMAT are preserved in mice with congenital generalized lipodystrophy type 3. Consideration of these MAT subpopulations may be important for future studies linking MAT to bone biology, haematopoiesis and whole-body metabolism.

Introduction

Marrow adipose tissue (MAT) is a functionally distinct adipose depot, located within the skeleton, with the potential to contribute to both local and systemic metabolism [1, 2]. Further accumulation of MAT occurs in a diverse range of clinical conditions including osteoporosis, ageing, gonadal dysfunction, type 1 diabetes and anorexia [2, 3]. MAT formation is also induced by therapeutic interventions including radiation, chemotherapy, glucocorticoids and thiazolidinediones [1, 3]. Despite these clinical findings, the regulation and function of MAT remain largely unclear. In many cases, MAT accumulation has been correlated with low bone mineral density, decreased bone formation and bone loss (reviewed in ref. [2]). However, the presence of a direct relationship between MAT and bone remains controversial. For example, despite a clear correlation, increased MAT is not necessary for bone loss at the proximal tibia in rodent models of type 1 diabetes or ovariectomy-induced osteopenia [4, 5, 6].
In addition, histomorphometric studies in rats demonstrate that sites of high MAT have decreased ovariectomy-induced trabecular bone loss, with trabecular width in rat tibial metaphyses being greater at sites of high MAT (distal tibia) than at sites of low MAT (proximal tibia) [7, 8, 9]. The hypothesis that MAT is necessary for skeletal equilibrium is also supported by the phenotypes of patients with congenital generalized lipodystrophy (CGL). A high proportion of patients with CGL1 or CGL2 (who lack MAT) develop pathological osteosclerosis and skeletal cysts between ages 10 and 20 years—the time in humans when MAT generally undergoes robust formation in a developmentally defined pattern in the affected skeletal regions [2]. In contrast, those with CGL3 or CGL4 (who retain MAT) fail to develop this pathology. These apparent contradictions emphasize the complex, context-specific relationship between MAT and bone, and likely also the relationship between MAT and peripheral metabolism [1, 10].

Although it is generally assumed that all marrow adipocytes are equivalent, a study by Tavassoli [11] in 1976 suggested that characteristics of red marrow adipocytes may differ from those of adipocytes within yellow marrow. In humans, formation of adipocytes within the yellow marrow occurs at or slightly before birth, regardless of prematurity, and accelerates between 4 and 8 weeks of age [2, 12]. Early MAT formation occurs in distal skeletal regions including the hands, feet, distal tibia and tail (in rodents). Histologically, once this early MAT matures, the densely packed adipocytes resemble peripheral white adipose tissue (WAT) and are relatively devoid of active haematopoiesis. For the purposes of discussion in this paper, we define these areas as constitutive MAT (cMAT). After the initial peak, MAT accumulation continues in areas of red, haematopoietic marrow throughout life [13]. We refer to this population as regulated MAT (rMAT) and define it histologically as single adipocytes interspersed with sites of active haematopoiesis. It is important to note that, especially in larger species, both histological patterns may exist side by side. In rats and mice, however, these regions appear to be more spatially distinct.

We hypothesized that the later-forming rMAT adipocytes would have characteristics distinct from the cMAT adipocytes that arise early in development. Herein, we address this hypothesis using mouse models to examine MAT formation and regulation during development and with cold exposure; lipidomics and proton magnetic resonance spectroscopy (¹H-MRS) to measure MAT lipid composition in rats and humans; MAT isolated from rats to quantify molecular differences in gene expression; and CGL3 and CGL4 mouse models that reveal a genetic basis for the development of distinct rMAT and cMAT subpopulations. In sum, this evidence distinguishes rMAT from cMAT—a fundamental finding that may help to explain previous inconsistencies in the literature and inform future research on the relationships between MAT, bone, haematopoiesis and whole-body metabolism.

Results

Strain-specific MAT development in mice

The postnatal development of MAT remains poorly characterized on a spatiotemporal level. We used osmium tetroxide staining to visualize and quantify MAT in the whole tibia of male C57BL/6J (B6) and C3H/HeJ (C3H) mice at 1, 4, 12 and 56 weeks of age (Fig. 1; a schematic of the region-by-region volume bookkeeping behind such measurements is sketched below). At 1 and 4 weeks, the initial phase of MAT development was similar in both strains.
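Region-specific MAT volumes of the kind reported in Fig. 1 are derived by segmenting the osmium signal inside the marrow cavity of the μCT reconstruction and summing voxels region by region. The sketch below is a schematic of that bookkeeping on hypothetical arrays; the threshold, voxel size and region labels are illustrative assumptions rather than the study's actual processing pipeline.

```python
import numpy as np

def mat_volume_by_region(ct, marrow_mask, region_labels,
                         osmium_threshold=1200.0, voxel_mm3=0.012 ** 3):
    """Sum osmium-positive (MAT) voxels inside the marrow, per anatomical region.

    ct            : 3-D array of attenuation values from the osmium-stained scan
    marrow_mask   : boolean 3-D array, True inside the marrow cavity
    region_labels : integer 3-D array, e.g. 1 = proximal epiphysis,
                    2 = growth plate to tibia/fibula junction,
                    3 = tibia/fibula junction to distal end (cf. Fig. 1a)
    """
    mat = (ct > osmium_threshold) & marrow_mask
    results = {}
    for label in np.unique(region_labels[region_labels > 0]):
        region = region_labels == label
        marrow_voxels = int(marrow_mask[region].sum())
        results[int(label)] = {
            "mat_mm3": float(mat[region].sum()) * voxel_mm3,
            "mat_per_marrow": float(mat[region].sum()) / max(marrow_voxels, 1),
        }
    return results
```

Normalising MAT to marrow volume in this way mirrors how the cold-exposure data are reported later (Fig. 3b,c).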
In the distal tibia, MAT formation and maturation accelerated rapidly after birth until the marrow space filled with adipocytes at 4 weeks of age. The amount of MAT distal to the junction of the tibia and fibula was similar between B6 and C3H strains through 12 weeks and remained relatively stable until 56 weeks in C3H animals (Fig. 1a,b). A parallel pattern of development occurred in the caudal vertebrae of the tail, with mature MAT filling the marrow space by 4 weeks of age (Fig. 1c). At this time, MAT in the tail vertebrae matched the histological appearance of cMAT as defined above.

Figure 1: Quantification of MAT development in C57BL/6J and C3H/HeJ mice from 1 to 56 weeks of age. (a) Osmium-stained tibiae were scanned by μCT and were reconstructed with the decalcified bone overlaid. Representative images of the data are presented in (b). Marrow fat is dark grey and bone is light grey. (b) Region-specific quantification of MAT volume (biological replicate N=5 (1 week old), 7 (4 weeks old), 9 (C3H 12 weeks old) and 11 (B6 12- and 56 weeks old)). Regions as defined in (a) include the proximal epiphysis (Prox Epi), the growth plate to the tibia/fibula (Tib/Fib) junction (GP to T/F J) and the tibia/fibula junction to the end of the bone (T/F J to end). a, two-tailed t-test, P<0.05 for total tibial MAT volume compared between strains at a given age. (c) Representative histology of caudal vertebrae (biological replicate N=5), ×10 objective (scale bars, 200 μm). All graphs represent mean±s.d.

In contrast to the distal tibia and tail, rMAT within the middle and proximal tibia was highly variable in both volume and rate of development (Fig. 1a,b). By 12 weeks, MAT development diverged, with robust expansion in the proximal tibial marrow of C3H, but not B6, mice. Thus, C3H mice had nearly twice as much total MAT as B6 at this age. Surprisingly, by 56 weeks these differences in total MAT disappeared (Fig. 1b); however, the distribution of the cells within the tibia remained divergent, with C3H mice having increased MAT volume in the proximal regions of the tibia (Fig. 1b). These distinct developmental characteristics suggest discrete MAT populations, designated cMAT (distal tibia and tail vertebrae) and rMAT (mid- to proximal tibia). To examine the developmental relationship between MAT and bone, we also analysed the tibiae of 12- and 56-week-old animals both before decalcification and again after osmium staining. Osmium-based localization of MAT in three dimensions demonstrated its asymmetric distribution within the tibial marrow cavity (Fig. 2a,b). In both B6 and C3H mouse strains, MAT accumulation with age in the proximal metaphysis occurred most robustly in the medial marrow space. In the mid-diaphysis, B6 MAT continued to approximate the medial endocortical surface, whereas C3H MAT closely followed the posterior cortex (Fig. 2d,e). Development of trabecular and cortical bone was similar to what has been reported previously (Fig. 2c,f; ref. 14). In addition to increases in MAT with age in both strains, trabecular number decreased and thickness increased. Thus, in the proximal metaphysis across the 12- and 56-week-old groups, MAT volume correlated negatively with trabecular number (linear regression B6, P=0.007; C3H, P=0.005) but positively with trabecular thickness (linear regression B6, P=0.010; C3H, P<0.001).
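The strain-wise regressions above relate per-animal MAT volume to trabecular parameters. As a minimal illustration of that analysis, the sketch below fits an ordinary least-squares line with SciPy; the arrays hold hypothetical per-animal values, not the published measurements.

```python
# Minimal sketch of the MAT volume versus trabecular number regression.
# All values are hypothetical placeholders, not the published data.
import numpy as np
from scipy import stats

mat_volume = np.array([0.8, 1.1, 1.4, 1.9, 2.3, 2.8, 3.1])    # mm^3 per animal
trab_number = np.array([4.9, 4.6, 4.5, 4.1, 3.8, 3.6, 3.3])   # 1/mm per animal

res = stats.linregress(mat_volume, trab_number)
print(f"slope={res.slope:.3f}, r={res.rvalue:.3f}, P={res.pvalue:.4f}")
# A negative slope with P<0.05 corresponds to the inverse relationship
# between MAT volume and trabecular number reported for both strains.
```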
Figure 2: Trabecular and cortical development in C57BL/6J and C3H/HeJ mice at 12 and 56 weeks of age. (a,b) Representative images of the proximal tibial metaphysis both before decalcification and after osmium staining of the data presented in (c). Marrow fat is in white. (c) Quantification of trabecular parameters and MAT volume in the proximal tibial metaphysis (biological replicate N=9 (C3H 12 weeks old) and 11 (B6 12- and 56 weeks old)). (d,e) Representative images of the mid-tibial diaphysis both before decalcification and after osmium staining of the data presented in panel (f). Marrow fat is in white. (f) Quantification of cortical parameters (biological replicate N=9 (C3H 12 weeks old) and 11 (B6 12- and 56 weeks old)). Scale bars, 500 μm. *Two-tailed t-test, P<0.05. All values represent mean±s.d.

Differential loss of MAT with cold exposure
Cold exposure in rodents elevates sympathetic tone and results in extensive remodelling of WAT, which enhances thermogenesis and allows maintenance of body temperature and survival at 4 °C (ref. 15). However, the response of MAT to cold temperatures is unknown. To quantify changes in MAT after 21-day cold exposure (4 °C), we analysed MAT in the whole tibia of male C3H mice at 12 and 56 weeks of age. The C3H strain was used based on the robust proportion of MAT in both proximal and distal regions of the tibia (Fig. 1), allowing for simultaneous analysis of rMAT and cMAT populations within the same bone. After cold exposure in 12-week-old mice, the amount of rMAT decreased by 76% in the tibial epiphysis and 71% in the proximal tibia, between the growth plate and the tibia/fibula junction (Fig. 3a,b). In 56-week-old mice, rMAT decreased by 56 and 71%, respectively (Fig. 3c). In contrast, cMAT in the distal tibia, below the fibular attachment, remained unchanged (Fig. 3a–c). MAT loss at the proximal tibial metaphysis was most prominent in the centre of the marrow space with a relative preservation of the adipocytes that were directly adjacent to the endocortical surface (Supplementary Fig. 1A,B). This is the reverse of the developmental pattern of proximal MAT accumulation (Fig. 2a,b). Despite the robust loss of rMAT, trabecular and cortical parameters in the tibia remained largely unchanged (Supplementary Fig. 1C,D); indeed, the only significant finding was a slight decrease in the relative cortical bone volume in the 12-week-old C3H mice.

Figure 3: Differential loss of MAT with cold exposure. (a) Representative osmium-stained tibiae scanned with μCT of the data presented in (b) and (c). Marrow fat in dark grey, decalcified bone overlaid in light grey. (b) Region-specific quantification of tibial MAT (as defined in Fig. 1a) from 12-week-old mice normalized to marrow volume (biological replicate N=11 (4 °C) and 11 (22 °C)). (c) Region-specific quantification of tibial MAT from 56-week-old mice normalized to marrow volume (biological replicate N=9 (4 °C) and 11 (22 °C)). (d) Adipocyte size distribution from the proximal tibial metaphysis (proximal) or (e) distal diaphysis below the tibia/fibula junction (distal) as measured by nanoCT in the 12-week-old mice (biological replicate N=5). Histogram bin size 250. (f) Representative histology, based on quantification in (d), of osmium-stained samples at the proximal tibia shows a decrease in adipocyte size. Scale bars, 50 μm.
(g) Estimation of region-specific adipocyte number was performed by dividing the total adipocyte volume (from μCT) by the average adipocyte volume (nanoCT) in the proximal tibia (growth plate to tibia/fibula junction) and the distal tibia (tibia/fibula junction to the distal end). *(b,c,g) Two-tailed t-test, (d,e) two-way analysis of variance with Sidak's multiple comparisons test, P<0.05. RT, room temperature. All graphs represent mean±s.d.

For osmium-based MAT analysis, we use a micro-computed tomography (μCT) voxel size of 12 μm, which allows rough outlines of the marrow adipocytes to be observed (the average MAT cell diameter is 30–40 μm). However, at this resolution, μCT might be unable to detect more subtle changes in regions of densely packed adipocytes, such as those in the distal tibia. To test this, we re-scanned the bones from the 12-week-old mice at a voxel size of 2 μm using nano-computed tomography (nanoCT). The resolution of these scans was sufficient to clearly identify individual adipocytes (Supplementary Fig. 2). Using a digital histology approach, we quantified adipocyte sizes in two-dimensional nanoCT DICOM slices (Supplementary Fig. 2; ref. 16). To determine the adipocyte size distribution, we measured the two-dimensional area of 300–400 individual adipocytes in the proximal tibial metaphysis and at the midpoint of the distal tibia (Supplementary Fig. 2). Consistent with our μCT results for total MAT volume, adipocytes in the proximal tibia decreased in size, whereas those in the distal tibia remained unchanged (Fig. 3d–f). This confirmed our μCT interpretation and the validity of the osmium/μCT method for total MAT volume quantification, even in adipocyte-dense regions such as the distal tibia [17]. Together, the μCT and nanoCT data revealed that in response to cold exposure, proximal rMAT adipocytes decrease in both size and number, whereas the adipocytes in the distal tibia are unchanged (Fig. 3g).

Average adipocyte size of rMAT and cMAT adipocytes
Adipocyte size is a parameter that has historically been used to track the metabolic responsiveness of individual cells. Analysis of the 12-week-old C3H animals at room temperature revealed that cMAT adipocytes are significantly larger than rMAT adipocytes (Fig. 4a), with average diameters of 37.8±1.2 and 32.5±2.4 μm, respectively (two-tailed t-test, P=0.002). This 16% increase in cMAT adipocyte diameter extrapolates to an estimated 54.6% increase in cMAT adipocyte volume. cMAT adipocytes were also larger in rats (Fig. 4b,c), with cMAT adipocytes in tail vertebrae being 24 or 17% larger in diameter than tibial rMAT adipocytes in males or females, respectively (male cMAT versus rMAT, 38.9±1.9 versus 31.4±1.6 μm, two-tailed t-test, P<0.001; female cMAT versus rMAT, 38.9±1.6 versus 33.1±3.2 μm, two-tailed t-test, P=0.003). The cell size distributions for each group are presented as histograms in Fig. 4.

Figure 4: Quantification of rMAT and cMAT adipocyte size. Adipocyte size quantification from (a) rMAT in the proximal tibia and cMAT in the distal tibia of 12-week-old C3H mice (biological replicate N=5), (b) rMAT in the proximal tibia and cMAT in the caudal vertebrae of 16-week-old male Sprague–Dawley rats (biological replicate N=12), and (c) rMAT in the mid-tibia and cMAT in the caudal vertebrae of 19-week-old female Sprague–Dawley rats (biological replicate N=5 rMAT and 6 cMAT). *Two-way analysis of variance with Sidak's multiple comparisons test, P<0.05. All graphs represent mean±s.d.
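Figure 3g estimates adipocyte number by dividing total MAT volume (from μCT) by the average single-cell volume (from nanoCT). A minimal sketch of that arithmetic, assuming roughly spherical cells so that the mean two-dimensional area can be converted to an equivalent diameter and volume; all input numbers are placeholders. The cubic diameter-to-volume scaling in the middle step is also why a modest difference in mean diameter between cMAT and rMAT adipocytes translates into a much larger difference in estimated cell volume.

```python
# Sketch of the adipocyte-number estimate used in Fig. 3g (placeholder inputs).
import numpy as np

areas_um2 = np.array([650.0, 720.0, 810.0, 980.0, 1150.0])  # 2D areas from nanoCT
total_mat_volume_um3 = 2.5e9                                # regional MAT volume from uCT

# Equivalent-sphere assumption: A = pi*(d/2)**2  ->  d = 2*sqrt(A/pi)
diameters = 2.0 * np.sqrt(areas_um2 / np.pi)
volumes = (np.pi / 6.0) * diameters**3                      # sphere volume, (pi/6)*d^3

n_adipocytes = total_mat_volume_um3 / volumes.mean()
print(f"mean diameter ~{diameters.mean():.1f} um, "
      f"estimated count ~{n_adipocytes:.0f} cells")

# Size-distribution histogram using the bin size of 250 described in the paper
counts, edges = np.histogram(areas_um2, bins=np.arange(0, 2001, 250))
```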
Region-specific fatty-acid content of MAT
The mechanisms underlying site-specific regulation of marrow adipocytes with cold exposure could be related to differences in the local microenvironment and/or between adipocyte subpopulations. With the exception of Tavassoli [11], previous work on marrow fat has assumed that all MAT adipocytes are equivalent. To begin testing the validity of this assumption, and thus determine whether the microenvironment is the sole mediator, we characterized the lipidomic profile of the proximal rMAT and distal cMAT adipocytes. We started with lipidomics because the work of Tavassoli [11] suggests that marrow adipocytes may have a region-specific lipid composition. In addition, our 'marrow fat consortium' group previously developed techniques to estimate the lipid unsaturation of MAT in the human skeleton using 1H-MRS [18] (Supplementary Fig. 3). We applied this method to measure marrow lipid unsaturation in four regions of the human appendicular skeleton, including the femur (proximal metaphysis and mid-diaphysis) and the tibia (mid-diaphysis and distal metaphysis). We found that, in humans, the distal tibia had an increased unsaturation index relative to the proximal femur (Fig. 5a), implying that distal marrow adipocytes contain more unsaturated lipids than those in proximal/central skeletal regions.

Figure 5: Region-specific lipid saturation of human and rat MAT. (a) Marrow unsaturation at four sites in the human leg was compared using 1H-MRS. Marrow at the metaphysis of the distal tibia (T) had a higher unsaturation index than marrow in the proximal femur (F) (biological replicate N=5). Mean±s.e. (b) Adipocytes were isolated from four regions of the rat skeleton. Red outlines indicate rMAT sites including femur/proximal (Prox) tibia and lumbar vertebrae. Grey and black outlines indicate cMAT sites including distal tibia and caudal vertebrae. Histology of intact rat rMAT and cMAT before adipocyte isolation representative of the animals from Fig. 4b (biological replicate N=12). Objective ×40, scale bars, 50 μm. (c) Principal components analysis of normalized fatty acids from three independent experiments (23 fatty acids and 44 unique biological samples). Raw data presented in Supplementary Data 1. Visceral WAT (vWAT) includes 5 gonadal and 3 perirenal; subcutaneous WAT (scWAT) includes 12 inguinal; rMAT includes 3 lumbar vertebrae and 6 femur/proximal tibia; cMAT includes 3 distal tibia and 12 caudal vertebrae (samples were derived from 13 unique animals). Dashed line, 95% confidence interval. (d) Proportion of fatty acids with one or more double bonds relative to total lipid in adipocytes from perirenal visceral WAT (vWAT), inguinal scWAT, rMAT from femur/proximal tibia (rMAT T/F), rMAT from lumbar vertebrae (rMAT Vert), cMAT from distal tibia (cMAT tibia) and cMAT from tail vertebrae (cMAT Vert). Representative data presented as mean±s.d. (as presented, biological replicate N=3). Experiment repeated with similar results in three animal cohorts with a total of 44 samples from 13 rats as outlined in Supplementary Data 1. *One-way analysis of variance with Tukey's multiple comparisons test, P<0.05.

Since the human model relies on indirect analysis of intact marrow, we developed a modified collagenase digestion protocol to purify adipocytes from the rat bone marrow for direct lipidomic analyses [19]. Adipocytes from WAT (perirenal, gonadal and inguinal) were used as a control.
The rMAT regions included the femur/proximal tibia and lumbar vertebrae, whereas the cMAT regions included the distal tibia and caudal vertebrae (Fig. 5b). Adipocytes were isolated from a diverse population of rats including (experiment no. 1) 1-year-old female high-capacity runner rats [20], (experiment no. 2) 16-week-old male Sprague–Dawley rats and (experiment no. 3) 8-month-old female Sprague–Dawley rats. After isolating adipocytes, we extracted total lipid with methanol–chloroform and then used gas chromatography (GC) for lipidomic analysis of esterified fatty acids. In the adipocyte, the vast majority of fatty acids are derived from triacylglycerols, with minor contributions from species such as phospholipids. Palmitate, stearate and their unsaturated derivatives were the most common, accounting for >90% of the total lipid. To standardize between experiments, we expressed each fatty-acid subtype as a per cent of the total lipid. The raw data for all experiments are presented in this format as Supplementary Data 1. This standardized data set was used to perform principal component analysis of the 23 fatty-acid subtypes across three independent experiments. In total, 44 unique lipidomic profiles of purified adipocytes from MAT (8 rMAT and 15 cMAT), visceral WAT (5 gonadal and 3 perirenal) and subcutaneous WAT (scWAT; 12 inguinal) were compared (Supplementary Data 1). Despite the diversity in the animal cohorts, all forms of WAT were tightly clustered, while there was a clear separation of cMAT from rMAT and WAT (Fig. 5c). Consistent with the human data, the per cent of unsaturated fatty acids relative to total lipid was highest in the cMAT adipocytes purified from the distal tibia and the tail vertebrae (Fig. 5d). The increased proportion of unsaturated fatty acids in the rat cMAT adipocytes, and their separation from rMAT/WAT adipocytes on the principal component plot, was primarily driven by decreases in palmitate and stearate and corresponding increases in their monounsaturated derivatives palmitoleate and oleate (Supplementary Data 1). This resulted in a robust increase in the monounsaturated-to-saturated ratio for these fatty acids (Fig. 6a,b). This change was greater in cMAT adipocytes from the tail vertebrae when compared with the cMAT from the distal tibia, indicating that the distal tibia may be a region of mixed MAT. Consistent with the increased proportion of the unsaturated fatty acids palmitoleate and oleate, expression of stearoyl-CoA desaturase-1 (Scd1) was elevated in both male and female cMAT adipocytes relative to adipocytes isolated from scWAT (Fig. 6c,d). Elevated expression of desaturases including Fads1 and Fads2 was also noted in both males and females, with inconsistent elevations in Scd2 (males only) and Fads3 (females only; Fig. 6c,d). Expression of mitochondrial glycerol-3-phosphate acyltransferase (Gpam), an enzyme that preferentially incorporates saturated fatty acids during synthesis of glycerolipids, was similar between scWAT and cMAT in both cohorts.

Figure 6: Gene expression of desaturases in isolated adipocytes. (a) Proportion of C16:1n7-palmitoleate relative to C16:0-palmitate. (b) Proportion of C18:1n9-oleate relative to C18:0-stearate. Representative data presented as mean±s.d. (as presented, biological replicate N=3). Repeated with similar results in three animal cohorts with samples from 13 total rats as detailed in Supplementary Data 1. Transcript expression in isolated constitutive MAT (cMAT) and subcutaneous WAT (scWAT) adipocytes normalized to scWAT from (c) 16-week-old male Sprague–Dawley rats (biological replicate N=6 cMAT (two animals pooled per sample) and N=12 scWAT) and (d) 8-month-old female Sprague–Dawley rats (biological replicate N=3 cMAT (two animals pooled per sample) and N=5 scWAT). Presented as mean±s.d. *(a,b) One-way analysis of variance with Tukey's multiple comparisons test, (c,d) two-tailed t-test, P<0.05.
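The normalization and ordination steps described above (each fatty acid as a per cent of total lipid, monounsaturated-to-saturated ratios, then principal component analysis) can be sketched as follows. The paper ran PCA in MetaboAnalyst; this is an illustrative scikit-learn equivalent on a made-up table whose rows are samples and whose columns are fatty-acid species (the real dataset has 44 samples and 23 fatty acids; see Supplementary Data 1).

```python
# Illustrative lipidomic workflow: per cent of total lipid, mono:sat ratios, PCA.
# Toy numbers only; the published values are in Supplementary Data 1.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

raw = pd.DataFrame(
    {"C16:0": [310, 295, 240], "C16:1n7": [45, 50, 90],
     "C18:0": [60, 55, 30], "C18:1n9": [330, 345, 420]},
    index=["scWAT", "rMAT", "cMAT"])              # one toy sample per depot

pct = raw.div(raw.sum(axis=1), axis=0) * 100      # each FA as % of total lipid
ratio_c16 = pct["C16:1n7"] / pct["C16:0"]         # palmitoleate:palmitate (Fig. 6a)
ratio_c18 = pct["C18:1n9"] / pct["C18:0"]         # oleate:stearate (Fig. 6b)

scaled = StandardScaler().fit_transform(pct)      # autoscale before ordination
scores = PCA(n_components=2).fit_transform(scaled)
print(pct.round(1), ratio_c16.round(2), ratio_c18.round(2), scores, sep="\n\n")
```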
Region-specific transcription factor expression
Differentiation of adipocytes from precursor cells is tightly regulated by a defined transcriptional cascade (see ref. 22 for review). The transcription factors CCAAT/enhancer-binding protein (C/EBP)-β and -δ are induced during early adipogenesis. These factors then activate expression of the essential adipogenic transcription factors peroxisome proliferator-activated receptor-γ and C/EBPα (ref. 23). Sterol regulatory element-binding protein-1 (encoded by Srebf1) serves as a transcriptional activator that is required for lipid homeostasis in mature adipocytes. Unexpectedly, in cMAT adipocytes, expression of both Cebpa and Cebpb was elevated relative to rMAT and/or scWAT adipocytes from male and female rats (Fig. 7a–c). Expression of Srebf1 was elevated in cMAT and rMAT adipocytes of males, but not females. In contrast, Pparg was similar between cMAT/rMAT/WAT in males but increased in cMAT relative to scWAT in females. The similar or elevated expression of Pparg in MAT reinforces the notion that these cells are of the adipocyte lineage, but the selective elevation of Cebpa and Cebpb in cMAT adipocytes suggests potential for alternative transcriptional regulation and function in this unique adipocyte population.

Figure 7: Gene expression of transcription factors in isolated adipocytes. Transcript expression in isolated adipocytes from subcutaneous WAT (scWAT), constitutive MAT (cMAT) and/or regulated MAT (rMAT) normalized to scWAT from (a) 16-week-old male Sprague–Dawley rats (biological replicate N=6 cMAT (two animals pooled per sample) and 12 scWAT), (b) 8-month-old female Sprague–Dawley rats (biological replicate N=3 cMAT (two animals pooled per sample) and 5 scWAT), and (c) 16-week-old male Sprague–Dawley rats (biological replicate N=5, four animals pooled per sample). Presented as mean±s.d. *(a,b) Two-tailed t-test, (c) one-way analysis of variance, P<0.05.

Knockout of PTRF inhibits formation of rMAT adipocytes
Patients with CGL lose a majority of their peripheral WAT; however, magnetic resonance imaging (MRI) scans indicate that MAT is preserved in those with mutations in CAV1 (CGL3) and PTRF (CGL4) (reviewed in ref. 2). Caveolin-1 (encoded by CAV1) is a key structural component of caveolae, 50–100 nm invaginations of the plasma membrane that account for up to 50% of the surface of peripheral white adipocytes [24]. PTRF encodes cavin-1, a protein required for stabilization of caveolins and formation of caveolae [25–28]. Caveolae and their associated proteins coordinate many diverse signalling pathways and have been identified as key regulators of insulin sensitivity, lipid trafficking and adipocyte precursor differentiation [29,30]. To explore the preservation of MAT in CGL3 and CGL4, we quantified region-specific changes in MAT of adult male and female Cav1 and Ptrf knockout mice at 16–17 weeks of age.
The metabolic and peripheral adipose tissue phenotypes of these animals have been reported previously [26,31–33]. Consistent with the CGL3 human phenotype [34], Cav1 knockout mice did not lose MAT (Fig. 8a–c and Supplementary Fig. 4), despite a significant decrease in the amount of peripheral WAT (Supplementary Fig. 5A,B). As with MAT, trabecular bone at the proximal tibial metaphysis and cortical bone at the mid-diaphysis remained unchanged in the Cav1 knockout animals (Supplementary Fig. 5C,D).

Figure 8: Differential loss of MAT in mice with knockout of Cav1 or Ptrf. (a) Representative osmium-stained tibiae scanned with μCT based on data in (b) and (c). Marrow fat in dark grey, decalcified bone overlaid in light grey. (b,c) Region-specific quantification of tibial MAT volume by μCT (biological replicate N=6–9 as indicated on the graph). Box plot centre line represents median, box extends from the 25th to 75th percentile, whiskers indicate range. (d) Representative histology of caudal vertebrae based on data in (e), ×10 objective (scale bars, 200 μm). (e) Adipocyte size distribution of the caudal marrow adipocytes as measured by histology (biological replicate N=5). Histogram bin size 250. Presented as mean±s.d. *(b) Non-parametric Mann–Whitney test, (c) two-tailed t-test, (e) two-way analysis of variance with Sidak's multiple comparisons test, P<0.05. KO, knockout; WT, wild type.

In addition to loss of WAT (Supplementary Fig. 6), in male mice knockout of Ptrf caused nearly complete loss of proximal tibial rMAT adipocytes with a relative preservation of cMAT in the distal tibia (Fig. 8a–c). Based on the three-dimensional reconstructions of the tibiae from Ptrf knockout animals, only the most distal portion of the MAT in the tibia was maintained, while there was mixed preservation moving towards the tibia/fibula junction (Fig. 8a). This finding, similar to the lipidomic data in the rats, suggests a possible mixture of rMAT and cMAT adipocytes in the distal tibia. In contrast, the tail vertebrae of male Ptrf knockout mice remained completely filled with MAT (Fig. 8d), and these vertebral cMAT adipocytes were of the same size as those of wild-type animals (Fig. 8e). Except for a 4.6% increase in cortical bone mineral content, trabecular and cortical parameters did not differ between the Ptrf knockout males and their wild-type counterparts (Fig. 9c and Supplementary Fig. 6).

Figure 9: Trabecular morphology versus MAT in the tibia and L4 vertebrae of Ptrf KO mice. (a,b) Representative images of the proximal tibial metaphysis both before decalcification and after osmium staining based on data in (c). Marrow fat is in white. Scale bars, 500 μm. (c) Quantification of trabecular parameters and MAT volume in the proximal tibial metaphysis (biological replicate N=5 (female KO), 6 (female WT), 7 (male KO) and 9 (male WT)). (d) Quantification of trabecular parameters in the vertebral body of L4 (biological replicate N=5 (WT) and 4 (KO)). KO, knockout; WT, wild type. All values represent mean±s.d. *Two-tailed t-test, P<0.05. a, non-parametric Mann–Whitney test, P<0.05.

Similar decreases in rMAT and WAT were observed in the female Ptrf knockout mice (Fig. 9a and Supplementary Figs 4 and 6). Unlike males, the female Ptrf knockout mice had a significant 14.3% increase in trabecular number and a corresponding 8.5% decrease in trabecular spacing (Fig. 9c).
In addition, relative to males, both control and Ptrf knockout females had increased MAT volume in the proximal tibia (Fig. 9) that was inversely correlated with trabecular number at the proximal tibial metaphysis (P=0.021). Interestingly, the trabecular bone phenotype of the females was even more striking in the L4 vertebral body, with a 21.1% increase in trabecular number, 21.5% decrease in spacing, 22.9% increase in bone volume fraction and 21.0% increase in bone mineral content (Fig. 9d). Consistent with previous reports [10], we did not observe MAT adipocytes in the lumbar vertebrae in either the wild-type or knockout females. As with the tibia, the trabecular phenotype of the vertebrae was unaffected by Ptrf knockout in males.

Discussion
Our results demonstrate that there are region-specific differences in the development, regulation, adipocyte size, lipid composition, gene expression and genetic determinants of marrow adipocytes that have implications for understanding the marrow niche and its relationship to skeletal and whole-body metabolism. Localization of osmium-stained adipocytes in three dimensions demonstrated that MAT in mice develops asymmetrically from distal to proximal. A similar pattern of early development occurs in vertebrate species including rats, rabbits and humans. However, the absolute rate of formation decreases with increasing lifespan/size of the animal. For example, the 'adult' distribution of MAT in humans occurs around age 25 years, in rabbits by 6 months and in mice as early as 8 weeks, likely with some relationship to sexual maturation [12,13,35,36]. The amount of MAT that forms during this phase also varies between species; larger animals have more MAT that extends farther into the skeleton than smaller animals (humans > rabbits > rats > mice). In addition, we found that MAT forms in two distinct temporal waves that are spatially separated in mice and correspond histologically to rMAT in red marrow and cMAT in yellow marrow (Fig. 10). Conversely, MAT loss with cold exposure is the opposite of development: the last to form is the first to go. The cMAT in the distal tibia and tail, in particular, is highly resistant to dissolution. While the microenvironment likely plays a major role in these site-specific responses, our lipidomic and gene expression data identify cell-autonomous differences between the rMAT and cMAT adipocytes that might also contribute to their distinct behaviours.

Figure 10: rMAT versus cMAT summary. In the human and mouse tibia, cMAT is present in the distal portion of the bone. The marrow shifts to red towards the proximal tibia, occurring near the tibia/fibula junction in the rodent and in the proximal tibial metaphysis or femur in the human. The red marrow contains rMAT adipocytes. In some cases, especially in larger species such as humans, the histological patterns that correspond to rMAT and cMAT adipocytes may be present in the same region. The bones have been shaded in orange to indicate this possibility. cMAT is the first to develop and histologically appears as sheets of confluent adipocytes that are relatively devoid of haematopoiesis. Isolated cMAT adipocytes form shortly after birth, have an increased proportion of unsaturated fatty acids and are larger in size. rMAT develops throughout life and is histologically defined as single cells interspersed with areas of active haematopoiesis.
Isolated rMAT adipocytes have a lipid saturation profile that is similar to that of WAT adipocytes and are more saturated, and smaller in size, than cMAT. These cells are negatively regulated by 21-day cold exposure. They also fail to form in mice with genetic knockout of Ptrf, but not Cav1. NC, no change. Scale bar, 50 μm.

Tavassoli [11], in 1976, demonstrated the presence of two different types of adipocytes in rabbit bone marrow: those that stain with performic acid Schiff (PFAS) and those that do not. The stain reaction is thought to rely on oxidation of the ethylenic linkages in unsaturated fats to aldehyde and processing with Schiff's reagent to generate a red/purple colour, although this mechanism is controversial [37]. Inspired by Tavassoli's PFAS stain, we found that rMAT and cMAT have distinct lipidomic profiles (Fig. 5 and Supplementary Data 1). In addition, despite the histological similarities between WAT and cMAT, the lipid composition of WAT more closely mirrors that of rMAT, suggesting that lipid metabolism in WAT and rMAT adipocytes may be similar. Coordinate regulation by cold exposure and similarities in Pparg, Cebpa and Cebpb gene expression between rMAT and WAT further support this hypothesis. Of note, the increase in cMAT unsaturation is actually the opposite of what we expected based on the proposed mechanism of the PFAS stain. This is likely due to the historic debate surrounding the stain, which in one paper from 1970 was characterized as 'useless in lipid histochemistry' [37]. Regardless, it led us to uncover a highly conserved difference between rMAT and cMAT in rats that, based on indirect evaluation with 1H-MRS, appears to extend to human MAT. These findings have implications for diseases including osteoporosis, which has been associated with a decrease in MAT unsaturation [38]. A shift in marrow fat composition to higher levels of saturated lipid has also been correlated with fragility fractures in postmenopausal women [39]. For example, palmitate is lipotoxic to osteoblasts and impairs mineralization [40]. As a proportion of total lipid, palmitate is enriched in rMAT relative to cMAT. In contrast, palmitoleate, which is enriched in cMAT, has been identified as a secreted adipose tissue-derived lipid hormone with the capacity to stimulate muscle insulin action and suppress hepatosteatosis [41]. Our current analysis does not discriminate between lipid classes (for example, triacylglycerols versus phospholipids); hence, future work is needed to examine subcellular localization and secretion of fatty acids, in addition to other mediators, by rMAT and cMAT, and to quantify their impact on local and distant tissues.

In the introduction, we highlighted the unresolved controversy that exists regarding the relationship between MAT and bone. It is notable that the reports that correlate MAT accumulation with low bone mineral density, decreased bone formation and bone loss generally analyse rMAT-enriched sites, including the proximal femur, hip and lumbar spine (reviewed in ref. 2). In contrast, studies demonstrating resistance to bone loss at sites of high MAT are all based on cMAT-enriched areas including the distal tibia and tail vertebrae [7–9]. In this manuscript, we explored changes in trabecular and cortical architecture and compared our findings with MAT volume and its three-dimensional distribution.
During development in B6 and C3H mice, we observed polarization of rMAT towards the medial marrow space in the proximal tibia and to the medial/posterior endocortical surface at the mid-tibia (Fig. 2). MAT accumulation from 12 to 56 weeks of age correlated negatively with trabecular number and positively with trabecular thickness in both strains. Conversely, extensive loss of MAT in the proximal tibia in mice undergoing 21-day cold exposure failed to uniformly impact trabecular or cortical parameters (Supplementary Fig. 1). In our genetic models, knockout of Cav1 left both MAT and bone unchanged. In contrast, developmental inhibition of rMAT accumulation in the female Ptrf knockout mice was correlated with increased trabecular number in the proximal tibia. With this phenotype, it is tempting to conclude that rMAT loss is necessary for trabecular gain; however, in this same model, the increase in trabecular number was actually more pronounced in the lumbar vertebrae, a skeletal site in the mouse that has little to no MAT. What then can we conclude about the relationship between MAT and bone? It is certainly of note that developmental polarization of MAT along the medial and posterior surfaces of the cortical bone implies that rMAT may be related to cortical drift patterns during development. Similarly, logic dictates that accumulation of MAT in the proximal tibia must occur at the expense of either haematopoiesis or bone, since the size of the space within the skeleton is finite. It would not be unreasonable to subsequently assume that these components have an inherent ability to regulate one another. What we truly need, however, are animal models in which we can specifically regulate rMAT and cMAT in vivo. Identification of Ptrf knockout as a selective mediator of rMAT loss (Fig. 8) is one step towards generation of a genetic model of rMAT ablation. Future quantification of MAT, expanded beyond conventional methods to include both rMAT and cMAT, in currently available genetic models will undoubtedly reveal additional targets. Although depletion of MAT has been proposed as a strategy to combat osteoporosis [42], the functions of rMAT and cMAT must be clarified before removal of MAT populations is attempted. The highly defined accumulation of cMAT early in vertebrate development and its robust resistance to dissolution imply an important function for this adipose depot, one that may go beyond the skeleton [1]. In contrast, rMAT adipocytes are more closely situated in areas of high bone turnover and are better positioned to actively influence haematopoiesis and/or skeletal remodelling. Although our data provide a working definition of rMAT and cMAT, and highlight the need to explore these cells in more detail (Fig. 10), many questions remain. Our ability to definitively address fundamental differences in marrow adipocytes and their role locally in the skeletal microenvironment, or systemically as a component of whole-body metabolism, depends on future development of targeted animal models and continued clinical investigation.

Methods

Rodents
Where they were utilized, animal procedures were approved by the animal use and care committees at the University of Michigan, Maine Medical Center Research Institute and/or Boston University. Animals were housed at 22 °C on a 12-h light/dark cycle unless otherwise indicated.

Development.
Male C57BL/6J (Jackson Labs, stock: 000664) and C3H/HeJ (Jackson Labs, stock: 000659) mice were euthanized at 1, 4, 12 or 56 weeks of age and tissues were collected for analysis. The 12- and 56-week-old C3H animals are the same as the control groups for the C3H cold exposure experiment outlined below.

Cold exposure. At 8 or 52 weeks of age, 10 male C3H/HeJ (Jackson Labs, stock: 000659) mice were placed individually into pre-cooled cages with bedding, food and water in a room held at 18 °C. Littermate control mice (n=10 per strain) were held in identical conditions at room temperature (~22 °C). After 1 week at 18 °C, the cold room was adjusted to 4 °C and maintained at this temperature for an additional 3 weeks. Control mice were held at 22 °C for a total of 4 weeks (concurrently). Rectal core body temperature of control and cold-exposed mice was monitored daily using a Type T thermocouple rectal probe (RET-3, Physitemp Instruments, Inc., Clifton, NJ, USA) with a MicroTherma 2T hand-held thermometer (ThermoWorks, Inc., Lindon, UT; cat. no. THS-227-193). After 4 weeks, all mice were killed and tissues were collected for analysis. Given the length of the intervention (21 days), we pre-scanned the non-decalcified tibiae to calculate the marrow volume. After decalcification and osmium staining, the bones were re-scanned and the MAT volume was normalized to marrow volume in each region of interest to correct for any changes in the size of the tibiae between groups.

Cav1 and Ptrf knockout mice. Cav1 and Ptrf knockout mice were generated previously [26,31]. Homozygous Cav tm1Mls/J mice with knockout of Cav1 on a mixed background (Jackson Labs, stock: 004585) were crossed with B6129SF2/J controls (Jackson Labs, stock: 101045). The resulting Cav1+/− heterozygotes were crossed and the male homozygous offspring were euthanized for analysis at 16 weeks of age. The homozygous male and female Ptrf−/− mice and their wild-type control littermates were generated from breeding Ptrf+/− heterozygotes and used for the present study at 16–17 weeks of age. The Ptrf−/− mice had previously been backcrossed to C57BL/6J mice for at least nine generations.

Marrow fat quantification by osmium staining and CT
Mouse bones were stained with osmium tetroxide for analysis of marrow fat, with slight modification from ref. 17, as follows. Bones were fixed in 1.5 or 2.0 ml microtubes for 24–48 h in 10% neutral-buffered formalin (VWR, Radnor, PA; cat. no. 16004-128), washed with water and decalcified in 14% EDTA, pH 7.4, for 14 days. After washing again with water, 600 μl Sorensen's phosphate buffer (pH 7.4) was added to one bone (femur or tibia) in a 1.5-ml microtube. (Note: all subsequent steps must be performed in the fume hood.) 200 μl of 4% osmium tetroxide solution (Electron Microscopy Services, Hatfield, PA; cat. no. 19170) was added to each tube to make a 1% solution. Bones were stained in the fume hood for 48 h at room temperature. Osmium solution was carefully removed to a small liquid waste container that had been filled with corn oil to ~25% of its volume. Any used pipet tips were 'rinsed' of active osmium tetroxide by pipetting corn oil. All tips and tubes were discarded as osmium solid waste. Bones were washed, in the same tube, by incubating in 1 ml of Sorensen's buffer for 3 h at room temperature. This was repeated twice and the last wash was left in the hood overnight. This waste was disposed of as indicated above.
Stained bones were then moved to a fresh set of 1.5 ml microtubes containing 1 ml Sorensen's buffer each. The used tubes were discarded as solid osmium waste. At this point, the bones and tubes were removed from the fume hood and used for CT.

MicroCT. Specimens were embedded in 1% agarose and placed in a 19-mm diameter tube. The length of the bone was scanned using a μCT system (μCT100 Scanco Medical, Bassersdorf, Switzerland). Scan settings were as follows: voxel size 12 μm (all except Fig. 1a, 1-week bones at 10 μm), medium resolution, 70 kVp, 114 μA, 0.5 mm AL filter and integration time 500 ms. Density measurements were calibrated to the manufacturer's hydroxyapatite phantom. Analysis was performed using the manufacturer's evaluation software and a threshold of 400 for MAT.

NanoCT. Samples were scanned at 2 μm voxel size, 90 kV, 90 μA and 1,500 ms exposure time with a total scan time of 73 min on a nanotom-s (phoenix|X-ray, GE Measurement & Control; Wunstorf, Germany). DICOM image files were opened in ImageJ [43] for size analysis of individual adipocytes in two dimensions using a 'virtual histology' approach (Supplementary Fig. 1). The area of 300–500 adipocytes was measured per sample [44]. A bin size of 250 was used to generate a size distribution histogram for each adipocyte type. The average adipocyte volume was estimated based on the average adipocyte area and compared with the total MAT volume (as determined by μCT) to determine the number of adipocytes in a region of interest.

Histology
Samples were fixed in 10% neutral-buffered formalin and decalcified in 14% EDTA, pH 7.4, before paraffin embedding and haematoxylin and eosin staining. Where indicated, osmium-stained bones (prepared as detailed above) were submitted and processed in the same way.

Human marrow unsaturation
This study was approved by the Partners Healthcare Institutional Review Board and complied with Health Insurance Portability and Accountability Act guidelines. Written informed consent was obtained from all subjects after the nature of the procedure had been fully explained. We studied five women (mean age: 33±10 years) with a mean body mass index (BMI) of 24.8±10 kg m−2. All subjects underwent proton MRS (1H-MRS) of the proximal femoral metaphysis, the mid-femoral and tibial diaphyses, and the distal tibial metaphysis to determine MAT content and composition using a 3.0-T MR imaging system (Siemens Trio, Siemens Medical Systems, Erlangen, Germany). Single-voxel 1H-MRS data were acquired using a point-resolved spatially localized spectroscopy pulse sequence without water suppression, as previously described [18]. The coefficient of variation for bone marrow fat quantification was 5%. Fitting of all 1H-MRS data was performed using LCModel (version 6.3-0K) as previously described [18]. A customized fitting algorithm for bone marrow analysis provided estimates of total marrow lipid content (lipid peaks at 0.9, 1.3, 1.6, 2.0 and 5.3 p.p.m. combined). The unsaturation index was determined as the ratio between the olefinic resonance at 5.3 p.p.m., an estimate of fatty-acid unsaturation bonds, and total lipid content.
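The unsaturation index defined above reduces to a ratio of fitted peak areas. A minimal sketch, assuming LCModel (or any fitting tool) has already produced amplitude estimates for the five lipid resonances; the values are hypothetical.

```python
# Unsaturation index from 1H-MRS: olefinic peak (5.3 p.p.m.) over total lipid.
# Peak areas are hypothetical fitted amplitudes in arbitrary units.
lipid_peaks = {0.9: 12.0, 1.3: 100.0, 1.6: 8.0, 2.0: 14.0, 5.3: 9.5}

total_lipid = sum(lipid_peaks.values())             # the five peaks combined
unsaturation_index = lipid_peaks[5.3] / total_lipid
print(f"unsaturation index = {unsaturation_index:.3f}")
```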
Adipocyte isolation for lipidomics
Adipocytes were isolated from rat WAT and MAT using a modified collagenase digestion protocol as described below [19]. Older female rats were obtained from the University of Michigan rat recycling programme, including 1-year-old female high-capacity runner rats (N=3; ref. 20) and ~8-month-old female Sprague–Dawley rats (N=5). Sixteen-week-old male Sprague–Dawley rats were obtained from Charles River Laboratories (strain code: 400). Rats were anaesthetized with isoflurane in a drop jar and euthanized by decapitation. The processing for each sample type is outlined in detail below. All protocols were performed simultaneously and the adipocytes from each tissue underwent methanol–chloroform extraction for total lipid at the same time (±5 min).

White adipose tissue. Adipose tissues were removed and placed in warm Krebs-Ringer HEPES (KRH) buffer, pH 7.4 (10 mM HEPES, 120 mM NaCl, 1.2 mM KH2PO4, 1.2 mM MgSO4, 2.5 mM CaCl2, 15 mM NaHCO3, 4.8 mM KCl, 1.0 g l−1 D-glucose and 500 nmol adenosine), which had been pre-equilibrated overnight in an incubator at 37 °C, 5% CO2 and re-adjusted to pH 7.4. Washed adipose tissue pieces totalling ~1 g were minced in 10 ml KRH containing 1 mg ml−1 collagenase type I (Worthington Biochemical Corp., Lakewood, NJ; cat. no. 4197) and 3% fatty-acid-free BSA (Calbiochem; cat. no. 126575) in a 50-ml conical tube and placed in a shaking water bath at 100 r.p.m., 37 °C for 45–60 min. Digested tissue was pulled gently through a 10-ml polypropylene Luer-lock syringe (no needle) three times to complete disruption and then filtered through a 100-μm cell strainer (Fisherbrand, Pittsburgh, PA; cat. no. 22363549) into a fresh 50-ml polypropylene conical tube.

Tibial cMAT. Both tibiae were removed and cleaned of muscle and tendon using gauze. A rotary power tool (Dremel, Robert Bosch Tool Co, Addison, IL) with a Dremel 545 Diamond cutting wheel was used to horizontally bisect the tibia at the base of the tibia/fibula junction. The distal portion was inverted into a 1.5-ml polypropylene microtube containing a hollow spacer and centrifuged at 3,000 g to extrude the marrow. The bone was removed and discarded, and the distal tibial marrow was then processed in the same manner as the WAT, described above.

Femur/tibia rMAT. Both femurs were isolated and cleaned, and the ends were removed with the rotary tool to expose the marrow cavity. The femurs and the proximal tibiae were inverted into 1.5 ml microtubes and centrifuged at 3,000 g to separate the marrow. The bones were discarded. Gentle pipetting was used to combine and resuspend the proximal marrow in KRH containing 1 mg ml−1 collagenase and 3% BSA in a 50-ml conical tube. The suspension was then incubated in a shaking water bath at 100 r.p.m., 37 °C for 15–20 min to liberate the rMAT adipocytes.

Vertebral cMAT. The most proximal 10 tail vertebrae were separated and some of the surrounding muscle and tendon were removed with gauze. The vertebrae were added to a 50-ml conical tube with 2× the volume of KRH + 1 mg ml−1 collagenase and 3% BSA. The tube was then incubated in a shaking water bath at 100 r.p.m., 37 °C for 20 min, with vigorous shaking by hand every 5 min to help dislodge remaining tissue on the outside of the vertebrae. After 20 min, the vertebrae solution was poured into a 10-cm dish. The vertebrae were quickly cleaned with gauze to remove any remaining soft tissue. Each vertebra was then bisected longitudinally with a diagonal cutter and put into a fresh 50-ml conical tube containing 2× the volume of KRH/collagenase/BSA solution. The bisected vertebrae were incubated in a shaking water bath at 100 r.p.m., 37 °C for an additional 20–30 min to liberate the cMAT adipocytes.

Vertebral rMAT. Eight lumbar vertebrae were isolated and cleaned with gauze. The processing then continued as described for the vertebral cMAT above.
Final processing for all adipocyte types. After filtration, the conical tubes were centrifuged at 400 g for 1 min to pellet the stromal vascular fraction and float the adipocytes. The pellet and the majority of the infranatant were carefully removed with a glass pipet and suction bulb. A plastic 1,000 μl pipet tip was used to resuspend the adipocytes and transfer 300 μl of liquid containing 0.1–1.0 mg of cells to a 24-well plate size transwell insert with 8 μm pores (Corning Inc., Corning, NY; cat. no. 3422). Approximately 90% of the liquid was removed by pressing the transwell membrane on a piece of dry paper towel. The cells in the insert were then washed twice in this manner with fresh KRH (no collagenase, no BSA). After the final wash and liquid depletion, the cells in the insert were collected in 300 μl of water and transferred immediately to a borosilicate glass tube for lipid extraction as described below.

Lipidomic analyses of rat adipocytes
Lipid extraction. Lipids from the adipocyte samples were extracted following essentially the Bligh and Dyer [45] method of solvent partition. A typical extraction consists of suspending the cells in a borosilicate glass tube in 0.3 ml of water followed by adding 1.125 ml of a mixture of chloroform–methanol (1:2). The mixture was then vortexed to disrupt the cells. The samples were further treated with 0.375 ml each of chloroform and NaCl (0.9%) solution, followed by vortexing and centrifugation at 4 °C, 6,500 g, for 7 min. The lower organic (chloroform) layer containing the total lipids was separated out and saved at −20 °C for further use.

Preparation of methyl esters with boron trifluoride–methanol and purification. The fatty-acid components of the lipids were derivatized into their methyl esters via trans-esterification with boron trifluoride–methanol [46] with slight modification as follows. The solvents of the above lipid extract were removed under nitrogen. To the dry residue, 2 ml of boron trifluoride–methanol (14% solution) and 10 μl of 4 mM heptadecanoic acid (C17) as an internal standard were added, and the tubes containing the mixture were closed under nitrogen and incubated at 68 °C for 3–3.5 h. The methyl esters were extracted by adding 2 ml of hexane and 1 ml of water, mixing, followed by centrifugation. The hexane layer containing the methyl esters was transferred into a separate tube. The solvent was then removed under nitrogen, the methyl esters were re-dissolved into a small volume of hexane and purified by thin-layer chromatography (TLC) using n-hexane–diethyl ether–acetic acid (50:50:2, v/v) as the developing solvents [47], applying an authentic standard fatty-acid methyl ester (FAME) side by side on the TLC plate. After development, the plates were dried and sprayed with primulin [48]. The products were identified with respect to the retention factor of the standard (Rf=0.67). The methyl esters were extracted from the TLC powder with diethyl ether, the volumes were concentrated under nitrogen and re-dissolved in 100 μl of hexane, and the fatty-acid compositions of the lipids were analysed by GC as follows.

GC of FAME. Analysis of FAMEs was performed with 1 μl of sample injection, by GC on an Agilent model 6890N instrument equipped with a flame ionization detector, an autosampler and ChemStation software for data analysis. The GC column used was an Agilent HP 88, 30 m, 0.25 mm I.D. and film thickness 0.20 μm.
Hydrogen was used as the carrier gas as well as for the flame ionization detector, and nitrogen was used as a makeup gas. Analyses were carried out with temperature programming from 125 to 220 °C. The fatty-acid components in unknown samples were identified with respect to the retention times of standard methyl ester mixtures run side by side. The fatty-acid components were quantified with respect to the known amount of C17 internal standard added and the calibration ratio derived from each fatty acid of a standard methyl esters mixture and the methyl heptadecanoate (C17) internal standard.
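The internal-standard arithmetic described above might look like the sketch below: each analyte peak area is converted to mass using the known amount of C17 standard and a per-fatty-acid calibration ratio derived from a standard FAME mixture, and the result is expressed as a per cent of total lipid. All numbers are placeholders.

```python
# Sketch of GC-FID quantification against the C17 (methyl heptadecanoate)
# internal standard. Peak areas and calibration ratios are placeholders.
c17_amount_ug = 10.0        # known mass of internal standard added
c17_peak_area = 5000.0      # measured C17 peak area

calibration = {"C16:0": 0.98, "C16:1n7": 1.02, "C18:0": 0.95, "C18:1n9": 1.01}
peak_areas = {"C16:0": 42000.0, "C16:1n7": 6500.0,
              "C18:0": 9000.0, "C18:1n9": 47000.0}

amounts = {fa: (area / c17_peak_area) * c17_amount_ug * calibration[fa]
           for fa, area in peak_areas.items()}
total = sum(amounts.values())
percent_of_total = {fa: round(100.0 * m / total, 1) for fa, m in amounts.items()}
print(percent_of_total)   # each fatty acid as a per cent of total lipid
```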
Adipocyte isolation for quantitative PCR
Adipocytes were isolated from two cohorts of animals, 16-week-old male Sprague–Dawley rats and ~8-month-old female Sprague–Dawley rats, as described above. Adipocytes, including rMAT from the proximal tibia and femur, were then isolated from a third cohort of 16-week-old male Sprague–Dawley rats by adding 50 U ml−1 heparin to the collagenase solution. Adipocyte preparations were lysed and processed using Stat60 reagent (Amsbio, Cambridge, MA, USA) to isolate total RNA. Pelleted RNA was resuspended in water and 100 μg of total RNA was reverse-transcribed to cDNA using TaqMan RT reagents (Applied Biosystems, Carlsbad, CA, USA). Quantitative PCR was performed using qPCRBIO SyGreen 2× mix, Hi-Rox, on an Applied Biosystems real-time PCR detection system (Applied Biosystems). Gene expression was calculated based on a cDNA standard curve within each plate and normalized to the expression of TATA-binding protein (TBP) messenger RNA. Primer sequences are presented in Supplementary Table 1.

Statistics
GraphPad Prism software was used to perform statistical tests. Tests including a two-tailed, homoscedastic t-test, a non-parametric Mann–Whitney test, two-way analysis of variance with Sidak's multiple comparisons test and one-way analysis of variance with Tukey's multiple comparisons test were applied as indicated in the figure legends. Principal components analysis was performed using MetaboAnalyst [21]. When possible, sample size was determined based on the effect size of preliminary data, and data analysis was performed by an investigator who was blinded to the sample groups.

Additional information
How to cite this article: Scheller, E. L. et al. Region-specific variation in the properties of skeletal adipocytes reveals regulated and constitutive marrow adipose tissues. Nat. Commun. 6:7808 doi: 10.1038/ncomms8808 (2015).

Change history: 08 December 2016. A correction has been published and is appended to both the HTML and PDF versions of this paper. The error has not been fixed in the paper.
Micro-map of hippocampus lends big hand to brain research

A new detailed map of the hippocampal region of the brain, compiled by researchers at the Montreal Neurological Institute and Hospital-The Neuro at McGill University, is helping the scientific community accelerate research and develop better treatments for patients suffering from epilepsy and other neurological and psychiatric disorders. The team of researchers, led by Dr. Neda Bernasconi, a neuroscientist specializing in the neuroimaging of epilepsy and co-founder of the Neuroimaging of Epilepsy Laboratory (NOEL) at The Neuro, set out to build and share a detailed model of the substructures making up one of the key centres of the brain involved in epilepsy: the hippocampus. The goal of their project, published on November 10 in Scientific Data, is to improve the tools available to researchers and clinicians working in the field around the globe. Epilepsy is a neurological disorder characterized by a sudden, brief change in the brain, expressed as a seizure. According to Epilepsy Canada, approximately 1% of Canadians suffer from the condition, and more than 30% of patients with epilepsy do not respond to anti-epileptic drugs. For these individuals, the surgical removal of the brain tissue causing seizures is the only known effective treatment for controlling the condition and improving quality of life. In order to compile this hippocampal atlas, the researchers used MRI imagery from a sample of 25 healthy individuals. They then used their expertise in brain anatomy to label all the substructures composing the region, providing a model of an average, healthy hippocampus. The end result is analogous to a Google Street View of this particular part of the brain. With this tool, researchers will be better able to assess the pathology of their patients by comparing their data to the atlas, and will more clearly be able to locate the areas in need of surgical intervention.

A tool for brain disease experts of all levels
"Our primary purpose was epilepsy. We wanted to be able to detect and identify different substructures in the hippocampus to enable us to be a lot more precise in our diagnosis and to pinpoint the affected region to better target treatments," said Dr. Bernasconi. "With this new submillimetric dataset, made available through open science, we are not just sharing MRI images, we are also transferring anatomical knowledge and providing a statistical map that can be used by researchers and clinicians of different levels of expertise anywhere in the world." These tools hold promising therapeutic implications for epilepsy, but also for other neurological and psychiatric disorders such as Alzheimer's disease, schizophrenia and depression. Crucially, the atlas provides researchers with a non-invasive way to assess the impact of therapies targeting this region of the brain and thus to develop better treatments to improve the quality of life for their patients.

Summary: Researchers at the Montreal Neurological Institute and Hospital-The Neuro at McGill University have created a detailed map of the hippocampal region of the brain, which is helping to accelerate research and develop better treatments for patients with epilepsy and other neurological and psychiatric disorders. The map, compiled by a team led by Dr. Neda Bernasconi, is a detailed model of the substructures making up the hippocampus, a key center of the brain involved in epilepsy.
The atlas is based on MRI imagery from 25 healthy individuals and provides a statistical map that can be used by researchers and clinicians of all levels to diagnose and treat patients. The tool holds promising therapeutic implications for epilepsy, as well as other disorders such as Alzheimer's disease, schizophrenia, and depression, and provides a non-invasive way to assess the impact of therapies targeting this region of the brain. | Abstract The hippocampus is composed of distinct anatomical subregions that participate in multiple cognitive processes and are differentially affected in prevalent neurological and psychiatric conditions. Advances in high-field MRI allow for the non-invasive identification of hippocampal substructure. These approaches, however, demand time-consuming manual segmentation that relies heavily on anatomical expertise. Here, we share manual labels and associated high-resolution MRI data (MNI-HISUB25; submillimetric T1- and T2-weighted images, detailed sequence information, and stereotaxic probabilistic anatomical maps) based on 25 healthy subjects. Data were acquired on a widely available 3 Tesla MRI system using a 32-channel phased-array head coil. The protocol divided the hippocampal formation into three subregions: subicular complex, merged Cornu Ammonis 1, 2 and 3 (CA1-3) subfields, and CA4-dentate gyrus (CA4-DG). Segmentation was guided by consistent intensity and morphology characteristics of the densely myelinated molecular layer together with few geometry-based boundaries flexible to overall mesiotemporal anatomy, and achieved excellent intra-/inter-rater reliability (Dice index ≥90/87%). The dataset can inform neuroimaging assessments of the mesiotemporal lobe and help to develop segmentation algorithms relevant for basic and clinical neurosciences. Design Type(s): repeated measure design • digital curation. Measurement Type(s): nuclear magnetic resonance assay. Technology Type(s): MRI Scanner. Sample Characteristic(s): Homo sapiens • hippocampal formation. A machine-accessible metadata file describing the reported data is available (ISA-Tab format). Background & Summary The hippocampus has been a focus of neuroscience research for decades. Highly segregated connectional properties have promoted its use as a model system. The hippocampus plays an important role in multiple cognitive processes, particularly declarative memory 1 , 2 ; its structural compromise is a hallmark of prevalent neurological and psychiatric disorders, such as temporal lobe epilepsy 3 , Alzheimer's disease 4 , 5 , depression 6 , and schizophrenia 7 . Prior to the advent of sophisticated histological staining techniques 8 , the hippocampal formation was described as a single entity despite its complex histo-morphology. Since the description by Ramon y Cajal 9 , several histological subdivisions have been proposed 10 – 12 . Similarly, neuroimaging studies have generally considered the hippocampus as a single structure, constrained by limited spatial resolution 13 . Developments in high-field MRI at 3 Tesla and beyond, together with the use of phased-array head coils, offer new opportunities to appraise its internal structure by unveiling strata rich in white matter and by improving identification of the hippocampal sulcus, which separates Cornu Ammonis (CA) and subiculum from the dentate gyrus (DG). Paralleling advances in hardware, a number of studies have provided MRI-based guidelines to manually segment hippocampal subfields 14 – 23 .
While substantial progress has been made, challenges remain, particularly when attempting to separate individual CA subfields from one another, which compromises reliability within and across analysts. From a practical perspective, manual segmentations require anatomical expertise and are often prohibitively time-consuming. Here, we share a dataset containing manual segmentations of hippocampal subfields together with submillimetric multi-spectral images in 25 healthy individuals. To facilitate local implementation and independent verification, we share detailed MR sequence information as well; importantly, all data were acquired in a clinically feasible scan time on a widely available 3 Tesla MRI system. Opting for high reliability, segmentations were based on a protocol that divided the hippocampal formation into consistently identifiable subregions, guided by intensity and morphology of the densely myelinated molecular layer, together with few geometry-based boundaries flexible to overall mesiotemporal anatomy. Specifically, we combined presubiculum, parasubiculum, and subiculum proper into a single label (subiculum), joined CA1, 2, and 3 (CA1-3), and merged CA4 with the DG (CA4-DG). While segmentation relied primarily on T1-weighted (T1w) data, T2-weighted (T2w) images offered additional guidance. We provide the full set of multispectral images in high-resolution native and stereotaxic (MNI152) space and the manual labels, together with a probabilistic atlas that can inform functional and structural imaging assessments of the hippocampal formation. Moreover, our datasets can be used to develop new protocols, validate existing ones and design automated algorithms relevant for basic as well as clinical neurosciences. Methods Participants We studied 25 healthy individuals (12 males; 21–53 years, mean±s.d. age=31.2±7.5 years; Table 1 ), recruited through advertisement. All participants had normal or corrected-to-normal vision; none of them suffered from neurological, psychiatric, or somatic diseases. The Ethics Committee of the Montreal Neurological Institute and Hospital approved the study. All participants gave written informed consent prior to scanning, in accordance with the standards of the Declaration of Helsinki, and received monetary compensation. Table 1 Samples, subjects and data outputs. Full size table Scan parameters MRI data were acquired on a 3 Tesla Siemens TimTrio scanner using a 32-channel head coil. We obtained two sets of T1w images: a 3D magnetization-prepared rapid-acquisition gradient echo (MPRAGE) with millimetric resolution (repetition time (TR)=2,300 ms; echo time (TE)=2.98 ms; inversion time (TI)=900 ms; flip angle=9°; matrix size=256×256; field-of-view (FOV)=256×256 mm²; 176 sagittal slices with 1 mm slice thickness resulting in 1×1×1 mm³ voxels; iPAT=2, acquisition time=5.30 min), and a submillimetric 3D MPRAGE (TR=3,000 ms; TE=4.32 ms; TI=1,500 ms; flip angle=7°; matrix size=336×384; FOV=201×229 mm²; 240 axial slices with 0.6 mm slice thickness resulting in 0.6×0.6×0.6 mm³ voxels; acquisition time=16.48 min; to increase the signal-to-noise ratio, two identical scans were acquired, motion corrected, and averaged into a single volume).
T2w images were obtained using a 2D turbo spin-echo sequence (TR=10,810 ms; TE=81 ms; flip angle=119°; matrix size=512×512; FOV=203×203 mm², 60 coronal slices angled perpendicular to the hippocampal long axis, slice thickness=2 mm, resulting in 0.4×0.4×2.0 mm³ voxels; acquisition time=5.47 min). Pre-processing MRI data files were converted from DICOM to MINC (*.mnc) format using dcm2mnc with DICOM header anonymization. Images underwent automated correction for intensity non-uniformity and intensity standardization 24 . Millimetric and submillimetric T1w MRI volumes were linearly registered to the high-resolution MNI-ICBM152 template 25 , 26 . T2w images were linearly registered to the millimetric T1w MRI in native space; the resulting transformation matrix was concatenated with the matrix that mapped the millimetric T1w image to the MNI space, thereby linearly mapping T2w images to this template. During the final registration of submillimetric T1w and T2w data to MNI space, images were resampled to a resolution of 0.4×0.4×0.4 mm³, yielding a voxel volume of 0.064 mm³. To reduce interpolation artifacts given the anisotropic resolution of the T2w data, we applied a non-local up-sampling method that recovers high-frequency information using a data-adaptive patch-based reconstruction together with a subsampling coherence constraint 27 . MNI-space structural scans were subsequently anonymized by zeroing out the voxels in the vicinity of the facial surface, teeth, and auricles following a previously described procedure 28 . For data sharing, images were converted to NIfTI (*.nii) format using mnc2nii. Please see Fig. 1 for a schematic overview of the preprocessing steps and data quality. Figure 1: Dataset: schematic illustration of image acquisition, native data, image processing and final processed data. Full size image Protocol description A single rater (JKY), blinded to case identities, carried out all segmentations using a 3D viewer ( ). Subfield segmentation took approximately 16 h per individual (8 h per hemisphere). Boundaries were based on anatomical descriptions of the hippocampus by Duvernoy 29 and Insausti 30 . As spatial relationships between subfields vary along the hippocampal long axis, landmarks are separately described for the hippocampal head ( Fig. 2a–e ), body ( Fig. 2f ), and tail ( Fig. 2g–j ). These segments were defined as in our previous protocol 31 . Figure 2: Anatomical boundaries of hippocampal subfields on T1- and T2-weighted MRI. Sections displaying critical landmarks are shown. ( a , j ) are the most rostral and caudal coronal sections. ( a ) The rostral-most tip of the hippocampus is composed of the subiculum; at this level, the alveus surrounds the subiculum, separating it from the overlying amygdala (AM). ( b ) When CA1 first becomes visible, it runs parallel to the subiculum; the structures are separated by the subicular molecular layer (arrow). ( c ) Vertical digitations of CA1-3 (arrowheads point to cavities within the hippocampal sulcus; x indicates the ambient cistern); the supero-lateral subicular interface with CA1 is drawn along a line following the hippocampal sulcus, directed towards the fundus of the collateral sulcus (asterisk). ( d ) The rostral-most portion of CA4-DG is set at the section where the medial portion of the DG, the margo denticulatus, becomes visible (arrowhead). ( e ) Junction between head and body, at the level of the uncal apex (asterisk). ( f ) Hippocampal body; the arrow points to the molecular layer of the subiculum.
( g ) Rostral portion of hippocampal tail: the crus fornix is fully visible and well demarcated from the thalamus (Th). ( h ) The caudal slice of the subiculum is set to the posterior-most section on which the thalamus can be identified. ( i ) Middle segment of the tail. The subiculum is replaced by CA1-3, at the level at which the crus fornix fuses with the splenium (Sp) of the corpus callosum. ( j ) Terminal segment of the tail. ( k) Sagittal hippocampal section displaying planes of the coronal cuts. ( l ) 3D surface rendering of hippocampal subfields with a coronal cut at the level of the body. On coronal sections, the orientation of the hippocampal body varies across individuals, modifying the spatial relationships between subiculum and CA1. In d , the hippocampus is oriented clockwise. In e , it is oriented counter-clockwise and in f it has a horizontal position. The red line follows the slope of the superior border of the subiculum, the solid white line represents the horizontal axis, and the dashed white line is placed at the boundary between subiculum and CA1. Full size image Segmentations were primarily performed on coronal T1w images, with cross-referencing to sagittal/axial views. T2w data eased the identification of the densely myelinated and thick molecular layer of the subiculum (forming its superior border). This layer is hyperintense on T1w and hypointense on T2w images ( Fig. 2b–i, k ); it is contiguous with, but distinct from the thinner molecular layer of CA1 ( ref. 30 ). The second landmark is the molecular layer of the DG and that of CA fused across the vestigial hippocampal sulcus; this ensemble is visible as a T1w-hyperintense/T2w-hypointense band ( Fig. 2c–i ). The molecular layers, along with residual vascular cavities that follow the sulcal route, consistently appear on T2w images and separate the DG from the subiculum (inferiorly and medially) and the CA (inferiorly, laterally, and superiorly). We included alveus and fimbria in the CA1-3 label. a) Hippocampal head The hippocampal head includes the subiculum, CA1-3, and small portions of the DG. Its rostral-most section is composed of the subiculum only 30 ( Fig. 2a ). Here, the alveus surrounds the subiculum, separating it from the overlying amygdala; cross-referencing to the sagittal view confirmed this boundary. The inferior subicular boundary is formed by parahippocampal white matter running along the entire rostro-caudal extent of the hippocampus. Perforant projections from the entorhinal cortex to the subiculum occasionally blurred this boundary; in this case, we identified the subiculum by cross-referencing to axial/sagittal views. As the exact boundary between subiculum and infero-medial entorhinal cortex cannot be visualized on MRI, it was defined by extending a straight line along the gray-white matter border at the crown of the parahippocampal gyrus until it reached the cerebro-spinal fluid in the ambient cistern 32 . When CA1 first becomes visible, it runs parallel to the subiculum; for a few slices, the molecular layer of the subiculum separates both structures, with CA1-3 on the top ( Fig. 2b ). More posteriorly, given the overlap (rather than sharp transition) between the pyramidal layers of CA1 and subiculum 30 , we drew a line along the hippocampal sulcus pointing towards the fundus of the collateral sulcus ( Fig. 2b,c ). This often-oblique line has been previously used to describe this boundary 19 . The hippocampal head exhibits 3–4 digitations before turning medially to form the posterior uncus. 
Each digitation encapsulates an extension of the DG. At the level of the head, however, the DG molecular layer that would have allowed for its identification cannot be visualized. For consistency, we merged CA and DG at this level ( Fig. 2c ). We could reliably segment CA4-DG at the junction of head and body, where the medial surface of the DG (known as margo denticulatus) becomes visible ( Fig. 2d ). b) Hippocampal body Head and body interface at the caudal end of the uncus 31 ( Fig. 2e,f ). Here, the margo denticulatus of the DG has a characteristic toothed appearance and is separated from the overhanging fimbria by the fimbriodentate sulcus. Coronally, the orientation of the hippocampal body varies along its rostro-caudal direction both across and within individuals. The term malrotation has been coined to describe this abnormal shifting/rotation of the long hippocampal axis relative to the horizontal plane 33 , 34 , which likely affects the relative boundary between subiculum and CA1. To determine this border, we adapted our guidelines based on the position of the hippocampus on coronal slices: (1) if the left hippocampus was oriented counter-clockwise (clockwise for the right hippocampus), the boundary was defined as the extension of the line corresponding to the superior subicular border ( Fig. 2e ); (2) if the hippocampus was horizontally positioned, the border was defined as a line drawn from the lateral-most point of the subicular molecular layer at a 45-degree angle until it reached the underlying white matter ( Fig. 2f ); (3) if the left hippocampus was oriented clockwise (counter-clockwise for the right), the border followed a line drawn from the lateral-most point of the subicular molecular layer towards the fundus of the collateral sulcus ( Fig. 2d ). Inferior and medial boundaries of the subiculum remained the same as in the head. CA and DG form two U-shaped interlocking laminae, one fitting into the other and separated by the hippocampal sulcus. For consistency, voxels corresponding to the fused molecular layers of CA1-3 and DG were assigned to CA4-DG. As the CA3-CA4 boundary cannot be resolved on MRI, the superior border of CA4-DG was drawn as the horizontal continuation of the hippocampal sulcus, from its most medially visible point towards the fimbriodentate sulcus. c) Hippocampal tail The junction between body and tail was set as the rostral-most slice at which the crus fornix becomes fully visible ( Fig. 2g ) 31 . In the initial segment of the tail, the CA1-subiculum boundary was determined to be the infero-lateral extension of the superior subicular border ( Fig. 2g,h ). Inferior and medial borders of the subiculum were defined as in the body. In the initial portion of the tail, CA1 is deeply located, hidden by the subiculum; more posteriorly, it appears at the surface of the parahippocampal gyrus, progressively replacing the subiculum. The exact posterior subicular border is not visible on MRI: we consistently chose it to be the posterior-most coronal slice on which the thalamus could be seen ( Fig. 2h,i ), verified on sagittal view. We excluded the isthmus of the cingulate gyrus, which replaces the subiculum in the middle and terminal segments of the tail, by omitting grey matter inferior to the hippocampal sulcus, best visualized sagittally. The hippocampal sulcus separates the DG from the subiculum in the initial segment, and from CA1-3 in the initial and middle segments.
Furthermore, the fused molecular layers of CA and DG allowed us to visualize the caudal border of the DG on the sagittal view. The posterior hippocampal end belongs to CA1-3 ( Fig. 2j ) and faces the cerebrospinal fluid of the lateral ventricle medially and of the atrium laterally. This boundary was best seen sagittally ( Fig. 2k ). While fimbria and alveus were included in CA1-3, we excluded the crus fornix ( Fig. 2g ). The latter joins the splenium of the corpus callosum. Code availability All MRI preprocessing employed standard routines (non-uniformity correction, intensity normalization, image registration). We used minc tools that are freely available on github ( ). Similar processing can also be achieved using tools provided by other freely available packages, such as FreeSurfer ( ) or FSL ( ). The patch-based up-sampling technique for T2w-images is available on P. Coupé's website ( ). Defacing was based on publicly available code ( ). Data Records The submillimetric 3 Tesla dataset is highly suitable for the development and cross-validation of future manual or automatic segmentation protocols. MRI data and subfield segmentations of all participants, detailed scan parameters, as well as stereotaxic probabilistic maps are available on Dryad ( Data Citation 1 ) and NITRC ( Data Citation 2 ). A README file with a detailed description of the content of all downloads is available there as well. MRI data files were converted from DICOM to MINC format (using dcm2mnc) before processing, and to NIfTI (using mnc2nii) after processing. For every subject, high-resolution T1w and T2w data are available in 0.4 mm isotropic MNI152 space as well as in their native spaces. For registration purposes, the 1×1×1 mm³ T1w data is also provided in native and stereotaxic space. Labels in NIfTI format of the subiculum, CA1-3 and CA4-DG are provided in the high-resolution MNI152 space. We furthermore provide probabilistic anatomical maps of each subfield in 1×1×1 mm³ MNI152 space. To anonymize data, centre-specific study and participant codes have been removed using an automated procedure. MRI data have been de-faced. All participants were given sequential integer IDs with an 'S' prefix. Technical Validation Contrast-to-noise ratio To obtain a quantitative index of MRI data quality, we estimated the Contrast-to-Noise Ratio (CNR), similar to the approach carried out in a recently published study 21 . In short, an eroded mask of CA1-3 was compared with an equivalently sized mask of the temporal lobe white matter inferior to it. The CNR was estimated using the following formula: CNR = (mean(WM) − mean(GM)) / √(var(WM) + var(GM)), where mean(·) and var(·) denote the mean and variance of the intensities in the WM and GM masks. We calculated the CNR for each subject in native T1w and T2w space, as well as in the MNI space on which segmentations were performed. For native T1w and T2w data, mean±s.d. (range) CNR estimates across the sample were: 3.04±0.23 (2.73–3.48) and 4.42±0.64 (3.29–5.83). For supersampled and MNI space data, corresponding values were 4.74±0.86 (3.27–7.73) and 4.53±0.59 (3.63–5.71). Please see Table 2 for a subject-by-subject listing. Table 2 Contrast-to-noise estimates. Full size table Intra- and inter-rater reliability JKY segmented subfields of 10 hippocampi (5 left, 5 right) from 10 different subjects twice, 6 months apart.
We assessed inter-rater reliability comparing subfield delineations of 10 hippocampi segmented by JKY and another observer (KL), blinded to each other's segmentation. Reliability was quantified using Dice overlap indices between two labels 35 , D = 2×|M₁ ∩ M₂| / (|M₁| + |M₂|) × 100%, where M₁ is the first label and M₂ the second; M₁ ∩ M₂ is their intersection and |·| is the volume operator. We also calculated intra-class correlations (ICC). The Dice index quantifies the overlap of two labels geometrically, whereas the ICC measures statistical similarity. To approximate the actual distribution of reliability values, we employed 1,000 bootstrap-based subsamplings and computed 95% confidence intervals. Table 3 displays mean±s.d. as well as bootstrap confidence intervals of Dice indices for individual subfields. Overall, indices were ≥90% and ≥87% for intra- and inter-rater reliability, respectively. The ICC ranged from 0.91 to 0.96 within and from 0.73 to 0.91 between raters. Table 3 Intra- and inter-rater reliability assessment. Full size table Probabilistic anatomical maps For each MNI152-space subfield label, we generated statistical anatomical maps that outline the probability of subfield location across participants ( Fig. 3 ). Figure 3: Statistical probabilistic atlas of hippocampal subfields overlaid on the MNI152 template. Full size image Usage Notes The procedures we employed in this study resulted in a high-resolution 3 Tesla dataset containing submillimetric MRI data in native and MNI152 space, together with manual labels of three hippocampal subfields in MNI152 space. Data are shared in documented standard formats, such as NIfTI or plain text files, to enable further processing in arbitrary analysis environments with no imposed dependencies on proprietary tools. Exam card printouts from the scanner are also available for local implementation of the image acquisition protocol. All processing applied to the released data was performed with openly accessible software on standard computer workstations. Data are available on a curated open access repository ( Data Citation 1 ) and on NITRC ( Data Citation 2 ). Additional Information How to cite this article: Kulaga-Yoskovitz, J. et al. Multi-contrast submillimetric 3 Tesla hippocampal subfield segmentation protocol and dataset. Sci. Data 2:150059 doi: 10.1038/sdata.2015.59 (2015).
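As a practical footnote to the Technical Validation above, the following sketch implements the two quality metrics (CNR and Dice) as defined in the text. It is a minimal illustration only, not part of the released code: the array names are my own, synthetic data stand in for the released volumes, and loading the actual NIfTI files (for example, with nibabel) is not shown.

```python
import numpy as np

def cnr(image, wm_mask, gm_mask):
    # Contrast-to-noise ratio, as defined in Technical Validation:
    # CNR = (mean(WM) - mean(GM)) / sqrt(var(WM) + var(GM))
    wm = image[wm_mask]   # intensities inside the white-matter mask
    gm = image[gm_mask]   # intensities inside the grey-matter mask
    return (wm.mean() - gm.mean()) / np.sqrt(wm.var() + gm.var())

def dice(m1, m2):
    # Dice overlap between two binary labels:
    # D = 2 |M1 ∩ M2| / (|M1| + |M2|) × 100%
    return 2.0 * np.logical_and(m1, m2).sum() / (m1.sum() + m2.sum()) * 100.0

# Toy example with synthetic volumes.
rng = np.random.default_rng(42)
img = rng.normal(loc=100, scale=10, size=(64, 64, 64))
wm = np.zeros(img.shape, dtype=bool); wm[10:20, 10:20, 10:20] = True
gm = np.zeros(img.shape, dtype=bool); gm[30:40, 30:40, 30:40] = True
img[wm] += 40  # give WM a brighter mean so the toy CNR is positive
print(f"CNR  = {cnr(img, wm, gm):.2f}")
print(f"Dice = {dice(wm, wm):.1f}%")  # identical labels -> 100%
```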
10.1186/s13059-015-0607-3 | Some genes 'foreign' in origin and not from our ancestors | Many animals, including humans, acquired essential 'foreign' genes from microorganisms co-habiting their environment in ancient times, according to research published in the open access journal Genome Biology. The study challenges conventional views that animal evolution relies solely on genes passed down through ancestral lines, suggesting that, at least in some lineages, the process is still ongoing. The transfer of genes between organisms living in the same environment is known as horizontal gene transfer (HGT). It is well known in single-celled organisms and thought to be an important process that explains how quickly bacteria evolve, for example, resistance to antibiotics. HGT is thought to play an important role in the evolution of some animals, including nematode worms which have acquired genes from microorganisms and plants, and some beetles that gained bacterial genes to produce enzymes for digesting coffee berries. However, the idea that HGT occurs in more complex animals, such as humans, rather than them solely gaining genes directly from ancestors, has been widely debated and contested. Lead author Alastair Crisp from the University of Cambridge, UK, said: "This is the first study to show how widely horizontal gene transfer (HGT) occurs in animals, including humans, giving rise to tens or hundreds of active 'foreign' genes. Surprisingly, far from being a rare occurrence, it appears that HGT has contributed to the evolution of many, perhaps all, animals and that the process is ongoing, meaning that we may need to re-evaluate how we think about evolution." The researchers studied the genomes of 12 species of Drosophila or fruit fly, four species of nematode worm, and 10 species of primate, including humans. They calculated how well each of their genes aligns to similar genes in other species to estimate how likely they were to be foreign in origin. By comparing with other groups of species, they were able to estimate how long ago the genes were likely to have been acquired. A number of genes, including the ABO blood group gene, were confirmed as having been acquired by vertebrates through HGT. The majority of the other genes were related to enzymes involved in metabolism. In humans, they confirmed 17 previously-reported genes acquired from HGT, and identified 128 additional foreign genes in the human genome that have not previously been reported. Some of those genes were involved in lipid metabolism, including the breakdown of fatty acids and the formation of glycolipids. Others were involved in immune responses, including the inflammatory response, immune cell signalling, and antimicrobial responses, while further gene categories include amino-acid metabolism, protein modification and antioxidant activities. The team were able to identify the likely class of organisms the transferred genes came from. Bacteria and protists, another class of microorganisms, were the most common donors in all species studied. They also identified HGT from viruses, which was responsible for up to 50 more foreign genes in primates. Some genes were identified as having originated from fungi. This explains why some previous studies, which only focused on bacteria as the source of HGT, originally rejected the idea that these genes were 'foreign' in origin. The majority of HGT in primates was found to be ancient, occurring sometime between the common ancestor of Chordata and the common ancestor of the primates. 
The authors say that their analysis probably underestimates the true extent of HGT in animals and that direct HGT between complex multicellular organisms is also plausible, and already known in some host-parasite relationships. The study also has potential impacts on genome sequencing more generally. Genome projects frequently remove bacterial sequences from results on the assumption that they are contamination. While screening for contamination is necessary, the potential for bacterial sequences being a genuine part of an animal's genome originating from HGT should not be ignored, say the authors. | A recent study published in Genome Biology has challenged conventional views on animal evolution by revealing that many animals, including humans, have acquired essential "foreign" genes from microorganisms co-habiting their environment through a process called horizontal gene transfer (HGT). The study found that HGT has contributed to the evolution of many, perhaps all, animals and is ongoing, with tens or hundreds of active "foreign" genes identified in the genomes of 12 species of fruit fly, four species of nematode worm, and 10 species of primate, including humans. The researchers confirmed 17 previously reported genes acquired from HGT in humans and identified 128 additional foreign genes, including those involved in metabolism, immune responses, and antioxidant activities, which were likely acquired from bacteria, protists, viruses, and fungi. The study suggests that HGT is a widespread phenomenon in animals and has implications for genome sequencing, as bacterial sequences may not always be contamination, but rather a genuine part of an animal's genome originating from HGT. | Abstract Background A fundamental concept in biology is that heritable material, DNA, is passed from parent to offspring, a process called vertical gene transfer. An alternative mechanism of gene acquisition is through horizontal gene transfer (HGT), which involves movement of genetic material between different species. HGT is well known in single-celled organisms such as bacteria, but its existence in higher organisms, including animals, is less well established, and is controversial in humans. Results We have taken advantage of the recent availability of a sufficient number of high-quality genomes and associated transcriptomes to carry out a detailed examination of HGT in 26 animal species (10 primates, 12 flies and four nematodes) and a simplified analysis in a further 14 vertebrates. Genome-wide comparative and phylogenetic analyses show that HGT in animals typically gives rise to tens or hundreds of active 'foreign' genes, largely concerned with metabolism. Our analyses suggest that while fruit flies and nematodes have continued to acquire foreign genes throughout their evolution, humans and other primates have gained relatively few since their common ancestor. We also resolve the controversy surrounding previous evidence of HGT in humans and provide at least 33 new examples of horizontally acquired genes. Conclusions We argue that HGT has occurred, and continues to occur, on a previously unsuspected scale in metazoans and is likely to have contributed to biochemical diversification during animal evolution.
Background The acquisition of genes from an organism other than a direct ancestor (that is, horizontal gene transfer (HGT) also called lateral gene transfer) is well known in bacteria and unicellular eukaryotes, where it plays an important role in evolution [ 1 ], with recent estimates suggesting that on average 81% of prokaryotic genes have been involved in HGT at some point [ 2 ]. However, relatively few cases have been documented in multicellular organisms [ 3 - 7 ]. Reports of HGT in animals are usually limited to the description of the transfer of only one or a few genes, making the extent of horizontal gene transfer in animals unclear. Examples include the transfer of fungal genes for carotenoid biosynthesis to the pea aphid, which results in a red pigmentation and is thought to be beneficial to the aphid [ 8 ] and the transfer of a cysteine synthase from a bacterium into the arthropod lineage (likely two independent transfers into a phytophagous mite ancestor and a lepidopteran ancestor), which allows the detoxification of cyanide produced by host plants [ 9 ]. This activity is also found in nematodes, where it may have been acquired by HGT from plants [ 9 ]. Other examples of putatively adaptive HGT have been characterised in plant-parasitic nematodes, which produce cell-wall degrading enzymes from a number of horizontally transferred genes [ 3 ], and the coffee berry borer beetle, where a mannanase has been transferred from bacteria allowing the hydrolysation of coffee berry galactomannan [ 10 ]. In exceptional cases, high levels of HGT in animals have been reported, but this has been attributed to the lifestyles of the recipient organisms. For example, in bdelloid rotifers, which are desiccation-tolerant asexuals, up to approximately 10% of transcripts derive from horizontally acquired genes [ 11 - 13 ]. Desiccation results in both DNA breakage [ 14 , 15 ] and loss of membrane integrity (reviewed in [ 16 ]), both of which may potentiate HGT. Another unusual example is the transfer of the entire genome (>1 Mb) of the bacterium Wolbachia into the fruit fly Drosophila ananassae , although relatively few Wolbachia genes are transcribed in this case [ 17 ]. Genes from Wolbachia are frequently transferred to invertebrates [ 17 , 18 ], probably because the long-term association (either parasitic or mutualistic) between the bacterium and its hosts maintains their genomes in close proximity. Furthermore, as Wolbachia frequently infects the testes and ovaries of its hosts, it has access to their germlines, a prerequisite for the transmission of the acquired genes to the next generation. These studies have led to the perception that HGT occurs very infrequently in most animals, especially in vertebrates [ 5 , 6 ]. Furthermore, there are concerns over the validity of the examples of HGT reported in humans [ 19 - 22 ]. The original report on the human genome sequence [ 19 ] described prokaryote-to-vertebrate HGT discovered by aligning human sequences to those of a small number of species (not many genomes were available at the time), including only two metazoans, D. melanogaster and Caenorhabditis elegans . Any proteins aligning to bacteria but not to these two metazoans, or to the other two eukaryotic proteomes used ( Arabidopsis thaliana and Saccharomyces cerevisiae ), were considered to be a result of prokaryote-to-vertebrate HGT. 
However, these four eukaryotic species do not contain orthologs of all 'native' human genes (that is, those not horizontally acquired), leading to incorrect identification of HGT (false positives) and the subsequent rejection of many cases by phylogenetic analyses [ 20 - 22 ]. The problem (the availability of a limited number of eukaryotic genomes for comparison in studies of HGT) has lessened in the intervening decade; thousands of proteomes (including several primates) are now available in UniProt, allowing prediction of HGT using alignment to hundreds of species and subsequent phylogenetic validation, as shown in recent work in invertebrates (for example, [ 12 , 23 , 24 ]). In the human, however, there have been no follow-up studies since the original genome paper, and the true scale of HGT in humans, and metazoans generally, remains unclear. To remedy this, we initially identified non-metazoan to metazoan HGT in multiple Drosophila , Caenorhabditis and primate (including human) species. Due to the controversy surrounding the human studies [ 19 - 22 ], we then took our analysis a step further by comparing multiple closely related species and combining information on horizontally transferred ('foreign') genes found in more than one species in the group, thereby reducing mis-identification of HGT caused by spurious alignments. In this way, we identified up to hundreds of active foreign genes in animals, including humans, suggesting that HGT provides important contributions to metazoan evolution. Results Drosophila species, Caenorhabditis species and primates have up to hundreds of active foreign genes To determine the scale of HGT across well-characterised taxonomic groups, we examined 12 Drosophila species, four Caenorhabditis species and 10 primates (Figure 1 ) for which high-quality genomes and transcriptomes are available. For each transcribed gene, we calculated the HGT index, h (the difference between the bitscores of the best non-metazoan and the best metazoan matches), which gives a relative quantitative measure of how well a given gene aligns to non-metazoan versus metazoan sequences, with positive numbers indicating a better alignment to non-metazoan sequences [ 12 ]. For example, the C. elegans gene gut-obstructed 1 ( gob-1 ), which encodes a trehalose-6-phosphate phosphatase, has a best non-metazoan match with a bitscore of 135 and a best metazoan match with a bitscore of 39.3, resulting in an HGT index of 95.7. As we were interested in more than just very recent HGT, we excluded members of the test species' phylum from the metazoan matches. This allowed us to identify HGT over evolutionary periods encompassing hundreds of millions of years, as opposed to only identifying HGT that occurred since the test species' divergence from its most closely related species (likely up to tens of millions of years). Hereafter, when we refer to matches to metazoan sequences, we mean these subsets. Figure 1 Phylogenetic relationships of the main taxonomic groups studied. The blue numbers indicate the ortholog groups mapping to each branch (HGT events). Events may have occurred anywhere along the branch, not just where the number is indicated. Events found at the base of the tree have occurred anywhere between the origin of the phylum and the base of the tree. Trees are not drawn to scale with each other.
Full size image We first identified a base level of HGT (called class C) by using conservative thresholds of h ≥30 (as in [ 12 ]) (meaning that the gene aligns much better, and is therefore much more similar, to non-metazoan genes) and bitscore of best non-metazoan match ≥100 (thereby excluding bad alignments to non-metazoans). The example given above ( gob-1 ) passes these thresholds and is therefore at least class C HGT. This per-species information was then combined for each taxon ( Drosophila , Caenorhabditis and primates) to construct ortholog groups. For each ortholog group we calculated the average h value of all members ( h orth ) and defined the genes with h orth ≥30 as class B, a subset of class C. These genes are, on average, predicted as HGT in all tested species they are found in. The gene gob-1 has homologs in C. brenneri , C. briggsae and C. japonica , with values of h = 102, h = 97.1 and h = 86.4, respectively, giving an average h ( h orth ) of 95.3; as such, gob-1 (and its homologs) are also class B HGT. Finally, we applied a still more stringent filter to define class A foreign genes (a subset of class B), which had only very poor alignments to metazoan sequences and whose orthologs, as used to define class B, also had similarly poor alignments to metazoan sequences. To do this, we identified those sequences whose best match to a metazoan had a bitscore <100 and whose ortholog groups contain no genes with metazoan matches of bitscore ≥100 (Figure 2 A). The gene gob-1 has no metazoan matches with bitscore ≥100 (best metazoan match = 39.3) and the same is true for its homologs (best matches of 37, 38.9 and 36.6, respectively); as such, it is also class A HGT. Figure 2 HGT genes by class. ( A ) The left panel shows a schematic representation of the HGT classes: class B and C genes have h index ≥ 30 and bitscore of the best non-metazoan blastx hit ≥ 100 (they are distinguished by h orth , which is not shown on this figure), while class A genes must additionally have bitscore <100 for the best metazoan blastx hit. The right panel shows the scores for all genes in H. sapiens , colour-coded according to their classification (class A: red, class B: orange, class C: blue, native genes: grey). ( B ) Box-plot of the number of genes in each class, for the three main taxa analysed ( Drosophila spp., Caenorhabditis spp., primate species), colour-coded according to the same scheme (class A: red, class B: orange, class C: blue). Full size image We then performed phylogenetic analyses for all genes of each of the above classes and found that an average of 55% of all class C genes, 65% of all class B genes and 88% of all class A genes were phylogenetically validated as foreign. This validation and further manual analysis (Additional files 1 and 2 ) suggested that, while false positives are minimised as C → B → A, some true positives are also lost. Therefore, class A genes represent a minimum estimate of the level of HGT for a given species. We found that Caenorhabditis species have, on average, 173, 127 and 68 genes in HGT classes C, B and A, respectively. In contrast, Drosophila species have fewer active foreign genes with, on average, 40 genes in class C, 25 in class B, and only four in class A. Primate HGT levels fall between those of the invertebrate taxa, with an average of 109, 79 and 32 genes per species in classes C, B and A, respectively (Figure 2 B, Additional files 2 and 3 ).
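To make the classification scheme concrete, here is a minimal Python sketch of the h index and the A/B/C tiers as described above. The data layout and function names are my own; a real pipeline would parse BLAST output rather than hard-code bitscores, and the homologs' non-metazoan scores below are reconstructed from the h values and metazoan scores quoted in the text.

```python
from statistics import mean

def hgt_index(non_metazoan_bitscore, metazoan_bitscore):
    # h = best non-metazoan bitscore minus best metazoan bitscore;
    # positive values mean the gene aligns better to non-metazoan sequences.
    return non_metazoan_bitscore - metazoan_bitscore

def classify_group(group):
    # `group` is one ortholog group: (name, non_metazoan, metazoan) tuples.
    # Thresholds follow the text: class C needs h >= 30 and a non-metazoan
    # bitscore >= 100; class B additionally needs the group-average h
    # (h orth) >= 30; class A additionally requires that no group member
    # has a metazoan bitscore >= 100.
    h_orth = mean(hgt_index(nm, m) for _, nm, m in group)
    no_strong_metazoan_hit = all(m < 100 for _, _, m in group)
    labels = {}
    for name, nm, m in group:
        h = hgt_index(nm, m)
        label = None
        if h >= 30 and nm >= 100:
            label = "C"
            if h_orth >= 30:
                label = "B"
                if no_strong_metazoan_hit:
                    label = "A"
        labels[name] = label
    return labels

# The gob-1 ortholog group, reconstructed from the quoted bitscores
# (non-metazoan score = h + metazoan score).
gob1_group = [
    ("gob-1 (C. elegans)", 135.0, 39.3),   # h = 95.7
    ("C. brenneri homolog", 139.0, 37.0),  # h = 102
    ("C. briggsae homolog", 136.0, 38.9),  # h = 97.1
    ("C. japonica homolog", 123.0, 36.6),  # h = 86.4
]
print(classify_group(gob1_group))  # all four members come out as class A
```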
Identified foreign genes are unlikely to be explained by alternative hypotheses To verify that the foreign genes we identified do indeed belong to the species under study and are not contamination (this is a problem in a number of animal genome sequences; see ‘Phylogenetic validation’ in Additional file 1 ), we tested whether they were found on the same genomic scaffolds as (that is, were linked to) genes of metazoan origin (native genes). Across all species we found an average of only nine class C genes per species (6.6% of foreign genes) that were not linked to native genes (Additional file 2 ), with correspondingly low proportions for class B and A genes. Demonstration of such high levels of linkage was only possible due to the high quality of the assemblies of these genomes. Although most species showed a high degree of linkage, there were three outliers (see Additional file 1 ), but even if all unlinked genes were contamination, which is not necessarily the case, this would have a minimal impact on the levels of HGT detected. An alternative hypothesis to explain our data is that the genes we label as foreign in any single species are actually the result of conventional vertical descent, but have been lost in all other animal lineages. The parsimony principle tells us that we should choose the simplest explanation, which might be determined by comparing the rate of gene loss and the rate of gene gain by HGT. However, while the rate of gene loss over time can be estimated, at this point we cannot accurately estimate the rate of HGT over anything less than the time since the common ancestor of all metazoans, due to limited data. The rates that should actually be compared are the rates of gene loss and HGT at the earliest branches of the eukaryotic tree, but these rates are especially difficult to determine as the very long periods of time involved mean that ortholog determination (necessary to find which genes have been lost/gained) is hard. Furthermore, published estimates of the rate of gene loss typically treat all genes as equal, but the actual rate varies between types of genes and types of organisms (for example, parasites have higher loss rates [ 25 , 26 ]). As HGT involves the transfer of only a subset of genes (see section ‘ Many horizontally acquired genes code for enzyme activities ’, below), a generic gene loss rate is not comparable to the HGT rate. Given these difficulties we attempted to differentiate between the two hypotheses with a different method. We looked at the functions of foreign genes and compared them to those of native genes that are known to have been lost in all other animal lineages, but were not predicted as foreign (genes for which the alternate hypothesis is true) and found significant differences between the foreign genes we identified and native genes fulfilling these criteria (see section ‘ Many horizontally acquired genes code for enzyme activities ’, below). Therefore, while we cannot entirely discount the gene loss hypothesis, it seems an unlikely explanation for the tens or hundreds of foreign genes per genome that we observe. Identification of new foreign genes and confirmation of previously reported examples The first report of the human genome sequence highlighted 223 protein sequences (of which 113 were confirmed as present in the genome by PCR) that were proposed to originate from bacteria by HGT [ 19 ]. While some of these genes were later confirmed as foreign, many were rejected [ 20 - 22 ]. 
At the time of writing, it is difficult to assess all of these sequences because some early identifiers have not been maintained, but we have been able to confirm or reclaim 17 previously reported examples as foreign (some also confirmed by other studies; Additional file 4 ). Furthermore, we identified up to 128 additional foreign genes in the human genome (128 class C, of which 93 are class B and 33 class A), giving a total of 145 class C genes, of which 110 are class B and 39 class A. Among these examples, we reclaim those encoding the hyaluronan synthases (HAS1-3). These were originally proposed as examples of prokaryote-to-metazoan HGT [ 19 ], but later rejected [ 20 ]; however, neither study considered foreign taxa other than bacteria. We were able to identify all three hyaluronan synthases as class A HGT, originating from fungi, an assessment supported by our phylogenetic analysis (Figure 3 ). The HAS genes appear in a wide variety of chordates, but not in non-chordate metazoans, suggesting they result from the transfer of a single gene around the time of the common ancestor of Chordata, before undergoing duplications to produce the three genes found in primates. As the original rebuttal paper [ 20 ] only focused on recent HGT, and did not look for eukaryotic matches outside Chordata, it could not detect this ancient HGT. Figure 3 Phylogenetic tree for the human gene HAS1. For each branch the species name and UniProt accession are shown. The human gene under analysis is shown in orange, proteins from chordates are in red, other metazoa in black, fungi in pink, plants in green, protists in grey, archaea in light blue and bacteria in dark blue. Numbers indicate aLRT support values for each branch where higher than 0.75 (on short terminal branches the support values are not shown). Full size image We also identify cases of HGT reported more recently that have not been analysed in detail despite the potentially interesting consequences of such a finding. For example, the fat mass and obesity associated gene (FTO, in Additional file 5 : Figure S1A) seems to be present only in marine algae and vertebrates [ 27 , 28 ], which is a highly unusual distribution. Another gene proposed to have been horizontally transferred is the ABO blood group gene (ABO, in Additional file 5 : Figure S1B), which is suggested to enhance mutualism between vertebrates and bacteria [ 29 ]. We identified both these genes as class A HGT with phylogenetic validation (Additional file 3 ). In the invertebrates, Dunning Hotopp et al. [ 17 ] reported the horizontal transfer of nearly the entire Wolbachia genome into the D. ananassae genome and the expression of 44 formerly Wolbachia genes, as evidenced by PCR amplification of their transcripts in flies that had been cured of Wolbachia and partial re-sequencing of the D. ananassae genome. These genes are still not included in the official genome of D. ananassae , likely because eukaryote genome sequencing projects routinely screen for and exclude bacterial sequences on the assumption that these represent contamination. Consequently, they were not in the dataset tested in this study and therefore do not appear among the foreign genes identified in D. ananassae . However, we did find one gene in D. ananassae , GF19976, which has not been previously identified as foreign and may originate from Wolbachia . Parkinson and Blaxter [ 30 ] identified four horizontally acquired genes in C. elegans .
We identified all of these, three as class B HGT and the fourth as class A (highlighted in yellow in Additional file 3 ), but we also discovered a further 135 class C genes, of which 113 were class B and 71 class A (Additional file 3 ). This discrepancy with Parkinson and Blaxter [ 30 ] arises largely because these authors aligned C. elegans sequences with only a single non-metazoan ( S. cerevisiae ). Accordingly, we identified three of the four known foreign genes as fungal in origin, with the fourth also aligning well to fungal proteins (although we find it originated from a protist). Overall, however, only 4% to 15% of C. elegans HGT (depending on class) is of fungal origin (Additional file 2 ), with rather more (52% to 72%) deriving from bacteria (not assessed in ref. [ 30 ]). As mentioned in the Background section, there is phylogenetic evidence that the Cysteine Synthase Like genes found in nematodes, including C. elegans ( cysl1 - 4 ), may have been acquired from plants [ 9 ]. Our analysis supports this conclusion with all four genes being class B HGT of plant origin and three being phylogenetically validated. HGT also occurs in a number of other nematodes, in particular the parasitic root-knot nematodes, in which as many as approximately 3% of all genes may be horizontally acquired [ 3 , 24 ]. Many horizontally acquired genes code for enzyme activities In prokaryotes, horizontally acquired genes tend to be operational, typically encoding enzymes, rather than informational, that is, genes involved in transcription and translation [ 31 ]. It has recently been suggested that network connectivity is a more important consideration than function [ 32 ], but nevertheless most identified foreign genes are concerned with metabolism. Consistent with this, 83% of foreign genes in the bdelloid rotifer encode enzymes [ 12 ]. To determine whether this applies to HGT throughout metazoans, we first inspected Gene Ontology (GO) terms that were found at unexpectedly high levels among class A foreign genes (‘enriched GO terms’), then determined which GO terms indicated enzyme activities, and finally calculated the percentage of enzyme activities for both enriched and un-enriched terms. In almost all Caenorhabditis and primate species, enriched GO terms were significantly more likely (chi-squared test: 3E-9 ≤ P ≤ 0.05; Additional file 2 ) to describe enzyme activities than un-enriched terms (on average, 42% of enriched terms relate to enzyme activities vs. 26% of un-enriched terms; Additional file 2 ). In Drosophila species, insufficient terms were enriched to perform the calculation. Enriched GO terms in class B genes were also more likely to relate to enzyme activities. The second largest group of foreign genes codes for membrane-bound proteins, another category of operational genes. Therefore, like in prokaryotes [ 22 ], HGT is biased towards operational genes in metazoans. A possible alternative explanation for the genes suggested to result from HGT is that they are actually the product of vertical descent in the concerned species, but have been lost in all other animal lineages (as discussed above in ‘ Identified foreign genes are unlikely to be explained by alternative hypotheses ’). This explanation is more likely in the primates as their HGT is predominately ancient (see section ‘ Horizontal gene transfer is both ancient and ongoing ’ below), reducing the number of times the gene must be lost. 
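Both the enzyme-bias comparison above and the gene-loss control applied in the next paragraph reduce to a 2×2 chi-squared contingency test. The sketch below shows the shape of that test; the counts are illustrative values scaled from the quoted averages (42% vs. 26% enzyme-related terms), not the study's actual per-species data (those are in its Additional file 2).

```python
from scipy.stats import chi2_contingency

# 2x2 contingency table: enzyme vs. non-enzyme GO terms, split by whether
# the term was enriched among class A foreign genes. Counts are illustrative.
table = [
    [42, 58],    # enriched terms:    42 enzyme, 58 non-enzyme (of 100)
    [260, 740],  # un-enriched terms: 26% enzyme of 1,000 terms
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, dof = {dof}, P = {p:.4f}")
```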
To test this hypothesis, the same GO analysis was performed on native genes that are found in chordates and not in non-chordate metazoans (that is, genes that have been lost in all non-chordate metazoans; a possible alternative explanation for the putative foreign genes we identify). In all primate species, enriched GO terms for these genes (when compared to those from all other native genes) were significantly less likely (chi-squared test: P ≤0.05; Additional file 2 ) to describe enzyme activities than un-enriched terms (on average, 4% vs. 20%; Additional file 2 ). This is the opposite of the result for foreign genes, suggesting that an alternative hypothesis of gene loss does not explain our findings. Foreign gene functions Many foreign genes are, like many native genes, currently uncharacterised, even in intensively studied model organisms; for example, the human (foreign) gene ENSG00000136830 is annotated 'family with sequence similarity 129, member B', but there is no information on its role. Where foreign genes have meaningful annotation, it is clear they code for a wide variety of different functions across a broad range of categories, some of which may overlap. Here we describe the six most noteworthy categories, from largest to smallest, across C. elegans , D. melanogaster and the human (Additional file 3 ). In C. elegans , the largest category includes genes connected to the innate immune response (16 genes), including genes that specifically target bacterial cell walls ( lys family), genes targeting fungi (for example, endochitinases) and other more general immune system genes (for example, thn-3 and thn-4 ). The second largest category comprises eight genes involved in lipid metabolism, including the breakdown of fatty acids by beta-oxidation (for example, fatty acid CoA synthetase family, acs-11 ), as well as fatty acid synthesis (for example, fatty acid desaturase, fat-1 ). The third category includes four genes involved in macromolecule modification, which encompasses activities such as phosphorylation, methylation and ubiquitination. The fourth category governs stress responses and includes a heat shock protein ( dnj-16 ), an LEA protein ( lea-1 ) and two genes involved in the trehalose synthesis pathway: trehalose-6-phosphate phosphatase ( gob-1 ) and trehalose-phosphate synthase ( tps-1 ). Trehalose production allows C. elegans dauer larvae to survive extreme desiccation [ 33 ], while LEA proteins are also linked to tolerance of water stress in C. elegans [ 34 ] and other invertebrates, as well as plants and microorganisms (reviewed in [ 35 ]). The fifth category consists of antioxidant activities (one gene; glutathione peroxidase, gpx-2 ) and the sixth category is amino-acid metabolism, also consisting of a single gene, coding for asparagine synthesis ( asns-2 ). There are far fewer foreign genes in D. melanogaster , but we do see genes belonging to some of the same categories as in C. elegans , namely macromolecule modification (two genes), the innate immune response (three genes) and stress response (three genes). The three D. melanogaster immune response genes all belong to the same family of proteins, which is involved in the phagocytosis of bacteria, while the three stress response genes are all involved in the trehalose synthesis pathway: two trehalose phosphatases (FBgn0031907, FBgn0031908) and a trehalose-phosphate synthase (Tps1). While this last gene has the same name as a C.
elegans trehalose-phosphate synthase gene (Tps1/ tps-1 ), alignment shows they are dissimilar, especially outside the catalytic domain, suggesting they do not originate from the same HGT event (Additional file 5 : Figure S2). Likewise, the trehalose phosphatases are not conserved across species. In the human we find genes in five of the six categories: amino-acid metabolism (two genes), macromolecule modification (15 genes), lipid metabolism (13 genes), antioxidant activities (five genes) and innate immune response (seven genes). The lipid metabolism genes include genes with similar functions to the C. elegans genes, such as the breakdown of fatty acids by beta-oxidation (for example, enoyl-CoA hydratase/3-hydroxyacyl CoA dehydrogenase, EHHADH), as well as a wide variety of other functions including the formation of glycolipids via chain extension (for example, globoside alpha-1,3-N-acetylgalactosaminyltransferase 1, GBGT1) and transmembrane transporters required for lipid homeostasis (for example, ATP-binding cassette, sub-family G (WHITE), member 5, ABCG5). The innate immune response genes include genes involved in the inflammatory response (for example, immunoresponsive 1 homolog, IRG1), genes for immune cell signalling (for example, phosphatidylinositol-4,5-bisphosphate 3-kinase, catalytic subunit gamma, PIK3CG) and antimicrobial genes (for example, epididymal peptidase inhibitor, EPPIN). We do not find any of the same foreign genes in common across the three species because our method precludes this: such genes would have been present in a common ancestor and would be screened out as metazoan. However, we do find shared functions, such as the trehalose synthesis pathway in the invertebrates. Few genes are found in shared pathways. This may indicate that transfers happen one gene at a time, with each gene being separately integrated into the metabolic networks of the organism. Broadly speaking, we do not see differences between the species in the functions encoded by foreign genes, except in the immune response category: the majority of the invertebrate genes encode enzymes that break down bacterial and fungal cell walls, which would seem to confer a clear adaptive advantage, while the human genes are more likely to code for signalling and regulation of the immune response and have less obvious advantages to the organism. This likely reflects the differences in age between the vertebrate and invertebrate HGT (see section ‘ Horizontal gene transfer is both ancient and ongoing ’, below), with the more recently acquired foreign genes in the invertebrates having a clearer role than the ancient foreign genes in the vertebrates, which have had longer to integrate into networks. Foreign genes predominantly originate from bacteria and protists When calculating h , the likely taxon of origin of a foreign gene was taken to be the taxon of the best-matching protein. Bacteria and protists are the most common donors in all groups (Figure 4 ), which might reflect the relative abundance of the respective donor species in the environments of the recipient organisms. The phylogenetic validation of the foreign genes occasionally indicated a different origin than the original calculation (based on alignments and h index), but both methods agreed on average 92% of the time; performing the analysis shown in Figure 4 using phylogenetically predicted origins instead shows the same pattern of donors (data not shown).
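The bookkeeping behind this origin call is simple: a gene's putative donor is the non-metazoan taxon contributing its highest best-hit bitscore. A minimal illustrative sketch follows (invented scores and names, not the authors' code):

```python
# Sketch: assign a putative donor taxon from per-taxon best-hit bitscores.
# Illustrative only; the study recorded the best hit in each
# kingdom-specific UniProt database and took the taxon of the
# best-matching protein as the likely origin.

def assign_donor(best_bitscore_by_taxon):
    """Return the non-metazoan taxon with the highest best-hit bitscore."""
    non_metazoan = {taxon: score
                    for taxon, score in best_bitscore_by_taxon.items()
                    if taxon != "Metazoa"}
    return max(non_metazoan, key=non_metazoan.get)

# Invented example: a foreign gene aligning best to bacterial proteins.
scores = {"Metazoa": 75, "Bacteria": 210, "Protists": 95,
          "Fungi": 120, "Plants": 40, "Archaea": 60}
print(assign_donor(scores))  # -> Bacteria
```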
The identity of the actual donor species is much harder to determine, as the identified ‘donor’ is almost certainly just the most closely related species currently sequenced. This is especially the case for older HGT events where the same foreign gene appears in more than one species, that is, where horizontal transfer predates the divergence of species. However, we did find a number of recent transfers (present in only a single studied species) that were identified as originating specifically from Wolbachia , with one example each in D. ananassae , C. briggsae and C. japonica (GF19976, CBG07424 and Cjp-ubc-6, respectively). Figure 4 Mean origin of class C foreign genes for each taxon. Numbers show percentage contribution within each taxon (row). The same analyses for Class B or A genes show very similar patterns. The colour scheme is as in Figure 3 : origin from archaea is light blue, from bacteria is dark blue, from protists is grey, from plants is green and from fungi is pink. Our method also identified putative HGT from viruses: while rare in both Drosophila and Caenorhabditis , up to 50 more foreign genes of viral origin per species were identified in the primates (‘Class V’: Additional files 2 and 3 ). The majority of such genes only align to viral and metazoan sequences, making the direction of transfer unclear, and therefore we excluded them from the rest of our analysis. Foreign genes are as likely to contain introns as native genes The many foreign genes that originate from bacteria would originally have lacked introns, but may have gained them while becoming adapted to the recipient species (domestication). To test this we looked at whether bacterial-origin foreign genes have introns. The Drosophila species generally have too few foreign genes to perform the analysis, but in three Caenorhabditis species (all except C. japonica ) and all primates the percentage of bacterial-origin foreign genes with introns is around 95%. For all three classes of foreign gene (C, B and A), there was no significant difference between the proportion of bacterial-origin foreign genes with introns and the proportion of native genes with introns (as measured by a chi-squared test; Additional file 2 ). The same was true for foreign genes as a whole (all origins; Additional file 2 ). This observation also makes it unlikely that the detected HGT is actually contamination of the genome with bacterial sequences, as these would lack introns. The exception, C. japonica , has significantly fewer bacterial-origin foreign genes with introns than native genes in all three classes ( P < 8E-6), averaging only 29% of bacterial-origin foreign genes with introns. It also has significantly fewer class A foreign genes with introns than native genes with introns ( P < 0.001) as discussed below. Horizontal gene transfer is both ancient and ongoing To determine whether the detected HGT is ancient (prior to the divergence of the studied taxon), or has occurred throughout the evolution of a particular taxon, we mapped the foreign ortholog groups (representing founding HGT events) for each taxon onto the corresponding phylogenetic trees. In Drosophila species, there is a broad correspondence between length of branch (time) and the number of HGT events along each branch, suggesting that HGT has occurred throughout Drosophila evolution and is likely to be ongoing (Figure 1 ). The same can be inferred for the Caenorhabditis species. Interestingly, a much larger number of HGT events have occurred in C.
japonica than in the other studied Caenorhabditis species or their common ancestors, and its foreign genes also have different properties: it is the only species studied where significantly fewer multi-exon genes are found among foreign genes of prokaryotic origin than among native genes (Additional file 2 ). Transferred prokaryotic genes presumably require some time to acquire introns, and the lower proportion of intron-containing foreign genes is consistent with comparatively recent HGT events. An alternative explanation is that the C. japonica genome is contaminated, since around twice as many of its foreign genes are unlinked to native genes as in other species (Additional file 2 ). However, even if all unlinked genes are considered to be contamination and are discounted, there would still be more HGT events unique to C. japonica than unique to the other studied Caenorhabditis species. The distribution of transfer events is different in the primates, with most foreign groups mapping to the base of the tree (a common ancestor of primates), suggesting that the majority of HGT in primates is ancient. In these cases we are not inferring that the HGT event occurred in the most recent common ancestor of all primates, but that it occurred sometime between the common ancestor of Chordata and the common ancestor of the primates, that is, prior to the time period shown in Figure 1 . For example, in the case of HAS1 (Figure 3 ), which is found in a wide variety of chordates, the HGT event likely occurred soon after the common ancestor of Chordata arose. Foreign genes undergo duplication and are distributed throughout the genome Horizontally acquired genes can undergo duplication and diversification: for example, the three hyaluronan synthases in Homo sapiens belong to the same ortholog group and probably result from a single transfer event, followed by duplications. We observed the same scenario for other genes in H. sapiens (for example, the four peptidyl arginine deiminases and the nine PRAME family members; Additional file 3 ), and also in other species. In an extreme case (the O-acyltransferases belonging to the same ortholog group in C. elegans ; Additional file 3 ) as many as 30 genes probably derive from a single HGT event. To ask whether there are ‘hotspots’ undergoing more frequent HGT in the studied genomes, we plotted the locations of foreign genes on the chromosomes or scaffolds of the respective genomes (Additional file 5 : Figure S3). We found no evidence for ‘hotspots’, but the limited number of HGT events per species and the frequent occurrence of chromosomal rearrangements during evolution, which complicate cross-species comparisons, make it difficult to draw reliable conclusions. HGT is a general feature of chordate genomes Because there is limited information on HGT in the chordates, we also identified foreign genes for 14 other vertebrate species (Additional file 2 ). We find 60 to 240 class C genes (approximately 0.4% to 1.3%) across all of these species, in line with our findings for Drosophila , Caenorhabditis and primates, suggesting that HGT is not restricted to a few animal groups. We did not try to identify class A and B genes, as our method does not produce reliable ortholog groups for species separated by large evolutionary distances. 
Discussion HGT occurs at low, but appreciable, levels across all the animal species we examined; it has occurred over time and is still occurring; it mainly originates from bacteria and protists; and the genes concerned frequently code for enzyme activities. Interestingly, overall levels of HGT do not appear to be conspicuously different in vertebrates and invertebrates. This is surprising given the difference in complexity between the groups, but may be explained by the older HGT observed in primates, suggesting that the vertebrate HGT may have occurred at an earlier stage of vertebrate evolution. All animal genomes we examined contain expressed foreign genes and therefore unusual circumstances, such as an asexual lifestyle and desiccation tolerance, are not required for transfer, although such characteristics might accelerate HGT rates, as in the bdelloid rotifer. We have been able to improve on earlier studies of HGT in model organisms [ 19 , 30 ] for two main reasons. The first is the much larger number of species, including many more metazoans, now represented in the sequence databases used for comparison. This reduces the false discovery rate, by increasing the number of species in which a gene must be lost before it is incorrectly called HGT in the species in which it is found. In the original human genome paper [ 19 ] only two metazoans were used, D. melanogaster and C. elegans , while we used hundreds of species. We also examined transfer from multiple taxa, rather than just bacteria [ 19 ] or fungi [ 30 ], which reduces the false negative rate: one of the rebuttals of the original human paper [ 20 ] correctly rejected the hyaluronan synthases as prokaryote-to-vertebrate transfers, but failed to identify them as fungus-to-vertebrate transfers because fungal sequences were not considered as potential donors. The second major improvement is the use of multiple closely related animal species when testing for HGT, allowing the construction of ortholog groups. This reduces the false discovery rate by controlling for spurious alignments that could incorrectly identify a gene as foreign in the minority of species in a group. In particular, this increases our accuracy when searching for older HGT, which is likely to be found in more than one species. The clearest demonstration of HGT would depend on reliable identification of the donor species, but the source of a foreign gene can only be traced to the most similar extant organism with a sequenced genome. The identification of the donor species for anything but the most recent HGT is further complicated by the sequence evolution of both recipient and donor; as a consequence, absolute certainty in the assignment of most HGT is unachievable. To accommodate this issue, we have defined a set of HGT classes that display differing levels of phylogenetic validation. While class A HGT has the highest degree of validation (88%), the levels of class C HGT are more directly comparable to those of recent studies – because in most cases closely related species are not available, so ortholog groups cannot be constructed – and even this least stringently selected class of genes is 55% validated (amounting to an additional 8 to 58 validated genes per species on top of those in class A). Although phylogenetic validation is seen as the ‘gold standard’ for HGT discovery, it is important to note that many class A foreign genes (68%) have no metazoan alignments (bitscore <50) so need not be, indeed cannot be, validated as HGT in this way.
In these cases, the lack of matches with metazoan genes, together with clear matches to non-metazoan sequences, is sufficient to demonstrate HGT, while phylogenetics can be used to suggest the origin of such sequences. Another issue with phylogenetic validation arises when it is used in an automated manner: for the highest degree of accuracy, trees should only contain orthologs, but determining orthologs from very large sets of sequences requires manual annotation. Phylogenetic trees for HGT validation are also sensitive to contamination, which is widespread in genomes deposited in UniProt (see Additional file 1 ). We find that around 13% of our non-validated trees would be validated if a single metazoan protein, grouping within another taxon without other metazoan proteins, were actually non-metazoan contamination, increasing the level of validated HGT proportionally. As many of these sequences are likely contamination, it is clear that without very high-quality databases phylogenetic approaches lose reliability. While most studies of HGT look at isolated species, our use of multiple closely related species to define ortholog groups, and thereby class B HGT, reduces the problem of potential contamination by requiring that candidate foreign sequences be present in the whole taxon. We only see a modest increase in phylogenetic validation between class C (where ortholog data were not used) and class B HGT (55% to 65%), but this is based on the use of high-quality genomes and we would expect a larger increase when using lower-quality genomes. Our analysis probably underestimates the true extent of HGT in animals for several reasons. First, we set a conservative threshold for the HGT index, that is, h ≥ 30, to minimise the false positive rate, but there are probably genes below this threshold that are also horizontally acquired. Second, although hard to detect with available data, metazoan-to-metazoan HGT remains plausible and is known for some host-parasite relationships [ 36 ]. Some of these transfers may be mediated by viruses, and in our study we specifically excluded potential virus-mediated HGT due to ambiguity in the direction of transfer. Third, eukaryotic genome projects routinely remove bacterial sequences from their assemblies on the assumption that they are contamination; for instance, this has resulted in the removal of all previously reported HGT from the D. ananassae genome. As a result, we may have missed further examples of bacterial HGT in our study, and such screening may explain the lower levels of HGT seen in the Drosophila species. While some screening for contamination is clearly necessary, the potential for apparently bacterial sequences to originate from HGT should not be ignored during genome assembly; this observation emphasises the importance of using high-quality genome assemblies, as we did here, when searching for HGT. It is important to consider the likelihood of other explanations for our results. The most obvious is the possibility that the observed foreign genes were inherited by vertical descent, but have been lost from all other observed metazoan species outside the phylum of interest. Increasing the number of metazoan species with high-quality genomes and transcriptomes will in future help shed light on this possibility. In the meantime, we observed a striking difference between all classes of HGT and the native genes found in chordates, but not in other metazoans.
Thus, genes that are apparently missing in animals other than chordates are significantly less likely to have GO terms for enzyme activities than other native genes (4% vs. 20%), while in contrast the HGT candidates are significantly more likely to have GO terms for enzyme activities (42% vs. 26%). While we cannot completely rule out gene loss as an explanation for our observations, these findings, together with the other lines of evidence presented, suggest that HGT is the more likely explanation. Conclusions Although observed rates of acquisition of horizontally transferred genes in eukaryotes are generally lower than in prokaryotes, it appears that, far from being a rare occurrence, HGT has contributed to the evolution of many, perhaps all, animals and that the process is ongoing in most lineages. Between tens and hundreds of foreign genes are expressed in all the animals we surveyed, including humans. The majority of these genes are concerned with metabolism, suggesting that HGT contributes to biochemical diversification during animal evolution. Materials and methods Data sources Genomes, transcriptomes, proteomes and GFF files for Drosophila species were obtained from FlyBase [ 37 , 38 ]. Additional annotation was obtained from FlyMine [ 39 , 40 ]. All data on Caenorhabditis species were obtained from WormBase [ 41 , 42 ], while data on primate and chordate species were obtained from Ensembl [ 43 , 44 ], with the exception of ortholog groups, which were obtained from OrthoMCL [ 45 , 46 ]. Genome versions used are shown in Additional file 2 . Determination of HGT index, h The workflow for this step is shown in Additional file 5 : Figure S4. For each studied species, all transcripts were aligned with blastx [ 47 ] to two protein databases derived from complete proteomes in UniProt, one consisting of metazoan proteins (excluding proteins from species in the same phylum as the studied species - Arthropoda, Nematoda or Chordata), the other of non-metazoan proteins. The HGT index, h , was calculated by subtracting the bitscore of the best metazoan match from that of the best non-metazoan match [ 12 ]. The majority of transcripts in all species have h <0, indicating that they match better to metazoan proteins, as would be expected for genes inherited vertically through the tree of life, where they have had longer to diverge from non-metazoan proteins than from metazoan ones. Therefore, transcripts with h >0, which are less diverged from non-metazoan proteins than from metazoan ones, should have been acquired by horizontal transfer from non-metazoans. Rather than just take all transcripts with h >0, we require that they align much better to non-metazoan proteins than to metazoan proteins and define candidate (class C) HGT genes as those with h ≥30 that also have a best non-metazoan bitscore ≥100. The threshold of 30 was chosen because detailed analysis in our earlier paper [ 12 ] found this threshold to be the best trade-off between sensitivity and specificity. As bitscore is a logarithmic measure of sequence similarity, 30 is a large difference in alignment quality. For each gene, h was inherited from the transcript with the highest-scoring match. For the Drosophila , Caenorhabditis and primate species studied, all proteins in each group were aligned to each other with blastp, using a cutoff of 1E-5. Ortholog groups were determined from this alignment using MCL with I = 15 [ 48 ]. This value was determined by comparing the ortholog groups to preexisting groups (more details in Additional file 1 ).
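Expressed as code, the index and the class C screen are compact. A minimal illustrative sketch in Python follows (variable names and example bitscores are ours, not the pipeline's, which parsed blastx output against the two UniProt-derived databases):

```python
# Sketch of the HGT index and the class C screen described above.
# Example bitscores are invented for illustration.

def hgt_index(best_nonmetazoan, best_metazoan):
    """h = best non-metazoan bitscore minus best metazoan bitscore [12]."""
    return best_nonmetazoan - best_metazoan

def is_class_c(best_nonmetazoan, best_metazoan):
    """Candidate HGT: h >= 30 and best non-metazoan bitscore >= 100."""
    return (hgt_index(best_nonmetazoan, best_metazoan) >= 30
            and best_nonmetazoan >= 100)

# A gene inherits h from its transcript with the highest-scoring match:
transcripts = [(250, 180), (300, 120)]  # (non-metazoan, metazoan) bitscores
best = max(transcripts, key=lambda pair: max(pair))
print(hgt_index(*best), is_class_c(*best))  # -> 180 True
```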
For each class C gene, the average h value of the members of its ortholog group was determined ( h orth ); if this was ≥30, the gene was considered to be a class B gene. Class A genes were defined as a subset of class B genes with no metazoan matches with bitscore ≥100 and no members of their respective ortholog group with metazoan matches with bitscore ≥100. Numbers of each class for each species are shown in Additional file 2 . Phylogenetic validation We phylogenetically validated all foreign genes that had any metazoan matches with bitscore ≥50 using a method based on one previously described [ 12 ], producing unrooted trees. We used a strict validation, requiring that the trees showed no evidence that the foreign gene was metazoan. The trees were considered validated if the foreign gene was monophyletic either with a single donor taxon or with multiple potential donor taxa and was not monophyletic with the Metazoa. In cases where the foreign genes were monophyletic with both the metazoans and the donor(s), the tree was not validated. We did not require the ‘own-phylum’ taxon (Arthropoda, Chordata, Nematoda) to be monophyletic, as in cases of recent HGT the best matches in this taxon are not orthologs to the foreign gene. For further details see Additional file 1 . All phylogenetic trees containing metazoan matches with bitscore ≥50 are available at [ 49 ]. Manual validation The 145 human genes classified as HGT were also subjected to manual validation. The transcript with the best blastx bitscore from the previous analysis was blastx-compared to the non-redundant protein sequence (nr) database, excluding Chordata (taxon id: 7711), Vertebrata (taxon id: 7742) or Metazoa (taxon id: 33208) in turn, using the NCBI website [ 50 , 51 ]. The results were manually inspected and the alignments checked for reliability. The same 145 transcripts were also analysed according to published protocols [ 12 ]; in summary, sequences were compared (using NCBI-blast-2.2.27+) [ 47 ] against the complete proteomes on UniProt. The comparison was done against kingdom-specific databases containing exclusively Metazoa (taxon id: 33208), Eubacteria (taxon id: 2), Archaea (taxon id: 2157), Fungi (taxon id: 4751), plants (taxon id: 3193) and protist (eukaryotes without Metazoa, Fungi and plants) sequences. Bitscores were recorded for the best hit in each taxon and h calculated as described. Results were manually analysed to check for agreement with the analysis using the nr database and the automated analysis. Genome linkage tests For each foreign gene, we identified to which contig/scaffold it mapped and determined whether native genes (for which h <30) were also found on that contig. If so, the horizontally transferred gene was considered to be linked to a native gene. Results are shown in Additional file 2 . Discussion of species with lower than average levels of linkage is contained in Additional file 1 . Functional characterisation of genes To determine whether horizontally transferred genes encode enzymes, we examined GO annotation [ 52 ]. A more direct calculation using EC numbers is not possible due to a lack of EC annotation in most of the species studied. GO terms used in the test species were manually annotated to indicate whether they referred to an enzymatic activity. A hypergeometric test was performed per species to determine which GO terms were enriched in each class of foreign gene (threshold of P ≤ 0.05). Benjamini-Hochberg multiple testing correction was performed to reduce the false positive rate.
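As a minimal sketch of this enrichment step (with invented counts; the real analysis ran per species over all GO terms), scipy's hypergeometric survival function gives the upper-tail P value, and a hand-rolled Benjamini-Hochberg step selects terms at a 5% false discovery rate:

```python
# Sketch of the per-species GO enrichment test described above.
# All counts are invented for illustration.
from scipy.stats import hypergeom

def enrichment_p(k_foreign_with_term, n_foreign, k_total_with_term, n_total):
    """P(X >= k) when drawing n_foreign genes from n_total genes,
    of which k_total_with_term carry the GO term."""
    return hypergeom.sf(k_foreign_with_term - 1, n_total,
                        k_total_with_term, n_foreign)

def benjamini_hochberg(pvals, alpha=0.05):
    """Indices of tests accepted by the BH step-up procedure."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m, max_rank = len(pvals), 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= alpha * rank / m:
            max_rank = rank
    return order[:max_rank]

pvals = [enrichment_p(8, 150, 40, 20000),   # term far above background
         enrichment_p(2, 150, 500, 20000)]  # term at background rate
print(benjamini_hochberg(pvals))  # -> [0]: only the first term is enriched
```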
We then calculated whether enzymes were significantly over-represented in the enriched versus the non-enriched terms using a chi-squared test (threshold of P ≤ 0.05). Results are shown in Additional file 2 . Identification of non-HGT genes that are found in chordates and not in non-chordate metazoans The metazoan species used in the analysis (both the 40 studied and those with complete proteomes in the UniProt database) were placed into a phylogenetic, binary tree based on the NCBI taxonomy (Additional file 5 : Figure S5). This tree has six branchpoints between the origin of Metazoa and the phyla in which the studied species are found (Arthropoda, Nematoda, Chordata), meaning that a minimum of six gene losses (one at each of these branchpoints) would be required for a vertically inherited gene present at the base of one of these phyla to masquerade as HGT. Note also that, as Figure 1 shows, not all identified HGT events occur at the base of the phyla, so the number of required gene losses is greater for much of the HGT. For each studied primate species we identified all non-HGT (that is, native) genes that have been lost at least at these six branchpoints using a BLAST alignment to the metazoan database from UniProt. For each branchpoint where a loss must occur there is a varying number of species; if there were no matches with bitscore ≥100 to any proteins in these species, then a gene loss was considered to have occurred on the relevant branch. These non-HGT genes were then analysed based on their GO terms, as done previously for the HGT genes (above), with the comparison made to non-HGT genes that did not have this pattern of loss. Introns To determine whether introns are present at significantly different rates in foreign vs. native genes, we compared the number of native genes with introns to the number of genes of each class of HGT with introns using a chi-squared test (threshold of P ≤ 0.05). Results are shown in Additional file 2 . Validation and discussion of methods is contained in Additional file 1 . Description of additional data files The following additional data are available with the online version of this paper. Additional file 1 contains validation and discussion of the methods used in this paper, as well as the legends for the other additional files. Additional file 2 is a table of HGT levels and analyses for all species. Additional file 3 is a table of the horizontally acquired genes in H. sapiens , D. melanogaster and C. elegans , listed by class. Additional file 4 is a table of H. sapiens genes previously identified as horizontally transferred. Additional file 5 contains the supplementary figures - Figure S1 shows the phylogenetic trees for the human genes discussed in the section ‘ Identification of new foreign genes and confirmation of previously reported examples ’. Figure S2 shows the amino-acid alignment between the C. elegans trehalose-phosphate synthase gene tps-1 and the D. melanogaster trehalose-phosphate synthase gene Tps1. Figure S3 shows the position of foreign genes on the D. melanogaster and C. elegans chromosomes. Figure S4 shows the workflow used to identify HGT. Figure S5 shows the simplified phylogenetic tree of species used in analysis. Figure S6 shows the phylogenetic trees for the six human genes originally labelled as horizontally acquired, and later rejected, which our analysis reclaims.
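Stepping back to the intron comparison described under ‘Introns’ above, that test reduces to a 2 × 2 contingency table; a minimal sketch with invented counts:

```python
# Sketch of the 2 x 2 chi-squared comparison used for introns (and,
# analogously, for enzyme over-representation). Counts are invented;
# scipy applies Yates' continuity correction to 2 x 2 tables by default.
from scipy.stats import chi2_contingency

#                      with introns, without introns
table = [[38, 2],          # bacterial-origin foreign genes (~95%)
         [19000, 1000]]    # native genes (~95%)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, P = {p:.2f}")  # high P: no significant difference
```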
SciNews | Biology | Alastair Crisp, Chiara Boschetti, Malcolm Perry, Alan Tunnacliffe and Gos Micklem, Expression of multiple horizontally acquired genes is a hallmark of both vertebrate and invertebrate genomes, Genome Biology 2015. DOI: 10.1186/s13059-015-0607-3 | http://dx.doi.org/10.1186/s13059-015-0607-3 | https://phys.org/news/2015-03-genes-foreign-ancestors.html | A recent study published in Genome Biology has challenged conventional views on animal evolution by revealing that many animals, including humans, have acquired essential "foreign" genes from microorganisms co-habiting their environment through a process called horizontal gene transfer (HGT). The study found that HGT has contributed to the evolution of many, perhaps all, animals and is ongoing, with tens or hundreds of active "foreign" genes identified in the genomes of 12 species of fruit fly, four species of nematode worm, and 10 species of primate, including humans. The researchers confirmed 17 previously reported genes acquired from HGT in humans and identified 128 additional foreign genes, including those involved in metabolism, immune responses, and antioxidant activities, which were likely acquired from bacteria, protists, viruses, and fungi. The study suggests that HGT is a widespread phenomenon in animals and has implications for genome sequencing, as bacterial sequences may not always be contamination, but rather a genuine part of an animal's genome originating from HGT.
Many animals, including humans, acquired essential 'foreign' genes from microorganisms co-habiting their environment in ancient times, according to research published in the open access journal Genome Biology. The study challenges conventional views that animal evolution relies solely on genes passed down through ancestral lines, suggesting that, at least in some lineages, the process is still ongoing. The transfer of genes between organisms living in the same environment is known as horizontal gene transfer (HGT). It is well known in single-celled organisms and thought to be an important process that explains how quickly bacteria evolve, for example, resistance to antibiotics. HGT is thought to play an important role in the evolution of some animals, including nematode worms, which have acquired genes from microorganisms and plants, and some beetles that gained bacterial genes to produce enzymes for digesting coffee berries. However, the idea that HGT occurs in more complex animals, such as humans, rather than them solely gaining genes directly from ancestors, has been widely debated and contested. Lead author Alastair Crisp from the University of Cambridge, UK, said: "This is the first study to show how widely horizontal gene transfer (HGT) occurs in animals, including humans, giving rise to tens or hundreds of active 'foreign' genes. Surprisingly, far from being a rare occurrence, it appears that HGT has contributed to the evolution of many, perhaps all, animals and that the process is ongoing, meaning that we may need to re-evaluate how we think about evolution." The researchers studied the genomes of 12 species of Drosophila , or fruit fly, four species of nematode worm, and 10 species of primate, including humans. They calculated how well each of their genes aligned to similar genes in other species to estimate how likely they were to be foreign in origin. By comparing with other groups of species, they were able to estimate how long ago the genes were likely to have been acquired. A number of genes, including the ABO blood group gene, were confirmed as having been acquired by vertebrates through HGT. The majority of the other genes were related to enzymes involved in metabolism. In humans, they confirmed 17 previously reported genes acquired from HGT, and identified 128 additional foreign genes in the human genome that have not previously been reported. Some of those genes were involved in lipid metabolism, including the breakdown of fatty acids and the formation of glycolipids. Others were involved in immune responses, including the inflammatory response, immune cell signalling, and antimicrobial responses, while further gene categories include amino-acid metabolism, protein modification and antioxidant activities. The team were able to identify the likely class of organisms the transferred genes came from. Bacteria and protists, another class of microorganisms, were the most common donors in all species studied. They also identified HGT from viruses, which was responsible for up to 50 more foreign genes in primates. Some genes were identified as having originated from fungi. This explains why some previous studies, which only focused on bacteria as the source of HGT, originally rejected the idea that these genes were 'foreign' in origin. The majority of HGT in primates was found to be ancient, occurring sometime between the common ancestor of Chordata and the common ancestor of the primates.
The authors say that their analysis probably underestimates the true extent of HGT in animals and that direct HGT between complex multicellular organisms is also plausible, and is already known in some host-parasite relationships. The study also has potential impacts on genome sequencing more generally. Genome projects frequently remove bacterial sequences from results on the assumption that they are contamination. While screening for contamination is necessary, the potential for bacterial sequences to be a genuine part of an animal's genome, originating from HGT, should not be ignored, say the authors.
nature.com/articles/doi:10.1038/nature19332 | Did fall from tree kill famous human ancestor Lucy? | The famous human ancestor known as Lucy walked the Earth, but it was her tree climbing that might have led to her demise, a new study suggests. An analysis of her partial skeleton reveals breaks in her right arm, left shoulder, right ankle and left knee—injuries that researchers say resulted from falling from a high perch such as a tree. Lucy likely died quickly, said John Kappelman, an anthropologist at the University of Texas at Austin, who published the findings Monday in the journal Nature. "I don't think she suffered," Kappelman said. But several other researchers, including Lucy's discoverer, disagree. They contend most of the cracks in Lucy's bones are well documented and came after her death from the fossilization process and natural forces such as erosion. How Lucy met her end has remained a mystery since her well-preserved fossil remains were unearthed more than four decades ago. Her discovery was significant because it allowed scientists to establish that ancient human ancestors walked upright before evolving a big brain. Lucy was a member of Australopithecus afarensis, an early human species that lived in Africa between about 4 million and 3 million years ago. The earliest humans climbed trees and walked on the ground. Lucy walked upright and occasionally used her long, dangling arms to climb trees. She was a young adult when she died. This Aug. 14, 2007, file photo shows a three-dimensional model of the early human ancestor, Australopithecus afarensis, known as Lucy, on display at the Houston Museum of Natural Science. It's a scientific estimation of what Lucy may have looked like in life. A new study based on an analysis of Lucy's fossil by the University of Texas at Austin suggests she died after falling from a tree. Several scientists, including Lucy's discoverer, reject that she plunged to her death from a tree. (AP Photo/Pat Sullivan, File) Tim White, a paleoanthropologist at the University of California, Berkeley, called the study's conclusion a "misdiagnosis." The Texas researchers "appear to have focused only on the cracks that they could attribute to an imagined fall, ignoring the additional abundant cracks," White said in an email. The split highlights the difficulty of pinpointing a cause of death from fossilized remains. Scientists rarely know how early humans died because skeletons are incomplete and bones tend to get crushed under sand and rocks. Over the years, Lucy's discoverer Donald Johanson has tried to solve the mystery. Lucy's skeleton, which is 40 percent complete, was recovered in Ethiopia in what was an ancient lake near fossilized remains of crocodiles, turtle eggs and crab claws. This undated image provided by the University of Texas at Austin shows the skeleton of Lucy, a fossil specimen of an early human ancestor, Australopithecus afarensis. A new study based on an analysis of Lucy's fossil by the university suggests she died after falling from a tree. Several scientists, including Lucy's discoverer, reject that she plunged to her death from a tree. (University of Texas at Austin via AP) "There's no definitive proof of how she died," said Johanson of Arizona State University. The Texas team examined Lucy's bones and used high-tech imaging. Kappelman said the scans revealed multiple broken bones and no signs of healing, suggesting the injuries occurred around the time of death. 
He reconstructed her final moments: The 3-foot-6-inch (1.06-meter) Lucy fell from at least 40 feet and hit the ground at 35 mph. She landed on her feet before twisting and falling. Such an impact would have caused internal organ damage. Fractures on her upper arms suggest she tried to break her fall. Kappelman theorized that Lucy's walking ability may have caused her to be less adept at climbing trees, making her more vulnerable to falling from heights. UT Austin professor John Kappelman with 3-D printouts of Lucy's skeleton illustrating the compressive fractures in her right humerus that she suffered at the time of her death 3.18 million years ago Credit: Marsha Miller Not everyone agrees that her tree-climbing skills were lacking. Other scientists point out that there have been documented falls by chimpanzees and orangutans, which spend more time in trees than Lucy's species. "Without a time machine, how can one know that she didn't just get unlucky and fall?" William Harcourt-Smith of the American Museum of Natural History said in an email. This undated photo provided by the University of Texas at Austin shows the distal radius - a wrist bone - of Lucy, a fossil specimen of an early human ancestor, Australopithecus afarensis, undergoing computed tomographic scanning at the university in Austin, Texas. A new study based on an analysis of Lucy's fossil by the university suggests she died after falling from a tree. Several scientists, including Lucy's discoverer, reject that she plunged to her death from a tree. (Marsha Miller/University of Texas at Austin via AP) | A new study suggests that Lucy, the famous human ancestor, may have died from falling from a tree while climbing, based on an analysis of her partial skeleton. The study found breaks in her right arm, left shoulder, right ankle, and left knee, which researchers believe resulted from falling from a high perch. However, several other scientists, including Lucy's discoverer, disagree, arguing that most of the cracks in her bones are well documented and came after her death from the fossilization process and natural forces. The debate highlights the difficulty of pinpointing a cause of death from fossilized remains, and scientists are divided on whether Lucy's tree-climbing skills were lacking, making her more vulnerable to falling, or if she simply got unlucky and fell. | Abstract The Pliocene fossil ‘Lucy’ ( Australopithecus afarensis ) was discovered in the Afar region of Ethiopia in 1974 and is among the oldest and most complete fossil hominin skeletons discovered. Here we propose, on the basis of close study of her skeleton, that her cause of death was a vertical deceleration event or impact following a fall from considerable height that produced compressive and hinge (greenstick) fractures in multiple skeletal elements. Impacts that are so severe as to cause concomitant fractures usually also damage internal organs; together, these injuries are hypothesized to have caused her death. Lucy has been at the centre of a vigorous debate about the role, if any, of arboreal locomotion in early human evolution. It is therefore ironic that her death can be attributed to injuries resulting from a fall, probably out of a tall tree, thus offering unusual evidence for the presence of arborealism in this species.
Main It is rare when an early hominin fossil composed of multiple skeletal elements representing a single individual is discovered 1 , 2 , 3 , 4 , 5 , and rarer still when a cause of death can potentially be attributed to its remains 6 , 7 . A.L. 288-1, named Lucy and dated to 3.18 million years in age 8 , is represented by elements of the skull, upper limb, hand, axial skeleton, pelvis, lower limb, and foot, with some bilateral preservation ( Fig. 1a ), and is popularly described as 40% complete 9 . We studied the original fossil and computed tomographic (CT) scans of the skeleton to assess cause of death. Our observation that the skeleton is marked by post-mortem damage largely agrees with the original description 9 ; however, we differ from the original authors in proposing that a subset of fractures are likely to be perimortem and were produced by a vertical deceleration event, or a fall and impact from considerable height, and not by fossilization processes. Figure 1: Perimortem fractures in A.L. 288-1 postcranial skeleton consistent with vertical deceleration event. a , Lucy. b , c , Right humerus ( b , top: stereo, superior, medial up; bottom: lateral; c , stereo, posterior) preserves valgus head-shattering four-part proximal fracture. d , Hinge and spiral fracture elevated, displaced, and fractured right midshaft humeral bone fragment (stereo, lateral; see b ). e , Head of left humerus (stereo, medial) is fractured and compressed inferomedially to override the neck. f , Fracture of right distal radius (posterior, stereo view). g , Fractures in sacrum (stereo, anterior) and left innominate just lateral to sacrum. Fractured superior pubic ramus also visible as is puncture hole (arrow). h , Left-lateral asymmetry of fractured sacrum (stereo, posterior) and fractured, elevated, and bent retroauricular surface of left innominate. i , Left femoral neck fractures (stereo, lateral at top). j , Superoposteriorly fractured epiphysis of left distal femur (stereo, anterior) in discovery state with lateral extent sheared superiorly along lateral edge of shaft. Central portion of anterodistal shaft fractured and secondarily driven into trabeculae. k , Fracture of right tibial plateau (stereo, superior, medial to right) with major fracture across medial condyle that with other fractures ( l ; stereo, anterior, medial to right) depress the plateau and add valgus cant to shaft. m , Proximal portion of right distal tibia (stereo, posteromedial, superior at top) preserves small bone fragments broken loose and driven into medullary canal at spiral shaft fracture. n , Fractures on talar articular surface of right distal tibia (stereo, anterior, medial to right) open onto anterodistal surface of shaft. o , Right talus neck fracture (stereo, superior, medial to right). Together, n and o are consistent with a pilon fracture. Red lines are fractures; green lines in g , h denote sacroiliac joint and transverse lines of sacrum. Specimens in g and h are casts because it was not practical to articulate the fossils, and j is a cast because the original specimen was reconstructed. Scale bars ( a , 50 mm; b – f , i – o , 10 mm; g , h , 20 mm) are approximate, given stereo photo parallax. See Extended Data Figs 1 , 2 , 3 , 4 , Supplementary Note 1 , and Supplementary Videos 1 , 2 , 3 , 4 . Perimortem compressive fractures The most striking feature of the nearly complete right humerus (A.L. 288-1m) is that its proximal end is severely damaged 9 ( Fig. 1b, c ).
Close examination shows that it underwent severe valgus head-shattering compression that drove the head fragments into the shaft, fracturing the greater and lesser tuberosities, and fracturing and dislocating a portion of the proximal shaft with the intertubercular groove ( Fig. 1b , bottom). The shaft of the right humerus was found as multiple segments with generally tight fits. The two major segments conjoin near the midshaft, where a fragment of displaced cortical bone reveals that the shaft underwent a spiral fracture that operated in the same direction as the compressive fracture at the head ( Fig. 1b, d , Supplementary Note 1 , and Extended Data Figs 1 , 2 ). Lucy’s right scapula (A.L. 288-1l) was found as three pieces with the major fragment preserving a complete and undamaged glenoid and neck along with a portion of the base of the coracoid process; the other two fragments preserve a short portion of the lateral border and the base of the acromion. This pattern matches that of the most common fractures of the scapula 10 . A fracture of the articular head, lesser tuberosity, greater tuberosity, and shaft of the humerus is classified as a four-part proximal humerus fracture 11 . Under natural conditions, this fracture is commonly caused by an impact following a vertical deceleration event when an accident victim consciously stretches out their arm in an attempt to break their fall. Compressive contact between the hand and the ground impacts the humeral articular head against the glenoid which, with the glenoid acting as anvil 12 , fractures some or all of the components of the proximal humerus. This fracture leaves a unique signature on the humeral head and is common in two distinct populations: elderly people who have suffered a reduction in bone strength when even a fall from standing height onto an outstretched arm can fracture and sometimes compress the head into the greatly weakened shaft; and people with healthy bone strength who experience a fall from considerable height that in turn produces an impact with more powerful forces acting on the outstretched arm 13 . A 3D reconstruction of the right humerus based on CT data illustrates how Lucy’s articular head and shaft were compressively fractured ( Supplementary Note 2 and Supplementary Video 1 ). Lucy’s left proximal humerus (A.L. 288-1r) is largely complete but damage at the head reveals that it too suffered a compressive fracture ( Fig. 1e ). The general pattern is similar to that seen in the much more extensively fractured right humerus but it is less severely damaged ( Supplementary Note 1 ). These humeral fractures were long thought to have occurred post-mortem, but their close match to clinical cases 11 , 12 , 13 ( Table 1 ) suggests instead that they represent perimortem injuries. The fracture edges are sharp and clean, and bone fragments along with tiny bone slivers (<1 mm) of the severely shattered right articular head and shaft ( Fig. 1b–d and Extended Data Figs 1 , 2 ) and left proximal humerus ( Fig. 1e ) are preserved in their post-injury positions. This evidence suggests that the compressive impact event occurred while the periosteum and joint capsule were intact; if these factures had occurred after the periosteum and joint capsule had decomposed and when the bone was dry, it is likely that the slivers and fragments would have been dispersed onto the surface of the ground or into the soil. 
Additional evidence for perimortem fractures is found in the fact that there is no evidence of healing along any of these sharp fracture edges ( Supplementary Note 3 ). Although compressive bilateral proximal humerus fractures are not common, under natural conditions these fractures are usually associated with high energy trauma resulting from an impact on outstretched arms 14 . The fact that these fractures commonly occur when an accident victim actively abducts and stretches out their arms in an attempt to break their fall suggests that Lucy was conscious at the time of impact, offering additional support to the hypothesis that this was a perimortem event, with Lucy suffering a more severe impact on her right side. Table 1 Fractures in Lucy's skeleton consistent with a vertical deceleration event The presence of bilateral proximal humerus fractures leads us to hypothesize that some of the other compressive fractures in Lucy's skeleton, and especially those at major joints, also occurred perimortem and can be attributed to a severe impact. Compressive fractures in the left femur (A.L. 288-1ap) are especially informative ( Supplementary Note 1 ). There are two subparallel fractures in the neck oriented in a roughly parasagittal plane ( Fig. 1i ). The basicervical (narrower) and transcervical (wider) fractures are both widest in their superior aspect and wrap anteriorly and posteriorly around the neck to terminate inferiorly, slightly offsetting the head in an inferior direction. The location and orientation of these fractures suggest that when the pelvis and femur were articulated, a compressive force acted at the hip to drive the acetabulum and femoral head against one another, thereby fracturing the neck. The fractured fragments of the distal left femur ( Fig. 1j ) were separated from one another to reconstruct this element, so the following description is based on a cast of the bone in its discovery state ( Supplementary Note 1 ). This region was severely compressively fractured. The articular surface of the lateral condyle was shattered with the dislocated fragments forming a step along its lateral edge, while large cracks (subsequently closed in the reconstruction) separated fragments of the medial condyle's fractured articular surface. The entire femoral epiphysis was separated and compressively driven into the distal shaft in a superoposterior direction. The shaft overrides the superior articular surface of medial condyle, the superior edge of the patellar surface, and the superior extent of the articular surface of the lateral condyle, with the lateral condyle and epicondyle and a portion of the lateral shaft also fractured and compressively sheared superolaterally along the edge of the shaft. The extent of shaft override and lateral shear is apparent in Fig. 1j and especially in the left image of the stereo pair, where the shadow cast by the fractured edge can be seen around the anterodistal circumference of the shaft. The compressive fractures at both the femoral neck and distal femur appear to have occurred perimortem when the foot impacted the ground following a fall, with the force acting along the long axis of the leg. We hypothesize that the tibial plateau acted as a punch when it impacted the epiphysis, in a manner similar to how the glenoid acted as an anvil at the proximal humerus. Although no left tibia was recovered, we tested this idea by articulating the femoral epiphysis with a 3D printout of Lucy's right proximal tibia (A.L.
288-1aq) mirrored as a left, and the shape of the tibial plateau matches both the contour and dimensions of the indentation preserved on the epiphysis and shows a twist to the right ( Supplementary Note 1 ). The superoposterior orientation of the dislocated epiphysis suggests that the leg was hyperextended during impact. The impact also apparently produced the fractures in the femoral neck. The femoral fragments, and especially the small condylar fragments, remain in their original fractured positions and have sharp, clean edges that show no evidence of healing. As suggested for the humeri, if this fracture had occurred post-mortem on dry bone after the joint capsule and periosteum had decayed, it seems likely that the small fragments would have been dispersed onto the surface of the ground or into the soil. Together, these observations offer additional support for the hypothesis that these compressive fractures occurred perimortem while the tibia and femur were articulated and the joint capsule and periosteum intact. Although no left tibia was recovered, fractures in Lucy’s right tibia ( Table 1 , Fig. 1k–n , and Supplementary Note 1 ) are consistent with an impact following a fall from height and we propose that the left tibia would be in a similar condition if it were to be discovered. Additional compressive and hinge (greenstick) fractures are preserved in the forearms, lower limbs, pelvis, thorax (including the rarely fractured first rib), and skull ( Fig. 1 , Supplementary Note 1 , Table 1 , Extended Data Figs 3 , 4 , and Supplementary Videos 2 , 3 ). As seen with the humeri and femur, small bone fragments with sharp, clean edges and no evidence of healing often remain in their original fractured positions, again suggesting that these fractures occurred perimortem. The pattern is consistent with clinical presentations of fractures produced by a severe impact following a fall. Mechanisms of bone fracture There are other mechanisms, in addition to vertical deceleration events, that can fracture bone; these include collisions between a body and moving or stationary objects during floods, violent contact with animals, and even tetanic muscle contractions produced by seizures or lightning strikes ( Supplementary Note 4 ). However, these other mechanisms are uncommon and do not generally produce fractures by compression along the long axis of the bone (although some of these latter mechanisms can cause a fall that in turn generates compressive fractures). Lucy’s fractures most closely resemble and are consistent with those seen in patients who have suffered a vertical deceleration event from considerable height that in turn produces concomitant fractures across the skeleton 15 , 16 , 17 , 18 . Hadar palaeohabitats and inferred tree use Given this combination of evidence, the question remains as to how Lucy could have achieved the height necessary to produce the high velocity fall and impact required to fracture her skeleton so severely. One of the most vigorously debated questions in palaeoanthropology has been the role, if any, of arboreal locomotion in early hominin evolution and especially in Lucy’s species, Australopithecus afarensis 19 , which demonstrates convincing adaptations to terrestrial bipedalism. Given Lucy’s small size, she, like many small primates, probably sought nightly refuge in trees 20 and possibly foraged there, so it is reasonable to assess whether trees were available to her. 
Palaeohabitat reconstructions based on fossil mammals 21 , fossil pollen 22 , and palaeopedology and δ 18 O and δ 13 C analysis of palaeosol carbonates 23 have led to the conclusion that Hadar, where Lucy was found, was a grassy woodland with sizable trees. Lucy was found in a sandstone deposited as a distributary crevasse-splay channel over a low-relief area of the floodplain 24 ; this shallow channel was probably associated with one of the larger channel systems in the lower Kada Hadar Member 25 , which contain large root casts and tree trunks 24 ( Supplementary Note 5 ). Channels and crevasse-splays are usually heavily vegetated along their banks, and together these data show that trees were common at Hadar. If Lucy sought out trees for food and nightly nesting sites, other similar-sized primates, such as chimpanzees, offer important information about the heights to which she might have climbed. Chimpanzees forage widely through the canopy, and at Kibale log a daily average of 3–5 climbing bouts to heights of 95–135 m in fruit trees 26 . The typical heights of chimpanzees' sleeping nests in savanna habitat study sites range from means of 8.3 to 21.0 m (mean 13.71 m, n = 10) whereas in forested habitat study sites the means range from 7.1 to 23.2 m (mean 13.08 m, n = 31) ( Table 2 ) 27 . If a building storey is taken to be about 3 m, these mean heights equate to four- to five-storey buildings, with the range of means varying from nearly three to seven storeys, and maximum heights of about 16 storeys; these are considerable heights. Table 2 Free fall velocity and energy from tree nest height Falls from height Chimpanzees and some modern humans move and forage in trees and sometimes experience skeletal trauma and death from falls ( Supplementary Note 6 ). Even though chimpanzee foraging heights frequently exceed sleeping nest heights 28 , nest data offer a conservative approach for assessing whether skeletal trauma and death are likely to result from a fall at these heights. Average velocities ( Table 2 ) for unimpeded free falls from nest height means reach nearly 60 km h −1 , and the associated energies 29 ( Table 2 ) are within the range known to cause fatal impacts in humans 30 , with the generally lower energies estimated for Lucy resulting from her lower body mass 31 ( Supplementary Note 7 ). Impacts from these heights often produce a wide range of concomitant fractures in humans 15 , 16 , 17 , 18 , 30 similar to those preserved in Lucy's skeleton. Such falls usually severely damage internal organs because they too decelerate upon impact and can be penetrated by broken bones, damaged by compression between the sternum and spine, and experience a ‘hydraulic ram effect’ in which abdominal organs are thrust upwards to produce cardiac damage 16 . Scenario for Lucy's perimortem fractures The pattern of compressive and hinge fractures, palaeohabitat reconstruction, sedimentology of the discovery site, and consistency with clinical cases lead us to propose that the following scenario is the most likely of the various possible injury mechanisms: Lucy fell out of a tall tree at or in proximity to the distributary crevasse-splay channel where her remains were found. Given the severity of the fractures, it is likely that the impact occurred on a hard surface, perhaps the dry bed of the channel itself, which would represent a near-zero stopping distance, thereby maximizing the transfer of energy produced by the fall.
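The free-fall figures in Table 2 follow from elementary mechanics (v = sqrt(2gh), E = mgh). A minimal sketch, in which Lucy's body mass is an assumed placeholder value (mass estimates are discussed in ref. 31):

```python
# Sketch of the free-fall arithmetic behind Table 2. The 27 kg body mass
# is an assumed placeholder, not a figure from the paper; air resistance
# is ignored and g = 9.81 m/s^2.
import math

def impact_velocity_kmh(height_m, g=9.81):
    """v = sqrt(2*g*h), converted from m/s to km/h."""
    return math.sqrt(2 * g * height_m) * 3.6

def impact_energy_j(height_m, mass_kg, g=9.81):
    """E = m*g*h, the kinetic energy on reaching the ground."""
    return mass_kg * g * height_m

for h in (13.08, 13.71):  # mean nest heights (m) quoted above
    print(f"h = {h} m: v = {impact_velocity_kmh(h):.0f} km/h, "
          f"E = {impact_energy_j(h, mass_kg=27):.0f} J")
# The 13.71 m mean gives ~59 km/h, matching the 'nearly 60 km/h' above.
```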
The body appears to have experienced minimal transport and rapid burial after death in order to retain the relative positions of the small fractured bone fragments. The location and severity of the fractures suggest that impact progressed from the feet and legs to the hip, arms, thorax, and head (Fig. 2 and Supplementary Video 4). Concomitant fractures and organ damage are witnessed in the most severe clinical cases and together contribute to the death of the victim 15, 16, 17, 18. Although the fractures in Lucy's humeri provide evidence that she was conscious when she stretched out her arms in an attempt to break her fall, the severity of the numerous compressive fractures and presumed organ damage suggest that death followed swiftly. Figure 2: Reconstruction of Lucy's vertical deceleration event. We hypothesize that Lucy fell from a tall tree, landing feet-first and twisting to the right, with arrows indicating the sequence and types of fractures. a, Pilon fracture, tibial plateau fracture, and spiral shaft fracture of right tibia. b, The impact of the hyperextended left knee drove the distal femoral epiphysis into the distal shaft, and fractured the femoral neck and possibly the acetabulum, sacrum, and lumbar vertebra. c, The impact of the knee drove the patella into the centre anterodistal surface of the femoral shaft. d, Impact on the right hip drove the right innominate into the sacrum, and the sacrum into the left innominate, dislocating and fracturing the sacrum and left innominate, and elevating the retroauricular surface. e, Lucy was still conscious when she stretched out her arms in an attempt to break her fall and fractured both proximal humeri, the right more severely than the left with spiral fracture near the midshaft, a Colles' (or Smith's) fracture of the right radius, and perhaps other fractures of the radii and ulnae. The impact depressed and retracted the right scapula, which depressed the clavicle into the first rib, fracturing both. f, Frontal impact fractured the left pubis and drove a portion of the anterior inferior pubic ramus posterolaterally, and a branch or rock possibly created the puncture mark on the pubis. g, The impact of the thorax fractured many ribs and possibly some thoracic vertebrae. h, The impact of the skull, slightly left of centre, created a tripartite guardsman fracture of the mandible and cranial fractures. See Supplementary Methods and Supplementary Video 4. Discussion Although most hominin fossils are fragmentary and broken because of a complex post-mortem history, skeletal elements sometimes preserve evidence of antemortem or perimortem fractures and injuries 5, 6, 7. When examining fossil taxa, such as Australopithecus afarensis 3, 19, that appear to have practiced both terrestrial and arboreal locomotion, we suggest that the adaptations that facilitated bipedal terrestrial locomotion compromised the ability of individuals to climb safely and efficiently in the trees; this combination of features may have predisposed these taxa to more frequent falls from height. Close inspection of other fossil specimens for antemortem or perimortem fractures (Supplementary Note 8) has the potential to offer important information about their lifestyles through an understanding of the trauma that they suffered and the mechanisms by which they died. Methods CT scanning of A.L. 288-1 All scans were performed at the University of Texas High-Resolution X-ray CT Facility with procedures described in ref.
32, using a FeinFocus FXE 225 kV X-ray source and image intensifier detector captured by a 1024 × 1024 CCD camera. Samples were held in place by custom foam mounts within Plexiglas containers, and the X-ray signal was calibrated using empty containers. X-ray settings were 180 kV and 0.175–0.180 mA, with an estimated focal spot size of ~40 μm, and no beam filtration was used. Scanning parameters were optimized for each piece based on size, and in some cases multiple pieces were scanned simultaneously. During each turntable rotation, 1,200 views (projections) were acquired to obtain raw data for 25 slices. Raw data were reconstructed as 16-bit TIFF images. Beam-hardening corrections were performed during reconstruction using polynomial linearization 33, with coefficients selected independently for each scan owing to variations in mineralization. Ring artefacts were corrected either pre- or post-reconstruction 34. Reconstruction scaling of CT numbers was reduced for pieces that had highly attenuating mineralization, probably oxides or sulfides, to avoid information loss from voxel saturation. See Extended Data Table 1 for acquisition parameters, data voxel dimensions, and scaling and artefact processing parameters. 3D reconstruction Data volumes were loaded into Avizo (FEI) in order to produce the 3D element and segment the individual bone fragments along fracture planes, with each fragment saved as an .stl file. These files were imported into Autodesk's Maya and were repositioned, reoriented, and aligned along fracture planes to reconstruct the element. A 3D scan of the left proximal humerus mirrored as a right was used as an approximate template for reconstructing the right proximal humerus. The keyframe function in Maya was used to move each individual fragment from its reconstructed 'before' position to its discovery 'after' position in order to recreate the progression of the impact injury (Extended Data Figs 1, 2 and Supplementary Video 1). Digital photography Digital photographs of the original fossils and replica casts were taken against a black felt or velvet background and imported into Adobe Photoshop CS5.1 Extended at full resolution. The element was isolated with the lasso tool and cut and pasted onto a solid colour background, with stereo views composed of elements in different layers. Tracings of the fractures were made in separate layers with Photoshop's pencil or lasso and fill tools. | Nature, nature.com/articles/doi:10.1038/nature19332 | http://nature.com/articles/doi:10.1038/nature19332 | https://phys.org/news/2016-08-fall-tree-famous-human-ancestor.html | A new study suggests that Lucy, the famous human ancestor, may have died from falling from a tree while climbing, based on an analysis of her partial skeleton. The study found breaks in her right arm, left shoulder, right ankle, and left knee, which researchers believe resulted from falling from a high perch. However, several other scientists, including Lucy's discoverer, disagree, arguing that most of the cracks in her bones are well documented and came after her death from the fossilization process and natural forces. The debate highlights the difficulty of pinpointing a cause of death from fossilized remains, and scientists are divided on whether Lucy's tree-climbing skills were lacking, making her more vulnerable to falling, or whether she simply got unlucky and fell.
The famous human ancestor known as Lucy walked the Earth, but it was her tree climbing that might have led to her demise, a new study suggests. An analysis of her partial skeleton reveals breaks in her right arm, left shoulder, right ankle and left knee—injuries that researchers say resulted from falling from a high perch such as a tree. Lucy likely died quickly, said John Kappelman, an anthropologist at the University of Texas at Austin, who published the findings Monday in the journal Nature. "I don't think she suffered," Kappelman said. But several other researchers, including Lucy's discoverer, disagree. They contend most of the cracks in Lucy's bones are well documented and came after her death from the fossilization process and natural forces such as erosion. How Lucy met her end has remained a mystery since her well-preserved fossil remains were unearthed more than four decades ago. Her discovery was significant because it allowed scientists to establish that ancient human ancestors walked upright before evolving a big brain. Lucy was a member of Australopithecus afarensis, an early human species that lived in Africa between about 4 million and 3 million years ago. The earliest humans climbed trees and walked on the ground. Lucy walked upright and occasionally used her long, dangling arms to climb trees. She was a young adult when she died. Tim White, a paleoanthropologist at the University of California, Berkeley, called the study's conclusion a "misdiagnosis." The Texas researchers "appear to have focused only on the cracks that they could attribute to an imagined fall, ignoring the additional abundant cracks," White said in an email. The split highlights the difficulty of pinpointing a cause of death from fossilized remains. Scientists rarely know how early humans died because skeletons are incomplete and bones tend to get crushed under sand and rocks. Over the years, Lucy's discoverer Donald Johanson has tried to solve the mystery. Lucy's skeleton, which is 40 percent complete, was recovered in Ethiopia in what was an ancient lake near fossilized remains of crocodiles, turtle eggs and crab claws. "There's no definitive proof of how she died," said Johanson of Arizona State University. The Texas team examined Lucy's bones and used high-tech imaging. Kappelman said the scans revealed multiple broken bones and no signs of healing, suggesting the injuries occurred around the time of death. He reconstructed her final moments: The 3-foot-6-inch (1.06-meter) Lucy fell from at least 40 feet and hit the ground at 35 mph.
She landed on her feet before twisting and falling. Such an impact would have caused internal organ damage. Fractures on her upper arms suggest she tried to break her fall. Kappelman theorized that Lucy's walking ability may have caused her to be less adept at climbing trees, making her more vulnerable to falling from heights. Not everyone agrees that her tree-climbing skills were lacking. Other scientists point out that there have been documented falls by chimpanzees and orangutans, which spend more time in trees than Lucy's species. "Without a time machine, how can one know that she didn't just get unlucky and fall?" William Harcourt-Smith of the American Museum of Natural History said in an email.
10.1038/s41550-022-01832-7 | Measuring gamma-ray bursts' hidden energy unearths clues about the evolution of the universe | Gamma-ray bursts are the most luminous explosions in the universe, allowing astronomers to observe intense gamma rays in short durations. Gamma-ray bursts are classified as either short or long, with long gamma-ray bursts being the result of massive stars dying out. They provide hidden clues about the evolution of the universe. Gamma-ray bursts emit gamma rays as well as radio waves, optical lights, and X-rays. When the conversion of explosion energy to emitted energy (i.e., the conversion efficiency) is high, the total explosion energy can be calculated by simply adding all the emitted energy. But when the conversion efficiency is low or unknown, measuring the emitted energy alone is not enough. Now, a team of astrophysicists has succeeded in measuring a gamma-ray burst's hidden energy by using light polarization. The team was led by Dr. Yuji Urata from the National Central University in Taiwan and MITOS Science CO., LTD and Professor Kenji Toma from Tohoku University's Frontier Research Institute for Interdisciplinary Sciences (FRIS). Details of their findings were published in the journal Nature Astronomy on December 8, 2022. When an electromagnetic wave is polarized, it means that the oscillation of that wave flows in one direction. While light emitted from stars is not polarized, the reflection of that light is. Many everyday items such as sunglasses and light shields utilize polarization to block out the glare of lights traveling in a uniform direction. Measuring the degree of polarization is referred to as polarimetry. In astrophysical observations, measuring a celestial object's polarimetry is not as easy as measuring its brightness. But it offers valuable information on the physical conditions of objects. The team looked at a gamma-ray burst that occurred on December 21, 2019 (GRB191221B). Using the Very Large Telescope of the European Southern Observatory and Atacama Large Millimeter/submillimeter Array—some of the world's most advanced optical and radio telescopes—they calculated the polarimetry of fast-fading emissions from GRB191221B. They then successfully measured the optical and radio polarizations simultaneously, finding the radio polarization degree to be significantly lower than the optical one. "This difference in polarization at the two wavelengths reveals detailed physical conditions of the gamma-ray burst's emission region," said Toma. "In particular, it allowed us to measure the previously unmeasurable hidden energy." When accounting for the hidden energy, the team revealed that the total energy was about 3.5 times bigger than previous estimates. With the explosion energy representing the gravitational energy of the progenitor star, being able to measure this figure has important ramifications for determining stars' masses. "Knowing the measurements of the progenitor star's true masses will help in understanding the evolutionary history of the universe," added Toma. "The first stars in the universe could be discovered if we can detect their long gamma-ray bursts." | A team of astrophysicists has successfully measured the hidden energy of a gamma-ray burst by using light polarization, a technique that allows them to calculate the total energy of the explosion even when the conversion efficiency is low or unknown. The team, led by Dr. 
Yuji Urata and Professor Kenji Toma, used the Very Large Telescope and Atacama Large Millimeter/submillimeter Array to measure the polarimetry of a gamma-ray burst that occurred in 2019. By comparing the polarization degree at different wavelengths, they were able to determine the physical conditions of the emission region and measure the previously unmeasurable hidden energy. The result showed that the total energy was about 3.5 times bigger than previous estimates, which has important implications for determining the masses of stars and understanding the evolutionary history of the universe. | Abstract Gamma-ray bursts (GRBs) are the most luminous transients in the universe and are utilized as probes of early stars, gravitational wave counterparts and collisionless shock physics. In spite of studies on polarimetry of GRBs in individual wavelengths that characterized intriguing properties of prompt emission and afterglow, no coordinated multi-wavelength measurements have yet been performed. Here we report the first coordinated simultaneous polarimetry in the optical and radio bands for the afterglow associated with the typical long GRB 191221B. Our observations successfully caught the radio emission, which is not affected by synchrotron self-absorption, and show that the emission is depolarized in the radio band compared with the optical one. Our simultaneous polarization angle measurement and temporal polarization monitoring indicate the existence of cool electrons that increase the estimate of jet kinetic energy by a factor of more than 4 for this GRB afterglow. Further coordinated multi-wavelength polarimetric campaigns would improve our understanding of the total jet energies and magnetic field configurations in the emission regions of various types of GRBs, which are required to comprehend the mass scales of their progenitor systems and the physics of collisionless shocks. Main Gamma-ray burst (GRB) 191221B was detected on 21 December 2019, 20:39:13 UT, and its X-ray afterglow was rapidly identified by the Neil Gehrels Swift Observatory 1. The optical afterglow was discovered by the MASTER auto-detection system 2. Optical polarization with a possible time evolution in the early afterglow phase was also detected by the Southern African Large Telescope (SALT) and Very Large Telescope (VLT) (Extended Data Table 1) 3. The redshift was measured as z = 1.148 based on metal absorption lines in the optical afterglow observed by VLT/X-shooter 4. The isotropic equivalent energy of E_γ,iso = (3.6 ± 0.4) × 10^53 erg and the rest-frame peak energy of the time-integrated spectrum E_peak^src = 810 ± 65 keV were derived by the Konus–Wind observation (with the standard cosmological parameters, the Hubble constant H_0 = 67.3 km s^−1 Mpc^−1, the matter density parameter Ω_M = 0.315 and the dark energy density parameter Ω_Λ = 0.685) 5. The duration of the prompt emission in the 15–350 keV band is 48.0 ± 16.0 s (ref. 6). These prompt emission properties obey the empirical E_peak^src–E_γ,iso correlation (Extended Data Fig. 1) and indicate that GRB 191221B is one of the typical long GRBs. The first semi-simultaneous polarimetry for the afterglow between millimetre and optical bands was conducted at 2.5 days after the burst by using the Atacama Large Millimetre/submillimetre Array (ALMA) and VLT (Fig. 1).
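As a quick numerical cross-check of the quoted isotropic-equivalent energy, E_γ,iso = 4π d_L² S/(1 + z), where S is the gamma-ray fluence and d_L the luminosity distance for the quoted cosmology. The fluence is not given in the text, so the sketch below runs the relation in reverse to find the fluence implied by the Konus–Wind energy (a consistency exercise only, not a reproduction of the paper's analysis):

```python
import math
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Cosmology quoted in the text.
cosmo = FlatLambdaCDM(H0=67.3, Om0=0.315)

z = 1.148
d_l = cosmo.luminosity_distance(z).to(u.cm).value  # luminosity distance, cm

# Invert E_iso = 4*pi*d_L^2 * S / (1+z) to get the implied fluence S.
e_iso = 3.6e53  # erg, Konus-Wind isotropic-equivalent energy
fluence = e_iso * (1.0 + z) / (4.0 * math.pi * d_l**2)
print(f"d_L = {d_l:.3e} cm, implied fluence = {fluence:.2e} erg cm^-2")
```

For this redshift and cosmology d_L is roughly 8 Gpc, giving an implied fluence of order 10^−4 erg cm^−2, a plausible value for a bright long GRB.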
The VLT observation measured a linear polarization degree (PD) of 1.3 ± 0.2% (here we employed the systematic errors of 0.1% reported by ref. 7, and the range with 3σ confidence level is 0.9−1.8%) with a polarization angle (PA) of 61.6 ± 6.3° at the R band. Hereafter, we note 1σ errors for our measurements unless otherwise specified. The low dust extinction and the Serkowski law 8 show an intrinsic origin of the polarization (Methods, Extended Data Figs. 2–4 and Extended Data Table 2). The PD is consistent with other optical afterglows (an average of 1.6% across 84 polarimetric measurements) 9. The ALMA observation put an upper limit on the PD of 0.6% with a 3σ confidence level at 97.5 GHz. The detection in the Stokes U maps and non-detection in the Stokes Q maps (Extended Data Fig. 5) constrained the range of PA to be 37.7−52.3° with a 1σ confidence level. Therefore, this simultaneous polarimetry between the optical and radio bands indicates depolarization in the radio band. The significantly low upper limit is also consistent with the first detection of linear polarization in a GRB radio afterglow (that is, 0.2% for the low-luminosity GRB 171205A) 10. Fig. 1: Spectral flux distribution and polarization spectrum of the GRB 191221B afterglow at 2.5 days. a, Spectral flux distribution (red points). The black dotted line is the forward shock model fit to the observed data. b, PDs at 97.5 GHz (3σ upper limit) and the optical R band (red points), and polarization spectra of the simple one-zone model (grey dashed line), the plasma-scale magnetic field model (purple dash-dotted line) and the cool electron model (green solid line). c, PAs at 97.5 GHz (1σ range) and the optical R band (red points). The observed difference of PAs at the ~90% confidence level (that is, 16.6 ± 9.6°) supports the cool electron model. The plasma-scale magnetic field model predicts a constant PA over the frequencies (for example, purple dash-dotted line). All error bars represent 1σ uncertainties. The synchrotron self-absorption (SSA) effect, which suppresses polarization below the SSA frequency (ν_a), is not a reliable explanation for the observed depolarization. ALMA observations at 97.5 GHz, 145 GHz and 203 GHz (Fig. 2, Table 1 and Extended Data Table 3) show that the light curve at the 97.5 GHz band exhibited a broken power-law evolution with power-law indices of α = 0.26 ± 0.02 (before the break) and α = −1.62 ± 0.13 (after the break), and a break time of 3.77 ± 0.35 days, where and hereafter we describe the temporal and spectral properties of the flux density as F_ν ∝ t^α ν^β (t is the time since the burst in days and ν is the frequency). The multi-frequency measurements (Fig. 3) showed that the spectral slope changed from a positive power-law index of β ≈ 0.602 ± 0.007 at 1.5 days and β ≈ 0.32 ± 0.15 at 2.5 days to a negative one (β ≈ −0.7) at 9.5 and 18.4 days. These spectral slopes are in disagreement with the SSA effect, which leads to β = 2 (refs. 11, 12). Fig. 2: Radio afterglow light curve of GRB 191221B with the simultaneous optical (R band) polarimetric observation. The red dashed line indicates the best-fitted smoothly connected broken power-law function for the 97.5 GHz light curve. The radio light curves and the optical R band photometric measurement are described by the standard forward shock synchrotron radiation model.
Differences in the early optical afterglow (green small circles) and its wiggles may be caused by the magnitude-to-flux conversion of optical observations made with the very broad-band clear filter. The forward shock model describes the passing of the synchrotron spectral peak over the ALMA observing band around 4 days, which is consistent with the observed spectrum change between 2.5 and 9.5 days (Fig. 3). All error bars represent 1σ uncertainties. Table 1: Radio polarization observing log. Measurements with no special notation are summarized with 1σ errors. Fig. 3: Spectral flux distributions of the GRB 191221B afterglow at 1.5, 2.5, 9.5 and 18.4 days after the GRB. The photometry with high signal-to-noise characterized the spectral slope β as 0.602 ± 0.007 at 1.5 days, 0.32 ± 0.15 at 2.5 days and −0.7 at 9.5 and 18.4 days, respectively. The change of the spectral indices from positive to negative indicated the passing of the spectral peak frequency through the radio band. All error bars represent 1σ uncertainties.
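The "smoothly connected broken power law" used for the 97.5 GHz light curve is not written out explicitly in the text. A common parameterization, and a fit sketch against it, are given below; the time and flux arrays are illustrative placeholders, not the Table 1 measurements, and the smoothness parameter s = 2 is an assumption:

```python
import numpy as np
from scipy.optimize import curve_fit

def sbpl(t, f_b, t_b, a1, a2, s=2.0):
    """Smoothly connected broken power law: F ~ t^a1 for t << t_b, t^a2 after."""
    return f_b * ((t / t_b) ** (-s * a1) + (t / t_b) ** (-s * a2)) ** (-1.0 / s)

# Illustrative stand-ins for 97.5 GHz flux densities (mJy) with 3% scatter.
t_obs = np.array([0.5, 1.5, 2.5, 4.0, 6.0, 9.5])
f_obs = sbpl(t_obs, 4.0, 3.8, 0.26, -1.62)
f_obs *= np.random.default_rng(0).normal(1.0, 0.03, t_obs.size)

# Fit the four free parameters (s is held at its default).
popt, pcov = curve_fit(sbpl, t_obs, f_obs, p0=(4.0, 3.5, 0.3, -1.5))
print("F_b, t_b, alpha_1, alpha_2 =", np.round(popt, 2))
```

Run on such data, the fit recovers rise and decay indices near the published α = 0.26 and α = −1.62 with a break near 3.8 days.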
One of the most actively discussed magnetic field amplification processes is the Weibel instability, which occurs at relativistic collisionless shocks and generates strong magnetic fields with random directions on a plasma skin depth scale 17 , 19 , 22 , 23 . In this case, the field component parallel to the shock plane may be dominant. This anisotropy results in a sizable PD at each position of the shock, although the field is tangled on the tiny scale 24 , 25 . We numerically calculated the net linear polarization in various frequencies based on the synchrotron emission model explained above (see Methods for more details). As shown in the middle panel of Fig. 1 , the PD at ν ≲ ν m is much lower than that at ν > ν m since the surface brightness distribution is significantly non-uniform at the frequencies ν > ν m (refs. 13 , 14 ). This property can be consistent with the data. However, this model has a clear prediction that the PAs at ν > ν m and ν ≲ ν m are the same or 90° different 14 . The difference in the observed PAs at the radio and optical bands (the bottom panel of Fig. 1 ) does not support this model. The temporal evolution of PD is also incompatible with this model (Extended Data Fig. 6 ). Another possible process of magnetic field amplification is magnetohydrodynamic instabilities at the shock. These include Richtmyer–Meshkov instability, which occurs in the case of ambient medium with inhomogeneous density 18 , 26 . In this case, the magnetic field directions in the shocked region can be random mainly on hydrodynamic scales comparable to the typical width of the bright region in the shock downstream, and the internal Faraday depolarization can be significant in the radio band if the number of non-thermal electrons is a fraction f (<1) of the total shocked electrons 21 . The fraction 1 − f of the total shocked electrons remain so cool that they cause the Faraday depolarization on the emission from the non-thermal electrons. The true isotropic energy is E iso / f and the true fraction of non-thermal electron energy is ϵ e f in this case 27 . We calculate the PD in the one-zone model similar to the procedure in Sokoloff et al. 28 , and plot the model in the middle panel of Fig. 1 (see more details in Methods ). To explain the observed PDs, the Faraday rotation in the shocked region should be significant at ν < 100 GHz. This leads to an upper limit f ≲ 0.3. The difference in the surface brightness distributions in the optical and radio bands or the contribution from the ordered magnetic field 28 explain the observed difference in PAs. Our observations performed simultaneous polarimetry between the millimetre and optical bands for the typical long GRB 191221B. The multi-frequency observations of the afterglow were also described by the standard model. The measured radio PD was significantly lower than the optical one. Two plausible models that provide new insights into GRB sciences were considered for the origin of the polarization. The measured PAs and the PD temporal evolution indicated that the Faraday depolarization was caused by cool electrons in the hydrodynamic-scale magnetic field. Our observation consolidates a new methodology for revealing the total jet energies and collisionless shock physics of various types of GRBs. 
If f is very small for low-luminosity GRBs and/or short GRBs, their true total jet energies are much larger than the current estimates, which may increase their neutrino production rates 29 , 30 and the lower bound of total explosion energies or mass scales of progenitor systems. Methods ALMA and Atacama Compact Array observations A total of 11 epochs of radio observations were conducted using the ALMA and Atacama Compact Array (Table 1 and Extended Data Table 3 ). Four epochs (0.5, 1.5, 2.5 and 9.5 days) of observations were performed with the polarization mode at 97.5 GHz (that is, band 3). Multi-frequency observations were managed with the photometry mode at 145 GHz among 5 of the 11 epochs. At 1.5 and 2.5 days, two additional photometry observations at 203 GHz were also conducted. Semi-simultaneous optical polarimetry was also performed 2.5 days after the GRBs using the VLT. Regarding the ALMA calibrations, the bandpass and flux were calibrated using observations of J1037-2934, and observations of J1036-3744 was used for the phase calibration. The polarization calibration was performed by observing J1058+0133. The raw data were reduced at the East Asian ALMA Regional Center using CASA (v.5.6.1) 31 . We further performed interactive CLEAN deconvolution imaging with self-calibration. The Stokes I , Q and U maps were CLEANed with an appropriate number of CLEAN iterations after the final round of self-calibration. The off-source root mean square (r.m.s.) levels in I , Q and U are consistent with the expectations for thermal noise alone. The quantities that can be derived from the polarization maps are the polarized intensity ( \(\sqrt{{Q}^{2}+{U}^{2}}\) ), PD ( \(100\sqrt{{Q}^{2}+{U}^{2}}/I\) %) and polarization position angle ( \(1/2\arctan (U/Q)\) ). By applying the polarization calibration to the phase calibrator J1036-3744 and creating Stokes maps for 6, 9 and 18 epochs during the 3 h observing period, we confirm that the stability of linear PD is <0.02%, which is consistent with the systematic linear polarization calibration uncertainty of 0.033% for compact sources. The Atacama Compact Array data were flagged, calibrated and imaged with standard procedures with CASA (v.5.6.1). The bandpass and flux were calibrated using observations of J1058+0133 and J1107-4449. Observations of J1018-3123 were used for the phase calibration. VLT spectroscopic observations The VLT also obtained X-shooter spectra for the afterglow of GRB 191221B at ~10 and ~34 h after the GRB onset. X-shooter spectra cover a very large wavelength range, from the ultraviolet (UV) atmospheric cutoff to more than 2 μm. This range is covered by the three arms of the instrument: the UVB, visible (Vis) and near-infrared (NIR) arms. Observations consisted of two sets of 600 s exposures in the three arms using the AB nod-on-slit offset mode. Data in the UVB and Vis arms have been reduced using the stare mode standard data reduction, namely by extracting the science spectra as if they were obtained without any offset. The NIR arm was extracted using the standard X-shooter Nodding mode pipeline. Each single spectrum has been corrected for slit losses, due to the finite aperture of the X-shooter slit, and the residual sky emission subtracted. The final stacked spectrum has been finally corrected for telluric features, whose response correction for the Vis and NIR arms has been estimated from the spectrum of the standard telluric star 32 , 33 , 34 . 
A full study of these spectra is well beyond our interest and, in this work, our goal is just to model the afterglow-only spectrum (the one obtained ~10 h after the burst) to derive an estimate of the optical extinction along the line of sight. This would allow us to compute a plausible maximum level of host-galaxy dust-induced (that is, non-intrinsic to the GRB afterglow) polarization. Properly connecting the three X-shooter arms requires a careful cross-calibration again beyond our interests, and therefore we limited our analysis to the UVB arm covering the rest-frame wavelength range from approximately 1,650 to 2,550 Å (from 3,450 to 5,500 Å in the observer frame). The resolution of the X-shooter spectra is 0.2 Å per bin. We first rebinned the spectra to 20 Å per bin by the algorithm described in Carnall 35 and then manually removed all the main emission or absorption lines. The resulting spectrum shows small-scale variations, which are probably an artefact of the reduction process related to the different orders of the Echelle spectrograph. This does not affect our fits, although we had to add, in quadrature, a systematic error of 7.5 × 10 −18 erg s −1 cm −2 Å −1 to the uncertainties computed by the reduction pipeline. We fit the afterglow spectrum in this wavelength range by a simple power law affected by rest-frame extinction following the Small or Large Magellanic Cloud or the Milky Way extinction curves 36 , 37 . The rest-frame extinction turns out to be very low, and therefore the three extinction recipes yield essentially the same results (see also ref. 38 ): \(\beta =-0.9{7}_{-0.07}^{+0.14}\) , E B – V < 0.038 (95% upper limit). The best-fit for the Small Magellanic Cloud recipe is shown in Extended Data Fig. 2 . The VLT also obtained spectro-polarimetric observations with the Focal Reducer and low dispersion Spectrograph (FORS2) instrument at about 10 h after the GRB onset. These data have already been reported in Buckley et al. 3 . The data show (their Fig. 4) a fairly constant polarization level and position angle. In Buckley et al. 3 , spectro-polarimetry obtained with the SALT/Robert Stobie Spectrograph telescope ~3 h after the burst is also reported. The more modestsignal-to-noise ratio S/N prevents us from carrying out further analyses on these data regarding the possible evidence for a Serkowski-law behaviour. We have downloaded the VLT spectrum and carried out a fit with a Serkowski law 8 and the predictions for afterglow polarization in the optical band (that is, constant polarization). As expected, both scenarios can provide an acceptable fit to the data, although for the Serkowski law the wavelength corresponding to the polarization maximum is pushed into the far UV (~200 nm) to give a roughly constant polarization in the wavelength range covered by the FORS2 spectro-polarimetry. This is a rather unusual result but not totally unprecedented 39 . However, the Serkowski-law fit is not statistically favoured compared with the afterglow only, since it requires a larger number of free parameters. Therefore, when also considering the low extinction along the line of sight derived by the analysis of the X-shooter spectra, an intrinsic origin (that is, due to the afterglow) of the observed polarization compared with the dust-induced hypothesis appears to be more in agreement with the data. VLT polarimetric observations Polarimetric observations were acquired using the FORS2 mounted on the VLT. 
A Wollaston prism was inserted in the light path to split the image of each object in the field into two orthogonal polarization components. A mask was used to avoid overlap of the two images; we used the FORS2 R band filter. For each position angle ϕ /2 of the half-wave-plate rotator, we obtained two simultaneous images of cross-polarization at angles ϕ and ϕ + 90°. We obtained observations at position angles 0, 22.5, 45 and 67.5° of the half-wave plate. This technique allowed us to remove any differences between the two optical paths (ordinary and extraordinary ray), including the effects of seeing and airmass changes. With the same setup we also observed polarized and unpolarized standard stars so we could convert position angles from the telescope to the celestial reference frame, and to correct for the small instrumental polarization introduced by the telescope. Reduction, that is, bias, flat-field correction, bad pixel masking and so on, were carried out following standard recipes. Aperture photometry was obtained for the target and several nearby sources in the field. We also confirmed that the GRB polarization measurement is unaffected by Galactic dust-induced polarization (Extended Data Fig. 3 .) We used custom software tools based on the Python Astropy library ( ). More details on polarimetric data analysis are reported in Covino et al. 40 and Wiersema et al. 41 . Extended Data Figure 6 shows the temporal evolution of polarization combined with the optical polarimetric results reported by Buckley et al. 3 . There are three epochs of optical observation including our measurements. Buckley et al. 3 made two epochs of polarimetry during the wiggles of the optical afterglow and reported the marginal decrease of PD (by ~0.3%) over a timescale of ~7 h. We derived the PDs and PAs in the wavelength range of the R band based on the data reported by Buckley et al. 3 (Extended Table 1 ). Based on the two-sample t -test, the PA between the radio and optical bands is different at an ~90% confidence level. The temporal evolution of PDs is also inconsistent with the plasma-scale magnetic field model. These properties do not support the plasma-scale magnetic field model. X-ray spectrum of GRB afterglows with optical polarimetry We checked the hydrogen column density of the line of sight ( N H ), which is one of the indicators of dust extinction of afterglows at their host galaxies. There are eight known- z GRB afterglows, including GRB 191221B, available with optical polarimetry and Swift X-ray observations. For six events (GRB 080928, GRB 091018, GRB 091208B, GRB 110205A, GRB 121024A and GRB 131030A), the optical observations reported the detection of intrinsic polarization 9 , 41 , 42 , 43 , 44 , 45 . For GRB 190114C, Jordana-Mitjans et al. 46 reported the detection of polarization induced by the dust in the host galaxy. The X-ray data obtained by the Swift/X-ray telescope were collected from the UK Swift Science Data Centre 47 , 48 . We rebinned the spectra so that each spectral bin contained more than five counts. Using the software XSPEC 12, we performed spectral fitting with a single power law modified with intrinsic and Galactic absorptions, the latter of which were fixed at values calculated from Willingale et al. 49 . The XSPEC 12 model components TBabs and zTBabs that incorporate three absorption elements (that is, gas, molecules and grains) 50 were used to describe the spectral absorptions. The derived best-fitting values are summarized in Extended Data Table 2 . 
Using the results presented in Schady et al. 51 , the measured N H are converted to the extinction at optical V-band A V . Five events, including GRB 191221B, exhibited an intrinsic absorption column density of order 10 21 . The intrinsic absorption column density of GRB 191221B is the smallest one ( \({N}_{\mathrm{H}}=1.{6}_{-0.8}^{+0.9}\times 1{0}^{21}\) cm −2 ). This result is consistent with the low dust extinction derived by the analysis of the VLT/X-shooter spectra. In contrast, the GRB 190114C X-ray spectrum is highly obscured by the intrinsic absorption column density of N H = 8.5 × 10 22 cm −2 (Extended Data Fig. 4 ). These results naturally explain the dust-induced optical polarization of GRB 190114C and support the intrinsic polarization observed in other events. Hence, these results also indicate the intrinsic origin of the optical polarization measured on the GRB 191221B optical afterglow. Afterglow modelling The observed radio spectra and the light curves in the radio, optical and X-ray bands are explained by the standard forward shock model 11 , 12 . The temporal change of the spectral slope in the range of 97.5–203 GHz (Fig. 3 ) and the breaks of the 97.5 and 145 GHz light curves (Fig. 2 ) at t ≈ 4 days indicate the crossing of the synchrotron frequency of minimum-energy electrons ν m at the observed frequencies. From the observed spectral slope β ≈ −0.7 at ν > ν m , the electron energy spectral index is estimated to be p = −2 β + 1 ≈ 2.4. Then the theoretical temporal decay indices at ν < ν m and at ν > ν m are 1/2 and 3(1 − p )/4 ≈ −1.1, respectively, in the case in which the collimated forward shock expands in a uniform density medium and its edge is not visible owing to the strong relativistic beaming. These are not consistent with the observed indices of approximately 0.26 ( t ≲ 4 days) and −1.6 ( t ≳ 4 days). After the edge becomes visible (without sideways expansion of the shock), the geometrical flux reduction \({\theta }_{j}^{2}/{(1/{{\varGamma }})}^{2}\propto {t}^{-3/4}\) results in decay indices of −1/4 (at ν < ν m ) and −1.8 (at ν > ν m ), where θ j and Γ are the opening half-angle and Lorentz factor of the shock. The wind type ambient medium gives decay indices of −3/4 (at ν < ν m ) and −2.3 (at ν > ν m ). We find that the observed indices can be fit by the model in which the ambient density is uniform and the energy continues to be injected into the shock by long activity of the central engine 52 , 53 . We note that our assumption of no sideways expansion of the shock is based on the results of high-resolution hydrodynamic simulations, which show that the collimated shock after the time of 1/ Γ ≈ θ j expands sideways logarithmically, not exponentially 54 , 55 . We performed numerical calculations of flux from the shock with a fixed θ j , which evolves following the Blandford–McKee self-similar solution, by taking account of the equal arrival time surface of the photons 13 , 14 . Then we tried to fit the model flux to the observed data by adjusting the model parameters, namely the isotropic energy E iso , the ambient medium density n , the fraction of shock energy carried by the electrons ϵ e , that carried by amplified magnetic field ϵ B and the viewing angle θ v , as well as θ j and p . This model can fit the data in the radio, optical and X-ray bands (Figs. 1 and 2 , and Extended Data Fig. 6 ). The model parameters are constrained to be E iso = 9.4 × 10 52 t 0.25 erg, n = 5.9 cm −3 , ϵ e = 6.5 × 10 −2 , ϵ B = 1.2 × 10 −2 , θ v = 1.9°, θ j = 2.6° and p = 2.4. 
These values of n , ϵ e and ϵ B are typical of GRB afterglows 15 . It is well known that X-ray flares with fast variability sometimes dominate the forward shock emission. If the flares have broad emission spectra, they might also affect the radio light curves. The slight deviations of radio data from the forward shock model light curves at t = 0.5 and 1.5 days might be related to the X-ray flares observed at similar times. Since they have fast variability they may not contribute to the spectrum at t = 2.5 days. Figure 1 simply indicates that the standard forward shock synchrotron spectrum with the radio data at t = 2.5 days (that is, the peak flux ~5 mJy, ν m ≈ 200 GHz and the spectral index (at ν > ν m ) β ≈ −0.7) can explain the optical data, and does not require any additional emission component. The possibility that the short-lived reverse shock explains the radio emission at t ≳ 1.5 days is excluded since the minimum synchrotron frequency of the reverse shock 56 \({\nu }_{\mathrm{m}}^{\mathrm{r}} \approx 200\) GHz with the peak flux ~5 mJy requires ϵ e ≈ 1. We also examined possible long-lasting reverse shock emission in the long-active central engine model, such as in our model shown above. Suppose the reverse shock emission is dominant in the radio band while the forward shock is dominant in the optical band; the difference in polarization in the two bands could be caused by possible differences in magnetic field structures in the two shocked regions. However, this scenario is also disfavoured owing to a high value of ϵ e . According to Sari and Mésáros 57 , the minimum synchrotron frequencies of the forward and reverse shocks for our model parameters at t = 1.5 days are ν m ≈ 9.8 × 10 11 Hz and \({\nu }_{\mathrm{m}}^\mathrm{{r}} \approx 2.9\times 1{0}^{10}\) Hz, respectively, where the equal arrival time surface of the photons is not taken into account. To increase \({\nu }_{\mathrm{m}}^{\mathrm{r}}\) to a frequency at ν m , without changing the forward shock X-ray flux which is proportional to \({\epsilon }_{\mathrm{e}}^{p-1}{E}_{{{{\rm{iso}}}}}^{(p+2)/4}\) , requires ϵ e ≈ 0.53. This value is unusually high, compared to ϵ e ≲ 0.3 estimated by systematic studies using well-sampled multi-frequency observations 15 , 58 , 59 . Polarization in the plasma-scale magnetic field model The synchrotron polarization depends on the magnetic field configuration at each position in the shocked fluid. Here we focus on the turbulent magnetic field with coherence length on a plasma skin depth scale, which is many orders of magnitude smaller than the shock width. Such a magnetic field is created by the Weibel instability at relativistic collisionless shocks 17 , 19 , 22 , 23 , and in this case the field may be anisotropic, that is, \({\xi }^{2}\equiv 2\langle {B}_{\parallel }^{2}\rangle /\langle {B}_{\perp }^{2}\rangle \ne 1\) , where \(\langle {B}_{\parallel }^{2}\rangle\) and \(\langle {B}_{\perp }^{2}\rangle\) are the averages of the powers of the magnetic field components parallel and perpendicular to the shock normal, respectively. On the basis of this model, we can calculate the local Stokes Q and U parameters corresponding to the surface brightness of the shock by averaging the emissivity with respect to the field directions at each position, and we find that the synchrotron emission at each position is polarized owing to the anisotropy of the turbulent magnetic field 24 , 25 , 60 . 
The polarization directions are symmetric around the line of sight, so that the net PD is non-zero only when the visible region of angular size ~1/ Γ includes the jet edge and becomes asymmetric 24 , 25 , 60 . We numerically calculated the linear PDs in various frequencies based on the light curve model explained above. The parameter value ξ 2 = 0.56 leads to an optical PD of ≃ 1.3% at t = 2.5 days. In the optical band, the surface brightness has a peak at θ ≈ 1/ Γ from the line of sight, while in the radio band, the region around θ ≈ 0 with low local PD is also bright 13 , so that the net radio PD is lower 14 . As a result, the model polarization spectrum at 2.5 days (Fig. 1 , middle panel) is consistent with the upper limit on the radio PD. In this model, however, the PA in the radio band is the same as that in the optical band. The difference in the observed PAs at the radio and optical bands does not support this model. The temporal changes of optical PD and PA in this model are plotted in Extended Data Fig. 6 . The PD changes as the angular size of the visible region ~1/ Γ increases. It has a maximum value when 1/ Γ ≈ θ j + θ v . The PA experiences a sudden 90° change at t ≈ 0.06 day, and it is constant before and after that time. The model with ξ 2 = 0.56 exhibits a PD as high as ≃ 5% at t ≃ 0.4 day, which is not consistent with the observed data. The less anisotropic turbulence leads to lower PD, as shown by the model with ξ 2 = 0.81 in Extended Data Fig. 6 , but it also appears incompatible with the observed data. Faraday depolarization model The low PD in the radio band could be ascribed to an internal Faraday depolarization effect by cool electrons. The standard forward shock model usually assumes that all shocked electrons gain energy from shocked protons and form a power-law energy distribution d n /d γ ∝ γ − p for γ ≳ γ m ≈ ϵ e ( m p / m e ) Γ (here γ, m p , m e are the electron Lorentz factor, proton mass, and electron mass). Plasma particle simulations showed that all the electrons gain energy from shocked protons 61 , but this has not yet been confirmed by observations. Indeed, the forward shock model in which only a fraction f (<1) of the total electrons is energized can also explain the observed afterglow light curves and spectra 27 . In this case, a fraction 1 − f of the total electrons remains as cool thermal electrons with the Lorentz factor \({\tilde{\gamma }}_{\mathrm{m}}=\eta {{\varGamma }}\) , where η is a factor of the order of unity if the cool electrons are just isotropized at the shock front, and the correct physical parameters are \(E^{\prime}_{iso}\) = E iso / f , \(n^{\prime} =n/f\) , \(\epsilon ^{\prime}_{\mathrm{e}} ={\epsilon }_{\mathrm{e}}f\) and \(\epsilon ^{\prime}_{B} ={\epsilon }_{B}f\) . The cool electrons cause Faraday depolarization of the synchrotron emission of the non-thermal electrons above the self-absorption frequency 21 . We assume that the magnetic field in the shocked fluid is turbulent on a hydrodynamic scale, which is comparable to the typical width of the bright region in the shock downstream. Such a field can be created by magnetohydrodynamic instabilities at the shock, such as the Richtmyer–Meshkov instability 18 , 26 . For simplicity, we consider that the globally ordered magnetic field is negligible and that the plasma in the visible region consists of N random cells, in each of which the magnetic field is ordered. 
At the optical band, for which the Faraday effect is not significant, the net PD is \({P}_{0} \approx \frac{(p+1)}{(p+7/3)}\frac{1}{\sqrt{N}}\) , so that N ≈ 5,000 can explain the optical PD ≈ 1%. The Faraday rotation effect within the emission region results in the PD being 28 P 0 [(1 − e − S )/ S ], where \(S={(\nu /\tilde{{\nu }}_{V})}^{-4}\) and \(\tilde{{\nu }}_{V}\) is the frequency at which the Faraday depth is unity, \({\tilde{\nu }}_{V} \approx 200\) \({\left(1+z\right)}^{-15/16}{\left(\frac{1-f}{10f}\right)}^{1/2}{\eta }^{-1}\sqrt{\ln {\tilde{\gamma }}_{\mathrm{m}}}{N}^{-1/12}{\left(\frac{{E}_{{{{\rm{iso}}}}}}{1{0}^{52}\,{{{\rm{erg}}}}}\right)}^{3/16}{n}^{9/16}{\left(\frac{{\epsilon }_{B}}{0.01}\right)}^{1/4}{\left(\frac{t}{1\,{{{\rm{day}}}}}\right)}^{-1/16}\) GHz (ref. 21 ). The middle panel of Fig. 1 shows the Faraday depolarization model for the radio and optical data, which indicate \({\tilde{\nu }}_{V}\gtrsim 100\) GHz. This leads to \({f}^{-1}-1\gtrsim 2.5{\eta }^{2}{(\ln {\tilde{\gamma }}_{\mathrm{m}})}^{-1}\) . Data availability Processed data are presented in the tables and figures in the paper. The ALMA data are available from the ALMA Science Archive. The VLT data are available from the ESO Science Archive Facility. Code availability We used standard data reduction tools in Python and CASA 31 . The theoretical calculation code of the flux and polarization used in this work is not publicly available. Results presented in this work are available from the corresponding author upon reasonable request. | None | [] | [] | [] | SciNews | Space | Yuji Urata et al, Simultaneous radio and optical polarimetry of GRB 191221B afterglow, Nature Astronomy (2022). DOI: 10.1038/s41550-022-01832-7 Journal information: Nature Astronomy | https://dx.doi.org/10.1038/s41550-022-01832-7 | https://phys.org/news/2022-12-gamma-ray-hidden-energy-unearths-clues.html | A team of astrophysicists has successfully measured the hidden energy of a gamma-ray burst by using light polarization, a technique that allows them to calculate the total energy of the explosion even when the conversion efficiency is low or unknown. The team, led by Dr. Yuji Urata and Professor Kenji Toma, used the Very Large Telescope and Atacama Large Millimeter/submillimeter Array to measure the polarimetry of a gamma-ray burst that occurred in 2019. By comparing the polarization degree at different wavelengths, they were able to determine the physical conditions of the emission region and measure the previously unmeasurable hidden energy. The result showed that the total energy was about 3.5 times bigger than previous estimates, which has important implications for determining the masses of stars and understanding the evolutionary history of the universe.
10.1038/s41598-017-04546-3 | Sun eruptions hit Earth like a 'sneeze', say scientists | Long-term power cuts, destruction of electronic devices and increased cancer risk for aeroplane passengers are all potential effects of the Earth being hit by a powerful solar eruption. Yet, new research has found space scientists have their work cut out to predict when these coronal mass ejections (CMEs) are on a collision course with Earth. A study of CMEs by scientists at the University of Reading has found they have cloud-like structures. This means they are more influenced by solar wind, through which they pass to reach Earth, making their movements much harder to predict than if they were single bubble-like entities as was previously thought. CMEs are huge blasts of solar plasma and magnetic fields from the sun's atmosphere that can reach Earth in one to three days. A direct hit could have catastrophic consequences, as CMEs are capable of damaging satellites, destroying electronic devices and potentially exposing people at high altitude, such as astronauts and aviation crew and passengers, to cancer-causing radiation. They occur frequently, but predicting which ones will impact Earth and how severely is difficult. Clouds not bubbles Professor Mathew Owens said: "Up until now, it has been assumed CMEs move like bubbles through space, and respond to forces as single objects. We have found they are more like an expanding dust cloud or sneeze, made up of individual plasma parcels all doing their own thing. "This means that trying to predict the shape and movement of CMEs as they pass through the solar wind becomes extremely difficult. Therefore if we want to protect ourselves from solar eruptions, we need to understand more about the solar wind." The new study, published in Nature Scientific Reports on Friday 23 June, looks in detail for the first time at how CMEs behave as they make their way through space, and how they interact with external forces like solar wind. The Reading scientists took a cross section of a CME to examine its structure more closely. They found that a CME quickly reaches the point at which the speed of its expansion exceeds the speed at which information can travel within the CME. At this point, it ceases to be a coherent structure, so any distortion to one part of the cloud caused by external forces does not affect it as a whole. Space weather threat Scientists are constantly monitoring the sun to track solar wind and extreme space weather. The Reading team recommends that information about solar wind should be incorporated into CME observations to ensure we are fully aware of the threat they pose to Earth. A previous study by University of Reading scientists found a shift in solar activity, expected to occur by the middle of the century, could make us more vulnerable to CMEs, as well as concentrating the Northern Lights around the poles – out of view of Great Britain. In 2011, the threat of space weather was added to the Government National Risk Register of Civil Emergencies. | A new study by scientists at the University of Reading has found that coronal mass ejections (CMEs), powerful solar eruptions that can cause long-term power cuts, destroy electronic devices, and increase cancer risk for aeroplane passengers, have cloud-like structures that make them harder to predict. Unlike previously thought, CMEs do not move like single bubbles through space, but rather like an expanding dust cloud or sneeze, with individual plasma parcels moving independently. 
This makes it challenging to predict the shape and movement of CMEs as they pass through the solar wind, and scientists recommend incorporating information about the solar wind into CME observations to better understand the threat they pose to Earth.

Abstract

Coronal mass ejections (CMEs) are episodic eruptions of solar plasma and magnetic flux that travel out through the solar system, driving extreme space weather. Interpretation of CME observations and their interaction with the solar wind typically assumes CMEs are coherent, almost solid-like objects. We show that supersonic radial propagation of CMEs away from the Sun results in geometric expansion of CME plasma parcels at a speed faster than the local wave speed. Thus information cannot propagate across the CME. Comparing our results with observed properties of over 400 CMEs, we show that CMEs cease to be coherent magnetohydrodynamic structures within 0.3 AU of the Sun. This suggests Earth-directed CMEs are less like billiard balls and more like dust clouds, with apparent coherence only due to similar initial conditions and quasi homogeneity of the medium through which they travel. The incoherence of CMEs suggests interpretation of CME observations requires accurate reconstruction of the ambient solar wind with which they interact, and that simple assumptions about the shape of the CMEs are likely to be invalid when significant spatial/temporal gradients in ambient solar wind conditions are present.

Introduction

Coronal mass ejections (CMEs) are large, episodic eruptions of coronal plasma and magnetic flux that are ejected out into the heliosphere at speeds typically ranging from 300–2000 km s⁻¹ [1]. They are of great interest both for their central role in extreme space weather [2, 3] and in the solar cycle evolution of the coronal magnetic field [4, 5]. In situ spacecraft observations of CMEs show that around a third to a half of all CMEs contain a magnetic flux-rope structure and low plasma beta [6, 7]. These "magnetic clouds" are generally assumed to be (quasi-)coherent magnetohydrodynamic (MHD) structures, wherein the magnetic pressure and curvature forces act, to a greater or lesser extent, to resist deformation by external forces such as solar wind speed shear. This, in principle, enables a magnetic cloud to evolve as a single cohesive body. For example:

- Observations of CME-CME interactions in the heliosphere [8] have been interpreted as elastic or even super-elastic collisions [9], suggesting the CMEs are solid-like, coherent structures.
- Non-radial deflection of CME trajectories, possibly by interaction with coronal hole magnetic flux, has been observed [10, 11, 12]. While this has largely been interpreted as centre-of-mass deflection, which would require the CME to behave as a coherent structure, distortion of the CME shape could equally explain the available observations.
- Methods for tracking CMEs through the corona and heliosphere assume the CME front remains quasi-spherical (or some other simple shape) [13, 14, 15, 16], implying the CME front remains a coherent structure throughout the heliosphere. There is observational evidence, however, for significant disruption of CME structure by solar wind inhomogeneity [17].
- Numerous studies (including some by the authors of the present paper) either explicitly or implicitly assume that single-point in situ measurements of a magnetic cloud are representative of its global structure [7, 18, 19, 20, 21, 22, 23, 24], implying a large degree of coherence of CMEs.
Single-point [25] and multi-point [26, 27] observations, even at relatively modest spacecraft separations, often reveal this picture to be far too simplistic, with evidence of CME distortion by the ambient solar wind. Numerical MHD models provide a complementary means to test the coherence of CMEs. There have been a number of numerical experiments investigating the interaction of CMEs both with a structured solar wind and with other CMEs, which often reveal significant distortion of CME structure [28, 29, 30, 31, 32, 33]. Interpretation of the results, however, has largely focussed on the issue of force balance, with internal magnetic pressure/curvature from the magnetic flux rope unable to resist distortion from interaction with external solar wind structures. Here, we investigate a fundamental physical limit on a CME's ability to act as a coherent magnetohydrodynamic structure; namely, the inability of information to propagate within a CME. We use a simple analytical model for CME evolution in the heliosphere to calculate the Alfvén wave speed [V_A] within the CME at a range of heliocentric distances. We also estimate the geometric speed of separation of plasma parcels [V_G] within the CME that results from purely radial heliocentric propagation. For a range of CME parameters, we determine the heliocentric distance at which V_G exceeds V_A and hence information can no longer propagate within the CME.

Methodology

The geometric and dynamic effects of CME propagation are investigated using a simple analytical model, closely following Owens et al. [21], which agrees well with numerical MHD simulations of CME evolution [34]. In summary, CMEs are assumed to initially take the form of a circular cross-section, force-free flux rope in the low corona and subsequently be deformed by a combination of CME-centred self-expansion and heliocentric radial propagation. The internally driven self-expansion is limited to the heliocentric radial direction, so that the CME maintains constant angular width, as is commonly observed [1]. Figure 1 shows snapshots of the resulting CME cross section at increasing times (in arbitrary units), using typical CME parameters: an initial (at time t = 0) circular cross-section of radius 1 solar radius [r_S] at a height of 2 r_S gives a CME angular extent with respect to the Sun of approximately 60°; a constant CME transit speed [V_TR] of 600 km s⁻¹ and a constant internally driven expansion speed [V_EX] of 90 km s⁻¹ [35]. The CME rapidly "pancakes" due to radial propagation in spherical geometry [34, 36]. The change in CME cross-sectional area, computed by numerically integrating the analytical model, is shown in Fig. 2a. By 1 AU, the cross-sectional area of the CME is approximately 3000 times its initial value.

Figure 1: An analytical model for the cross-sectional area of a CME as it propagates anti-sunward. Snapshots are shown at successive times. The plane is perpendicular to the direction of propagation (e.g., the ecliptic or RN planes in heliocentric radial-tangential-normal, RTN, coordinates). Points P_A and P_B on the leading edge of the CME subtend an angle θ at the centre of the Sun. Due to radial propagation in spherical geometry, P_A and P_B separate with time, leading to the geometric speed V_G.

Figure 2: Evolution of CME properties with heliocentric distance, using V_TR = 600 km s⁻¹, B_1AU = 15 nT and n_1AU = 7 cm⁻³. Panel (a) shows the cross-sectional area of the CME.
Panel (b) shows the magnetic field intensity (B, in black), assuming constant magnetic flux threading the CME cross section, and the ion number density (n, in red), assuming conservation of mass within the CME. Panel (c) shows the resulting Alfvén speed within the CME (V_A, black). Coloured lines show the geometric separation speeds [V_G] of points on the CME leading edge as a result of expansion in spherical geometry for a range of separation angles [θ], from 5° (red) to 60° (blue) in 5° steps.

From this model and a number of reasonable assumptions, it is possible to estimate the bulk properties within the evolving CME and so compute the Alfvén speed. We assume that the total magnetic flux within the CME is conserved (true to within a few percent [37]) and that the magnetic flux is orientated perpendicular to the CME cross section. Thus B, the magnetic field intensity within the CME at a heliocentric distance R, will scale with the CME cross-sectional area, A:

$$B = B_0 \frac{A_0}{A}$$ (1)

where the subscript 0 refers to values at a reference distance. The black line in Fig. 2b shows the profile for B_0 = 15 nT at R_0 = 1 AU, a typical value observed in situ [35]. Similarly, if the amount of plasma within the CME is assumed to be constant, the ion density [n] at distance R will scale as the volumetric increase of the CME:

$$n = n_0 \frac{A_0 R_0}{A\,R}$$ (2)

The red line in Fig. 2b shows the n profile for a CME proton density of n_0 = 7 cm⁻³ at R_0 = 1 AU, again a typical observed value [38]. Combining these two parameters allows approximation of the Alfvén speed [V_A] within a CME as a function of heliocentric distance, R:

$$V_A = \frac{B}{\sqrt{\mu_0\, n\, m_i}}$$ (3)

where μ_0 is the magnetic permeability of free space and m_i is the mean ion mass. For simplicity we here assume a proton plasma, which gives an upper limit for the Alfvén speed: for a helium ion composition of 8%, m_i is 1.24 a.m.u. and the Alfvén speed would be 0.9 times the values given here. Note that the maximum wave speed within a magnetised plasma is the fast magnetosonic speed, a combination of V_A and the ion-acoustic wave speed [V_S], which results from the finite plasma temperature. Using a typical 1-AU temperature and a polytropic index as high as 4/3, V_S within a CME remains at least an order of magnitude lower than V_A at all heliocentric distances, so it can be ignored for the purposes required here.

Results

The black line in Fig. 2c shows V_A as a function of heliocentric distance. The coloured lines show the separation speed [V_G] of points on the CME leading edge which results from radial expansion in spherical geometry. The red line shows points separated by a heliocentric angle θ = 5°, while the blue line shows θ = 60°, the angular extent of a typical CME [1]. Coloured lines show separations in 5° steps between these two limits. For small values of θ (<10°), the Alfvén speed is greater than the geometric separation speed for the entirety of the CME's transit to 1 AU. For plasma parcels separated by θ = 15°, a quarter of the total angular extent of a typical CME, V_G first exceeds V_A at approximately 0.45 AU. We refer to this distance as the critical distance [R_CRIT]: once the V_G > V_A condition is met, information can no longer travel between plasma parcels of the given angular separation and the CME has lost coherence over such length scales. For increasing angular separations, this critical distance moves ever closer to the Sun.
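Equations (1)–(3) and the geometric argument above are simple enough to check numerically. The Python sketch below is a rough illustration, not the paper's numerical integration: the pancaking cross section is crudely approximated as a constant 60° arc length multiplied by a linearly growing radial thickness, and the leading-edge separation speed is taken as V_G ≈ θ·V_TR. The parameter values are the typical ones quoted in the text; the resulting R_CRIT estimates should therefore only be read qualitatively.

```python
import numpy as np

# Constants
MU0 = 4e-7 * np.pi      # vacuum permeability [H/m]
M_P = 1.673e-27         # proton mass [kg]
AU = 1.496e11           # astronomical unit [m]
R_SUN = 6.96e8          # solar radius [m]

# Typical CME parameters quoted in the text
B_1AU = 15e-9           # field intensity at 1 AU [T]
N_1AU = 7e6             # proton density at 1 AU [m^-3]
V_TR = 600e3            # transit speed [m/s]
V_EX = 90e3             # self-expansion speed [m/s]

R = np.linspace(0.05, 1.0, 2000) * AU   # heliocentric distance grid

# Crude stand-in for the numerically integrated cross section:
# constant 60 deg angular width times a linearly growing radial thickness.
theta_cme = np.deg2rad(60.0)
thickness = 2 * R_SUN + 2 * V_EX * (R - R[0]) / V_TR
A = theta_cme * R * thickness

# Equations (1)-(3): flux/mass conservation, then the Alfven speed
B = B_1AU * A[-1] / A                   # grid ends at 1 AU, so A[-1] = A_1AU
n = N_1AU * (A[-1] * R[-1]) / (A * R)
V_A = B / np.sqrt(MU0 * n * M_P)

for theta_deg in (15, 30, 60):
    V_G = np.deg2rad(theta_deg) * V_TR  # leading-edge separation speed
    lost = np.flatnonzero(V_G > V_A)
    if lost.size:
        print(f"theta = {theta_deg:2d} deg -> R_CRIT ~ {R[lost[0]] / AU:.2f} AU")
    else:
        print(f"theta = {theta_deg:2d} deg -> coherent out to 1 AU")
```

Even with this simplified area model, the run reproduces the qualitative picture described above: near-immediate loss of coherence for θ = 60° and a critical distance of a few tenths of an AU for θ = 15°.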
For θ = 60°, the typical CME angular width, magnetic coherence is lost almost immediately after eruption, at least in this example (i.e., for a CME transit speed of 600 km s⁻¹ and B at 1 AU of 15 nT). We now investigate the effect of CME properties on the critical distance. Figure 3 shows R_CRIT as a function of CME transit speed [V_TR] and magnetic field intensity at 1 AU [B_1AU]. n is fixed at 7 cm⁻³, though similar results are found for a reasonable range of n. The panels, from left to right, show angular separations of 15°, 30° and 60°. These correspond to a quarter, half and the full angular extent of a typical CME, respectively. The general trend is for R_CRIT to increase with CME magnetic field intensity and to decrease with CME transit speed. For extremely narrow CMEs (~15°), or plasma parcels within a typical CME that are separated by approximately a quarter of the total angular extent, V_A can remain above V_G out to 1 AU as long as the CME speed is relatively low and the magnetic field intensity is relatively high. The blue dots in Fig. 3 show values of B_1AU and V_TR from observations of 477 CMEs, obtained by combining coronagraph and in situ data over the period 1995–2016 [38]. Only a small fraction of these observed CMEs (<10%) have properties which suggest they remain coherent over an angular extent of 15° out to 1 AU. The bulk of the CMEs, approximately 70%, have lost coherency across 15° of angular extent within 0.4 AU. Increasing the angular separation to 30°, about half the angular extent of a typical CME, none of the observed CMEs remain coherent to 1 AU, with most losing coherence within 0.2 AU. Finally, looking at the full angular extent of a typical CME, 60°, all observed CMEs have lost coherence by 0.3 AU, with ~90% losing coherence within 0.1 AU.

Figure 3: The critical distance, R_CRIT, at which the expansion speed exceeds the Alfvén speed and the CME ceases to be a coherent structure, as a function of CME transit speed [V_TR] and the magnetic field intensity within a CME at 1 AU [B_1AU]. The panels, from left to right, show angular separations on the CME front of 15°, 30° and 60°, respectively. These correspond to a quarter, half and the full angular extent of a typical CME, respectively. The cyan dots show CME observations from the Cane and Richardson [38] catalogue, updated to the end of 2016.

Discussion and Conclusions

This study has investigated the speed at which information can propagate between CME plasma parcels (the Alfvén speed, V_A), relative to the speed at which CME plasma parcels separate owing to radial propagation in spherical geometry [V_G]. Where V_G exceeds V_A, plasma parcels can no longer be considered to constitute a single, coherent magnetohydrodynamic (MHD) structure. Figure 4 illustrates this idea. It shows a CME travelling through fast solar wind, where the upper flank encounters a slow wind stream. This results in distortion of the magnetic field structure within the CME. An Alfvén wave is launched at a speed V_A from point P_B, which lies within the CME at the latitude of the solar wind speed shear, towards a point P_A, located near the centre of the CME. Geometric expansion means that P_B is moving away from P_A at a speed V_G. If V_G > V_A, as shown in this example, information cannot travel between the two points.
Thus P_A and P_B are effectively isolated, and the response of the CME at points P_A and P_B to a structured solar wind is entirely independent; there can be no action as a single body, regardless of the magnitude of restoring forces such as magnetic pressure and curvature forces. A similar effect is expected within the deflected solar wind flow in the sheath region ahead of a fast-moving CME [39]. Due to the large V_G, the deflected solar wind flow within the sheath (labelled V_SH in Fig. 4) [24] cannot keep pace with a point on the leading edge and thus does not flow around the obstacle, but piles up ahead of it.

Figure 4: A schematic of one flank of a CME (white) propagating through a structured solar wind, in the reference frame of a point P_A, located close to the centre of the CME. The shock (thick black line) and CME leading/trailing edges move away from P_A at the CME expansion speed, V_EX. Fast solar wind, in beige, flows into the CME shock at a speed V_TR + V_EX − V_FSW (V_TR and V_FSW are the CME transit speed and the fast solar wind speed, respectively). Slow solar wind, in blue, flows into the shock at a speed of V_TR + V_EX − V_SSW (where V_SSW is the slow solar wind speed). The point P_B, located at the fast/slow solar wind interface, experiences a distortion of the CME magnetic field and launches an Alfvén wave at speed V_A towards P_A. Point P_B, however, is moving away from P_A due to geometric expansion at a speed V_G, so the information can never arrive. Similarly, V_SH, the speed of the deflected solar wind flow in the sheath behind the shock, is smaller than V_G, so the sheath flow cannot travel around the CME.

We estimate V_A and V_G using an analytic model, allowing parameter space to be fully and efficiently explored. Where simplifying assumptions are required, they have been chosen as far as possible to act in favour of CME coherence (e.g., limiting the expansion of CMEs to the radial direction reduces V_G; coherence is defined to be lost when V_G exceeds V_A, rather than when the information travel time becomes large compared to the CME lifetime; helium is not included in the Alfvén speed estimation, etc.). Thus we effectively examine the "best case scenario" for CME coherence. Nevertheless, we find that all observed CMEs lose coherence over their full angular extent by 0.1 to 0.2 AU. Even considering Alfvén wave propagation over half the typical CME angular extent, which would allow, e.g., the east flank of an ICME to know what's happening to the west flank, no observed CMEs are expected to maintain coherence to 1 AU; indeed, less than 0.5% of all observed CMEs are expected to maintain flank-to-flank coherence past 0.3 AU.

One aspect that requires further investigation is the assumption that the fastest information path between two points is a straight line. While this is true for the analytical model employed here, as it has constant magnetic field intensity within a CME, in a real magnetic cloud this need not be the case. For an ideal force-free magnetic flux rope, the magnetic field intensity is highest at the flux-rope axis (i.e., the centre of the CME). Thus shorter information travel times between two points on the CME leading edge could, in principle, be obtained using a non-linear ray path taking advantage of the increased Alfvén speed deep within the CME.
An alternative preferential wave path could be through the high magnetic field intensities in the sheath region ahead of a fast CME, though the sheath often has high plasma density too, meaning the Alfvén speed may not be enhanced. These dynamic effects will be fully investigated using numerical magnetohydrodynamic modelling of an erupting magnetic flux rope and ray tracing at each time step. In practice, however, these effects are unlikely to provide significantly different results to those presented here. Any increased Alfvén speed will be offset by an increased path length, and compression of the CME leading edge by interaction with the ambient solar wind means the highest magnetic field intensities are usually located near the CME leading edge, not near the centre of the CME [35].

In light of these findings, new approaches are required for the interpretation of CME observations. We discuss a few examples here. The highly structured intensity patterns routinely seen within CMEs in Heliospheric Imager (HI) observations [40] by the STEREO spacecraft may be a direct result of both the scale of coherence within a CME and the variability of the solar wind through which a CME is travelling. These relatively small-amplitude, small-scale structures are unlikely to be a significant issue for the interpretation of the global properties of CMEs, either with the geometric models applied to HI observations to determine CME speed and direction [13], or with flux-rope models applied to in situ observations [18]. Larger-amplitude gradients in the solar wind, however, such as a sharp latitudinal or longitudinal transition between fast and slow wind (Fig. 4), are likely to invalidate both forms of reconstruction technique by generating large distortions of the CME shape and radically altering the pile-up of the solar wind plasma in the CME sheath, which is the plasma that is imaged by Thomson-scattered photospheric light. The results presented here also suggest CME arrival-time forecasting is sensitive to ambient solar wind structure at the local scale, not just at a global scale [41]: application of a drag equation to a CME's interaction with the solar wind [42] is only really valid along an individual radial flow line, not for the CME as a whole. We suggest CME reconstruction techniques need to be modified to incorporate information about solar wind structure, either from global MHD models or from previous solar wind observations (e.g., assuming corotation of the solar wind). Ultimately, this may require solar wind data assimilation, to best interpolate and extrapolate between the available observations using physics-based models [32].

Citation: M. J. Owens et al., Coronal mass ejections are not coherent magnetohydrodynamic structures, Scientific Reports (2017). DOI: 10.1038/s41598-017-04546-3. Paper: http://dx.doi.org/10.1038/s41598-017-04546-3. News: https://phys.org/news/2017-06-sun-eruptions-earth-scientists.html
New spin Seebeck thermoelectric device with higher conversion efficiency created (DOI: 10.1038/srep23114)

A thermoelectric (TE) device using cutting-edge thermoelectric conversion technology has been created by a team comprising NEC Corporation, NEC TOKIN Corporation and Tohoku University. The new technology, known as the spin Seebeck effect, has a conversion efficiency 10 times higher than the conventional method. Thermoelectric conversion technology that converts energy abandoned as waste heat back to electric power could potentially save energy and reduce greenhouse gas emissions. Although conventional spin Seebeck thermoelectric devices have the advantage of low manufacturing costs and high versatility and durability, their energy conversion efficiency is inferior.

"We have improved the conversion efficiency of this spin Seebeck thermoelectric device by more than 10 times because of its newly developed material and device structure," says Soichi Tsumura, General Manager, IoT Device Research Laboratories, NEC Corporation. "Furthermore, devices made of flexible material, such as resin, have been achieved using a manufacturing process that does not require high-temperature heat treatment."

"The conversion efficiency of this new spin thermoelectric device has been improved by almost one million times when compared to the earliest device, and has taken an important step towards practical use as a generator element. The achievement of practical use as a heat flux sensor is also in sight," says Tsumura.

Image caption: Devices with bending resistance and low heat treatment temperature achieved by new deposition technology. New deposition technology fabricates a fine ferrite film for spin Seebeck thermoelectric devices at 90°C, much lower than the 700°C used with the conventional method. Owing to the decrease in heat treatment temperature, elements can be created on the surface of plastic film, etc., and flexible devices of various shapes are created. Credit: NEC Corporation

The three parties aim to further the research and development of technologies to generate electricity from the large amount of waste heat emitted by things such as plants, data centers and vehicles. These results were achieved as part of the "Saitoh Spin Quantum Rectification Project" led by Tohoku University Professor Eiji Saitoh. It is funded by the Exploratory Research for Advanced Technology (ERATO) program of the Japan Science and Technology Agency (JST).

Summary: A team of researchers from NEC Corporation, NEC TOKIN Corporation, and Tohoku University has developed a new thermoelectric device using cutting-edge technology, known as the spin Seebeck effect, which has a conversion efficiency 10 times higher than the conventional method. The device, made with a newly developed material and structure, can convert waste heat into electric power with an efficiency 1 million times higher than the earliest device. The technology also allows for the creation of flexible devices with bending resistance and low heat treatment temperature, achieved through a new deposition method that fabricates a fine ferrite film at 90°C, compared to the conventional 700°C. The team aims to further develop this technology to generate electricity from waste heat emitted by plants, data centers, and vehicles, with the goal of reducing energy consumption and greenhouse gas emissions.

Abstract

Heat-flow sensing is expected to be an important technological component of smart thermal management in the future.
Conventionally, the thermoelectric (TE) conversion technique, which is based on the Seebeck effect, has been used to measure a heat flow by converting the flow into electric voltage. However, for ubiquitous heat-flow visualization, thin and flexible sensors with extremely low thermal resistance are highly desired. Recently, another type of TE effect, the longitudinal spin Seebeck effect (LSSE), has aroused great interest because the LSSE potentially offers favourable features for TE applications such as simple thin-film device structures. Here we demonstrate an LSSE-based flexible TE sheet that is especially suitable for a heat-flow sensing application. This TE sheet contains a Ni0.2Zn0.3Fe2.5O4 film which was formed on a flexible plastic sheet using a spray-coating method known as "ferrite plating". The experimental results suggest that the ferrite-plated film, which has a columnar crystal structure aligned perpendicular to the film plane, functions as a unique one-dimensional spin-current conductor suitable for bendable LSSE-based sensors. This newly developed thin TE sheet may be attached to differently shaped heat sources without obstructing the innate heat flux, paving the way to versatile heat-flow measurements and management.

Introduction

As efficient energy utilization is becoming a crucially important issue for a sustainable future, a heat management technique to optimally control the flow of omnipresent thermal energy is currently of great interest. To realize smart thermal management with real-time controllability, there has been a growing demand for visualizing the flow of heat in various places such as industrial facilities and large-scale data centres. The thermoelectric (TE) conversion technique [1, 2, 3], which directly converts a thermal gradient into an electric current, is one of the most powerful methods utilized to sense a heat flow as a voltage signal. In fact, heat-flow sensors based on the Seebeck effect [4], which have thermopile structures consisting of π-structured thermocouples, are commercially available and used for various purposes such as the evaluation of materials. To further extend heat-flow-sensing capabilities to other widespread applications, however, such conventional devices face certain challenges. First, because Seebeck-based TE devices exhibit a relatively high heat resistance, the introduction of these devices into a heat-flow environment inevitably obstructs the heat flux and alters the distribution of the heat flow. Therefore, it is difficult to correctly evaluate the innate heat flux which we actually want to determine. Second, most of the commercially available heat-flow sensors are rigid and not easily applied to curved or uneven surfaces, making it difficult to monitor the heat flux around irregularly shaped heat sources. Because conventional TE devices, in which thermocouples are connected electrically in series, are intrinsically vulnerable to bending stresses, materials and structures for flexible TE devices have been extensively studied [5, 6, 7]. For such sensing applications, an emerging research field, spin caloritronics [8, 9], will provide new device-design opportunities. For example, TE devices based on the anomalous Nernst effect (ANE), which exhibit a transverse TE voltage in ferromagnetic metals (FM), can be suitably utilized for sensing purposes [9, 10, 11, 12, 13].
In this work, we present another promising approach to realizing flexible heat-flow sensors using the longitudinal spin Seebeck effect (LSSE) [14, 15, 16, 17, 18]. First reported in 2010, the LSSE offers an unconventional method to design TE devices by making use of a physical quantity called a spin current. LSSE devices, typically composed of a ferromagnetic insulator (FI) and a normal metallic film (NM), have gained attention because of their simple device structure and novel scaling capability, leading to novel TE devices [16]. The LSSE also has the potential to realize practical sensing applications. It was recently reported that FI/NM multilayer structures unexpectedly exhibit a significantly enhanced LSSE signal [19], which may lead to highly sensitive heat-flow sensors. Furthermore, the combination of the LSSE and ANE is also a quite hopeful approach. In hybrid TE devices consisting of FI and FM layers, both the LSSE and ANE can constructively contribute to the output voltage, leading to largely enhanced TE signals [20, 21]. To pave the way for practical sensing applications using the LSSE, here we have demonstrated LSSE-based heat-flow sensing sheets.

The concept of the LSSE-based flexible TE sheet is schematically depicted in Fig. 1.

Figure 1: Concept of TE sheet for heat-flow sensing based on the LSSE. The LSSE-based TE sheet consists of a metallic film and a magnetic (ferro- or ferrimagnetic) film formed on a flexible substrate. When a heat flux q flows through the TE sheet, a spin current j_s is induced and injected from the ferrite film into the metallic film by the LSSE. The j_s is then converted into an electric voltage V as a result of the inverse spin Hall effect (ISHE) in the metallic film. The thin and simple bilayer structure of the TE sheet allows us to design novel heat-flow sensors with low thermal resistance and a flexible shape.

The sheet consists of a magnetic (ferro- or ferrimagnetic) film with in-plane magnetization M and a metallic film formed on a flexible substrate. When a heat flux q flows through the TE sheet perpendicularly to the film plane, a spin current density j_s is induced by q via the LSSE. The value of j_s is proportional to q (|j_s| ∝ q). Then, j_s is converted into an electric field in the transverse direction via the inverse spin Hall effect (ISHE) [22, 23]:

$$E_{\mathrm{ISHE}} = \theta_{\mathrm{SH}}\,\rho\,j_{s}$$ (1)

In the above equation, θ_SH and ρ represent the spin-Hall angle and resistivity of the metallic film, respectively. Therefore, the voltage signal V between the two ends of the TE sheet can be employed to evaluate the heat flux q penetrating through the sheet, because V is proportional to q (V = E_ISHE·l ∝ q). Here, it should also be emphasized that a longer sheet length l straightforwardly leads to a larger output voltage V. This scaling law is in stark contrast to that of conventional TE devices, in which the TE voltage scales with the number of thermocouples connected within the device. These features enable us to design simple bilayer-structured devices suitable for heat-flow sensors. But there is a problem when we use the above setup for broad heat-flow sensing purposes. When q flows obliquely to the TE sheet, the in-plane component of q gives rise to another TE effect called the transverse spin Seebeck effect (TSSE) [24, 25, 26, 27, 28], which can also contribute to the output voltage.
Since the mixed output signals from the LSSE and TSSE cannot be distinguished from each other, the TSSE becomes an encumbrance to the correct evaluation of q penetrating the TE sheet in this case. To exclude the TSSE contribution, here we used a unique one-dimensional (1D) spin-current conductor, which enables us to detect only the LSSE contribution and to correctly evaluate q flowing across the TE sheet.

Results

Fabrication of the LSSE-based TE sheet

To demonstrate such an LSSE-based flexible TE sheet, we used a spray-coating technique known as "ferrite plating" to grow magnetic films. Ferrites refer to oxide ceramics containing iron, which typically exhibit ferromagnetic properties and have been successfully used as magnetic materials for LSSE devices [29, 30]. However, conventional ferrite-film preparation techniques, such as liquid phase epitaxy and pulsed-laser deposition, require a high-temperature process (400–800 °C) for crystallizing the ferrites, hindering the formation of films on soft-surfaced materials, such as plastics. By contrast, ferrite plating is based on a chemical reaction process in which the Fe ion is oxidized (Fe2+ → Fe3+); therefore, no high-temperature processes, such as annealing, are required [31, 32]. This feature enables us to coat ferrite films on a variety of substrates, including plastic films. In this work, we prepared a ferrite Ni0.2Zn0.3Fe2.5O4 film using this method. As schematically illustrated in Fig. 2(a), we grew the film by simultaneously spraying an aqueous reaction solution (FeCl2 + NiCl2 + ZnCl2) and an oxidizer (NaNO2 + CH3COONH4) onto a substrate. In this process, the oxidizer reacts with the metal chlorides on the surface, forming a Ni0.2Zn0.3Fe2.5O4 film on the substrate. All the processes were performed below 100 °C.

Figure 2: Demonstration of heat-flow-sensing TE sheet based on the LSSE. (a) Schematic of the ferrite-plating method. An aqueous reaction solution (FeCl2 + NiCl2 + ZnCl2) and an oxidizer (NaNO2 + CH3COONH4) are sprayed onto a substrate mounted on a rotating stage. (b) SEM image of a Ni0.2Zn0.3Fe2.5O4 film grown on a SiO2/Si substrate using the ferrite-plating method. The film exhibits a columnar-crystal structure. The typical diameter of the columnar grains is approximately 100 nm. (c) Photograph of an LSSE-based flexible TE sheet, in which a Pt/Ni0.2Zn0.3Fe2.5O4 film was formed on a 25-μm-thick polyimide substrate. (d) TE voltage V as a function of an external magnetic field H, measured when a heat flux q was applied across the TE sheet. The sign of V is reversed when the sign of H or q changes. (e) TE voltage from the TE sheet as a function of q. From the fitting with the solid line, the heat-flow sensitivity of this TE sheet was V/q = 0.98 nV/(W/m²).

A noticeable feature of the ferrite film, grown via such a layer-by-layer chemical process, was its columnar-crystal grain structure. Figure 2(b) depicts the cross-sectional scanning electron microscope (SEM) image of a Ni0.2Zn0.3Fe2.5O4 film that was grown on a SiO2/Si substrate for the purpose of the SEM observation. The diameter of the columnar grain was typically approximately 100 nm. We also verified via transmission electron microscopy and electron diffraction measurements that the crystal orientation of the Ni0.2Zn0.3Fe2.5O4 was coherently aligned within a single columnar grain.
Such a columnar structure can function as a 1D spin-current conductor favourable for LSSE-based (and TSSE-free) heat-flow sensors for the following two reasons. First, in the LSSE configuration shown in Fig. 1, a magnon spin current is driven along the columnar grain and is thus less subject to grain scattering, effectively leading to the LSSE signal. Second, since the columnar-grain boundaries impede the transverse propagation of both magnons and phonons, in-plane components of a heat flow cannot effectively produce the TSSE in the light of previous studies (e.g., see refs [33, 34]). Thus we can exclude the possible TSSE contribution, enabling us to correctly measure a heat flow penetrating the TE sheet via the LSSE. Using the ferrite-plating technique, we successfully fabricated a flexible TE sheet based on the LSSE. Figure 2(c) shows a photograph of the prepared TE sheet. First, a 500-nm-thick Ni0.2Zn0.3Fe2.5O4 film was grown on a 25-μm-thick polyimide substrate. Then, a Pt film with a thickness of 5 nm was formed on the Ni0.2Zn0.3Fe2.5O4 film by means of magnetron sputter deposition. As shown in Fig. 2(c), our TE sheet was highly flexible and easily bent without breaking the Pt/Ni0.2Zn0.3Fe2.5O4 film. The sheet was then cut into small pieces with a size of 8 × 4 mm² for TE measurements.

Demonstration of the LSSE-based TE sheet for heat-flow sensing

To evaluate how well the LSSE-based TE sheet functioned as a heat-flow sensor, we investigated its TE property in the following fashion. A heat flux q was driven across the 4 × 4-mm² central area of the TE-sheet sample by sandwiching the sheet between two Peltier modules. While driving the heat flow in such a manner, we simultaneously monitored the exact value of q penetrating the TE-sheet sample with a commercially available thin-plate-shaped heat-flow sensor, which was set immediately above the sample. Because the commercial heat-flow sensor was placed in direct contact with the central area of the TE-sheet sample, we could assume that the heat flux value monitored by the sensor was the same as the q actually penetrating across the sample. An external magnetic field H, which controls the direction of the magnetization M of the Ni0.2Zn0.3Fe2.5O4 films, was also applied to the entire system. The TE voltage V between the two ends of the Pt film was measured with two contact probes. The resistance of the Pt film was determined to be R_Pt = 238 Ω. Figure 2(d) shows V as a function of H, measured when heat fluxes of q = −13.7, −6.5, 0.0, 5.6 and 11.6 kW/m² were driven across the TE-sheet sample. The TE voltage was observed along the direction perpendicular to the direction of both q and H, as derived from equation (1) (see the inset of Fig. 2(d)). The result shows that the sign of V is flipped when q or H is reversed, which is a typical behaviour of LSSE-based devices. The heat-flux dependence of the TE voltage in Fig. 2(e) clearly demonstrates that V is proportional to q. The heat-flow sensitivity derived from the fitted line is V/q = 0.98 nV/(W/m²). The demonstration of this linear relationship between V and q suggests that our LSSE-based TE sheet functioned as a heat-flow sensor.
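The calibration just described reduces to a one-parameter linear fit. The sketch below assumes the heat fluxes quoted for Fig. 2(d) and generates mock voltage readings from the reported sensitivity of ~0.98 nV/(W/m²) plus noise (the voltage values are synthetic, not measured data); it then fits the line and inverts it to turn a new voltage reading into a heat-flux estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Heat fluxes from Fig. 2(d) [W/m^2]; the voltages are synthetic, generated
# from the reported sensitivity of ~0.98 nV/(W/m^2) plus mock noise.
q = np.array([-13.7, -6.5, 0.0, 5.6, 11.6]) * 1e3
v = 0.98e-9 * q + rng.normal(0.0, 0.2e-6, q.size)

# Linear calibration V = s*q + offset
s, offset = np.polyfit(q, v, 1)
print(f"fitted sensitivity: {s * 1e9:.2f} nV/(W/m^2)")

# Inverting the calibration: a new voltage reading -> heat-flux estimate
v_reading = 5e-6    # a hypothetical 5 uV reading
q_est = (v_reading - offset) / s
print(f"estimated heat flux: {q_est / 1e3:.1f} kW/m^2")
```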
In an additional experiment, we confirmed that the TE sheet exhibits no output signal when a temperature gradient is applied in the in-plane direction (see Supplementary Information). This suggests that the TSSE is negligibly small in our ferrite-plated film because of its 1D spin-current-conducting property.

Ferrite-thickness dependence of the LSSE-based TE sheet

We performed additional experiments to ascertain the origin of the observed TE signal. Given that a ferrite composed of Ni0.2Zn0.3Fe2.5O4 is typically a semiconducting ferrimagnet with a small but non-zero electrical conductivity, it can exhibit the ANE, which also produces a transverse voltage in the same experimental configuration as the LSSE. In our ferrite-plated film, however, the in-plane electrical resistance of the Ni0.2Zn0.3Fe2.5O4 film was too high to be measured, which may be partly attributed to the vertically oriented grain boundaries of the columnar-structured film. Due to this transverse electric insulation, we could not observe any signals originating from the bulk ANE in the Ni0.2Zn0.3Fe2.5O4. However, there still remains a possibility that the TE signal includes an ANE contribution caused by magnetic proximity effects at the Pt/Ni0.2Zn0.3Fe2.5O4 interface [35]. To shed light on the TE conversion mechanism, we investigated the TE properties of samples with varied ferrite-film thicknesses t_F. Figure 3 presents the ferrite-thickness dependence of the heat-flow sensitivity (V/q)_Norm normalized to the sensitivity at t_F = 500 nm. The (V/q)_Norm values monotonically increase for t_F < 100 nm, whereas the t_F dependence of V becomes saturated for t_F > 100 nm. The plots are well fitted by an exponential curve (V/q)_Norm = 1 − exp(−t_F/λ) with λ = 71 nm. Similar to recent LSSE studies using yttrium iron garnet (YIG) films [36], this ferrite-thickness dependence is consistently explained by the magnon-driven LSSE scenario [27, 28, 37, 38], in which a certain ferrite-thickness region (corresponding to the magnon-propagation length) below the Pt/ferrite interface effectively contributes to the voltage generation. On the other hand, the proximity-ANE scenario, which can occur at the Pt/Ni0.2Zn0.3Fe2.5O4 interface, is not able to explain this dependence. Thus, our finding indicates that the obtained signal originated mainly from the bulk magnon spin current driven by the LSSE. The result also suggests that our columnar-crystalline film possesses good spin-current-conduction properties suitable for LSSE-based sensors. Though it is beyond the scope of this work, such 1D spin-current conductors might have unconventional magnon-propagation properties different from those of 3D conductors, because magnon-scattering events can be altered in such a confined structure. Control of magnon propagation in low-dimensional conductors will be an exciting research topic for future work from both academic and practical viewpoints.

Figure 3: Ferrite-film-thickness dependence of LSSE-based TE sheets. The t_F dependence of the heat-flow sensitivity V/q, where the longitudinal axis is normalized by the sensitivity at t_F = 500 nm. The dependence is well fitted by an exponential curve (V/q)_Norm = 1 − exp(−t_F/λ) with λ = 71 nm, consistent with the magnon-driven LSSE scenario.
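The saturation curve reported here is equally straightforward to reproduce as a fitting exercise. In the sketch below the thickness grid and the normalised sensitivities are mock data generated from the quoted fit, (V/q)_Norm = 1 − exp(−t_F/λ) with λ = 71 nm, plus noise; scipy's curve_fit then recovers the magnon-propagation length.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

# Ferrite thicknesses [nm]; the sensitivities are mock values drawn from the
# reported exponential saturation (lambda = 71 nm) with small added noise.
t_f = np.array([20.0, 50.0, 100.0, 200.0, 300.0, 500.0])
s_norm = 1.0 - np.exp(-t_f / 71.0) + rng.normal(0.0, 0.02, t_f.size)

def model(t, lam):
    # (V/q)_Norm = 1 - exp(-t_F / lambda)
    return 1.0 - np.exp(-t / lam)

(lam_fit,), pcov = curve_fit(model, t_f, s_norm, p0=[50.0])
print(f"magnon-propagation length: {lam_fit:.0f} +/- {np.sqrt(pcov[0, 0]):.0f} nm")
```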
Bending-curvature dependence of the LSSE-based TE sheet

Finally, we investigated the heat-flow-sensing capability of the flexible LSSE-based TE sheet when the sheet was bent. Figure 4(a,b) depicts the H-dependence of the TE voltage V for the same Pt/Ni0.2Zn0.3Fe2.5O4/polyimide sample when a heat flux q was applied across the sample over a 20 × 20-mm² area under conditions where the sample was flat or bent (with a radius of curvature of r = 17 mm), respectively. The dependence of the heat-flow sensitivity, V/q, on the curvature r⁻¹ is presented in Fig. 4(c). The result clearly demonstrates that V/q is nearly constant, independent of r⁻¹, suggesting that the bending stresses applied to the Pt/Ni0.2Zn0.3Fe2.5O4 films do not significantly affect the TE conversion process consisting of the LSSE and the ISHE. This TE property, i.e., that the TE conversion is independent of the bending condition, is quite desirable for heat-flow sensing applications on various curved surfaces, because we are able to avoid additional calibration steps that depend on individual measuring objects with various surface curvatures.

Figure 4: Heat-flow sensing with a bent TE sheet. (a) TE voltage V from a flat Pt/Ni0.2Zn0.3Fe2.5O4/polyimide sample as a function of an external magnetic field H, measured when the heat flux q was applied across the sample over a 20 × 20-mm² area. (b) H dependence of the TE voltage V from a Pt/Ni0.2Zn0.3Fe2.5O4/polyimide sample that was bent with a radius of curvature r = 17 mm, when the heat flux q was applied across the sample over a 20 × 20-mm² area. (c) Heat-flow sensitivity V/q as a function of curvature r⁻¹, indicating that V/q is almost independent of r⁻¹.

Discussion

We successfully demonstrated that an LSSE-based flexible TE sheet with a 1D spin-current conducting film functions as a heat-flow sensor. The ferrite-thickness dependence of the TE voltage suggests that the TE signal was caused predominantly by the LSSE, which is consistent with other reports using Pt/NiFe2O4 [39]. The magnon-propagation length in our Ni0.2Zn0.3Fe2.5O4 film perpendicular to the film plane is approximately 71 nm. The TE sheet exhibits nearly identical heat-flow sensitivity regardless of bending curvature, suggesting that our columnar-crystalline film retains good 1D spin-current-conduction properties even when bent. The outstanding features of our TE sheet in contrast to currently available sensors are high flexibility in shape and remarkably low thermal resistance, which is a highly desirable feature for versatile heat-flow sensing. Although we formed ferrite films on plastic substrates in this work, it is also possible to directly plate various heat sources with ferrite films, thereby offering a thermal-sensing function while minimally obstructing the innate heat flux. Such features will offer a variety of opportunities for less destructive heat-flow measurements. To use the TE sheets for a wide range of practical applications, the heat-flow sensitivity V/q must be further improved. A straightforward method to enhance V/q is to enlarge the size of the TE sheet, as the output voltage scales linearly with the film length l. We can also increase the effective length l inside a certain area by adopting meandering-patterned metallic film structures [20, 40]. Another strategy is to replace the Pt. We investigated TE-sheet samples with different metallic materials instead of Pt and found that the heat-flow sensitivity V/q of W/Ni0.2Zn0.3Fe2.5O4 was 3.5-fold larger than that of Pt/Ni0.2Zn0.3Fe2.5O4 (see Supplementary Information).
Moreover, we can enhance the sensitivity further by adopting recently reported FI/NM multilayer structures [19], or an FI/FM structure which can utilize both the SSE and ANE [20, 21]. Although we applied an external magnetic field H to the TE sheet for our experimental demonstration, this step is not necessary if the spontaneous magnetization M of the ferrite is sufficiently stable. The improvement of such magnetic stability can be realized, for example, by doping cobalt into ferrite-plated films, which is known to enhance the coercive field of the ferrites [41]. The LSSE-based heat-flow-sensing technique, in which a heat flux induces an electrical signal indirectly via an LSSE-driven spin current, offers unconventional device-design opportunities, leading to novel heat-managing applications.

Methods

Sample preparation

To prepare the Ni0.2Zn0.3Fe2.5O4 film for the TE sheet via the ferrite-plating method, we first mounted a 25-μm-thick polyimide substrate on a rotating stage and then sprayed an aqueous reaction solution (FeCl2 + NiCl2 + ZnCl2) and an oxidizer (NaNO2 + CH3COONH4) from two nozzles placed above the stage, as shown in Fig. 2a. This setup enabled us to grow the ferrite film by alternating adsorption and oxidation of the ingredient materials (including Fe, Ni and Zn). During the process, the temperature of the stage was maintained at approximately 90 °C. The thickness of the Ni0.2Zn0.3Fe2.5O4 film was controlled via the time period of this formation process. The composition of the ferrite film was analysed by inductively coupled plasma spectroscopy (ICPS). A Pt film was deposited on top of the Ni0.2Zn0.3Fe2.5O4 film with a magnetron sputtering system. Immediately before the sputtering process, the sample was exposed to argon plasma for 10 s to clean the surface of the Ni0.2Zn0.3Fe2.5O4.

TE conversion measurements

To evaluate the TE conversion of a heat flow to electric voltage, the sample was cut into small 8 × 4-mm² pieces using a cutter. To investigate the heat-flow-sensing properties of the LSSE-based TE sheet, we drove a heat flow across the sheet using two commercial 4 × 4-mm² Peltier modules. The two Peltier modules were attached to the top and bottom of the TE sheet, enabling us to heat one side and cool the other side of the TE sheet. The temperature difference, applied in such a manner, led to a heat flux penetrating through the TE sheet. Because the in-plane thermal conductance in our thin TE sheet was quite small, we can assume that the direction of the heat flux was nearly perpendicular to the TE sheet. While driving the heat flow, we simultaneously monitored the exact value of q penetrating the TE sheet using a commercial thin-plate-shaped heat-flow sensor. The sensor was placed between the upper Peltier module and the TE sheet, in direct contact with the Pt film of the TE sheet. With this setup, we can assume that the same amount of heat flux q flowed across both the TE sheet and the sensor. The generated TE voltage was measured with a digital multimeter.

TE measurements of the bent samples

To evaluate bent LSSE-based TE sheets as shown in Fig. 4, we used pairs of oxide-coated aluminium blocks with curved (concave and convex) surfaces. In the experiments, the TE sheet was sandwiched by the concave and convex blocks with a certain bending curvature.
To investigate the bending-curvature dependence, we prepared several pairs of such blocks with different surface curvatures, in which the lateral size of the blocks was fixed to 20 × 20 mm². The heat-flow-sensing properties of the bent TE sheets were evaluated in the same manner as described above. The heat flux was driven across the TE sheet by two Peltier modules attached to the top and bottom of the block pair that sandwiched the sheet. Commercially available 20 × 20-mm² heat-flux sensors were also used to monitor the level of the heat flux penetrating across the TE sheet.

Additional Information

How to cite this article: Kirihara, A. et al. Flexible heat-flow sensing sheets based on the longitudinal spin Seebeck effect using one-dimensional spin-current conducting films. Sci. Rep. 6, 23114; doi: 10.1038/srep23114 (2016). Paper: http://dx.doi.org/10.1038/srep23114. News: https://phys.org/news/2016-04-seebeck-thermoelectric-device-higher-conversion.html
New study shows that the resilience of ecosystems can be measured from space (DOI: 10.1038/s41558-022-01352-2)

A natural habitat's ability to withstand and recover from damage can be empirically monitored from space—and the method may prove important during upcoming decades of climate and land-use change. The first study to empirically document that vegetation resilience can be measured from space is published today in Nature Climate Change by a research team from the University of Potsdam, the Potsdam Institute for Climate Impact Research (PIK), the Technical University of Munich (TUM) and the University of Exeter. The method will likely be important for future assessments of declines in vegetation resilience due to anthropogenic climate change and unsustainable resource management.

"New ways of handling large data sets make it possible to check on widely held theories and assumptions about how ecosystems function," said lead author Taylor Smith, from the University of Potsdam. "Our work empirically confirms one of those theories—that it is possible to measure how resilient vegetation is to outside pressure with a straightforward mathematical model."

The study used observational data to estimate the variability of global vegetation as well as the speed of recovery after large losses in vegetation. By analyzing different satellite products since 1992, the group shows that simple metrics can be used to estimate the resilience of ecosystems to large shocks—even where large losses of vegetation haven't happened yet.

"So far it has been difficult to reliably measure vegetation resilience at a global scale," said co-author Niklas Boers, of TUM, PIK and Exeter's Global Systems Institute. "We used powerful mathematical results to overcome this problem. This allows us to continuously measure changes in vegetation resilience at any place on the Earth's surface. We provide a solid, empirically confirmed framework for monitoring vegetation resilience from space."

The work further reveals that in many regions, global vegetation has lost resilience over the last two decades, meaning vegetation has become more vulnerable and takes longer to regain its natural equilibrium after disturbances. "Vegetation resilience can be thought of as the ability to recover from large shocks such as droughts or fires. We find very different long-term trends in resilience—depending on climate zone and vegetation type—but overall, declines in vegetation resilience have become more common during the last two decades," said Smith.

The analysis shows that on average, vegetation initially gained resilience globally during the '90s. Then a shift took place, with a more pronounced resilience loss since the early 2000s. The finding indicates that especially tropical rainforests and Siberian boreal forests have grown more vulnerable to events like wildfires, pests, human disturbances, and natural catastrophes. Numerous factors might contribute to this shift, such as natural variability, anthropogenic climate change, increasing human land use and deforestation, and a higher frequency of droughts and wildfires.

"We urgently need to intensify our efforts to detect potential changes in vegetation resilience and to understand the underlying drivers," said Boers. "We expect anthropogenic global heating as well as land-use change to play an important role, but many processes aren't well understood, making it difficult to predict the fate of natural vegetation systems in the coming decades."
Smith added: "Satellite data can play a crucial role here, particularly in continuously monitoring the health of vegetation and other ecosystems." The study is part of the TiPES project, an EU Horizon 2020 interdisciplinary climate science project on tipping points in the Earth system. Eighteen partner institutions work together in more than 10 countries. TiPES is coordinated and led by The Niels Bohr Institute at the University of Copenhagen, Denmark and the Potsdam Institute for Climate Impact Research, Germany. | A new study published in Nature Climate Change has found that vegetation resilience can be empirically monitored from space, allowing for the tracking of changes in ecosystems' ability to withstand and recover from damage. The research team, from the University of Potsdam, Potsdam Institute for Climate Impact Research, Technical University of Munich, and University of Exeter, used satellite data to estimate the variability of global vegetation and the speed of recovery after large losses. The study found that simple metrics can be used to estimate the resilience of ecosystems to large shocks, and that global vegetation has lost resilience over the last two decades, making it more vulnerable to events like wildfires, pests, and human disturbances. The researchers suggest that anthropogenic climate change, land-use change, and natural variability may be contributing factors to this decline, and that satellite data can play a crucial role in monitoring the health of vegetation and other ecosystems. | None | Abstract The character and health of ecosystems worldwide is tightly coupled to changes in Earth’s climate. Theory suggests that ecosystem resilience—the ability of ecosystems to resist and recover from external shocks such as droughts and fires—can be inferred from their natural variability. Here, we quantify vegetation resilience globally with complementary metrics based on two independent long-term satellite records. We first empirically confirm that the recovery rates from large perturbations can be closely approximated from internal vegetation variability across vegetation types and climate zones. On the basis of this empirical relationship, we quantify vegetation resilience continuously and globally from 1992 to 2017. Long-term vegetation resilience trends are spatially heterogeneous, with overall increasing resilience in the tropics and decreasing resilience at higher latitudes. Shorter-term trends, however, reveal a marked shift towards a global decline in vegetation resilience since the early 2000s, particularly in the equatorial rainforest belt. Main Natural ecosystems are severely threatened by climate change and biodiversity loss; the Amazon, African and southeast Asian rainforests are key examples that have attracted substantial recent attention 1 , 2 , 3 . These tropical vegetation systems have been inferred to exhibit multistability for broad ranges of mean annual precipitation 4 , 5 ; within the same precipitation ranges, both the rainforest state and an alternative savannah state are simultaneously stable. This implies that, even absent long-term changes in local or regional precipitation, transitions from the current rainforest state to the savannah state are possible and may be triggered by external perturbations such as droughts, forest fires and deforestation 6 . 
Although ecosystem transitions in tropical rainforests have received widespread attention, the risk of transitions to alternative ecosystem states appears to be a global characteristic that extends to high-latitude 7 , 8 and dryland ecosystems 9 . Given that ecosystem transitions could turn net carbon sinks into carbon sources 3 and the tremendous potential of vegetation to reduce atmospheric carbon dioxide concentrations 10 , the mitigation of anthropogenic climate change and the maintenance of global biodiversity are strongly dependent on the resilience of vegetation systems worldwide. Ecosystem resilience is typically defined as the capacity to resist and recover from external disturbances 11 , 12 , 13 . Unfortunately, this definition only allows for the empirical measurement of resilience either in controlled experiments (by applying an artificial disturbance) or by waiting for occurrences of large external disturbances to natural vegetation systems. Due to the scarcity of suitably strong external perturbations, it is difficult to quantify the resilience of natural ecosystems at a global scale, and in particular to investigate resilience changes over time. Theoretically, the fluctuation–dissipation theorem (FDT) from statistical mechanics 14 , 15 , 16 , 17 suggests that for specific classes of systems, the response to external perturbations can be expressed in terms of the characteristics of natural fluctuations around the equilibrium state. In other words, the FDT states that the rate at which a system will return to equilibrium following an external disturbance can be determined from its internal natural fluctuations. The tremendous practical value of the FDT comes from the fact that, if it can be shown to hold for a given system, the response to external perturbations can be predicted on the basis of the internal variability of the system in question. Evidence that the FDT holds has been revealed in several real-world systems 17 , ranging from financial market data 18 , 19 to atmospheric and climate dynamics 20 , 21 . Several studies have suggested that the lag-one autocorrelation (AC1)—a measure of how strongly correlated neighbouring time spans of a given time series are—and variance of a system can be used as measures of vegetation resilience 1 , 22 , 23 , 24 , 25 , 26 , 27 . The variability of natural fluctuations can be estimated in terms of the variance 22 , 27 , 28 , while the strength of the system’s memory can be measured using the AC1 1 , 23 , 24 , 25 , 28 . Low-dimensional dynamical system frameworks and designed experiments justify this choice by showing that variance and AC1 increase as the system approaches a critical threshold beyond which a bifurcation-induced transition—a jump to an alternative stable state—occurs, which is interpreted as a loss of resilience 29 , 30 . The increase in AC1 together with a corresponding increase in variance have been termed early-warning signals for critical transitions; the underlying change in dynamics is referred to as ‘critical slowing down’ 22 , 28 . It has been shown that early-warning signals can be identified before abrupt climate transitions evidenced in palaeoclimate records 31 , 32 , 33 as well as in ecosystem 28 and climate 34 , 35 model simulations. 
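The linearized dynamics behind these indicators can be made concrete with a short simulation. The following sketch (Python; a toy illustration of the general theory, not code from any specific study) integrates a discretized Ornstein-Uhlenbeck process x[t+1] = e^{κΔt}·x[t] + σε[t] and prints how both the variance and the AC1 of the fluctuations grow as the damping rate κ approaches zero from below, the "critical slowing down" signature described above:

```python
import numpy as np

def simulate_ar1(kappa, sigma=1.0, n=20000, dt=1.0, seed=0):
    """Discretized Ornstein-Uhlenbeck process: x[t+1] = a*x[t] + sigma*noise,
    with autoregressive coefficient a = exp(kappa*dt) and damping kappa < 0."""
    rng = np.random.default_rng(seed)
    a = np.exp(kappa * dt)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + sigma * rng.standard_normal()
    return x

for kappa in (-1.0, -0.5, -0.1, -0.01):  # stability decreasing towards kappa = 0
    x = simulate_ar1(kappa)
    ac1 = np.corrcoef(x[:-1], x[1:])[0, 1]   # lag-one autocorrelation
    var = x.var()                            # variance of the fluctuations
    print(f"kappa={kappa:+.2f}  AC1={ac1:.3f} (theory {np.exp(kappa):.3f})  "
          f"var={var:.2f} (theory {1 / (1 - np.exp(2 * kappa)):.2f})")
```

Because both indicators rise together as the damping weakens, upward trends in AC1 and variance can be read as a loss of stability even before any transition occurs.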
However, although the AC1 and variance have been used to quantify the stability or resilience of different systems, their actual suitability as measures of ecosystem, and in particular vegetation, resilience has not been confirmed outside of controlled and model-based experiments 36 , 37 , and in particular not based on empirical evidence. In this article, we use empirical remotely sensed vegetation data to test for the correspondence between theoretical vegetation resilience—AC1 and variance—and the rates of recovery from perturbations. We first use large perturbations to derive empirical recovery rates for diverse landscapes, vegetation types and climate zones using two independent vegetation datasets based on optical (advanced very-high-resolution radiometer (AVHRR) normalized difference vegetation index (NDVI), 1981–2015 38 ) and passive microwave (vegetation optical depth (VOD), 1992–2017 39 ) data; these data measure changes in vegetation with different methods and thus provide complementary information for our analysis. We then show that for VOD, the empirically estimated recovery rates from large external perturbations are indeed closely related to the continuously measurable response to small natural fluctuations, quantified here by AC1 and variance. We further show that the AC1 and variance of NDVI are not well matched to empirically estimated recovery rates from large disturbances and conclude that VOD is a more suitable basis for measuring vegetation resilience. We emphasize that while both AC1 and variance have previously been used to estimate vegetation resilience 1 , their theoretically expected relationships with recovery rates from perturbations, and thus with resilience, have yet to be confirmed empirically for vegetation systems. Moreover, temporal changes in AC1 and variance of remotely sensed vegetation indices, as we investigate here, have rarely been studied 40 , 41 . By comparing with the empirical rates of recovery from external perturbations, we demonstrate using VOD that both AC1 and variance provide robust, empirically verified global resilience measures. On the basis of this relationship, we further quantify global-scale changes in vegetation resilience since 1992 and find coherent resilience loss across land-cover types that has accelerated in the past two decades. Quantifying vegetation recovery from external perturbations Vegetation in the natural world is constantly subject to disturbances that vary greatly in frequency and intensity. Many of these signals are subtle, and identifying minor and short-term disturbances is difficult. Large excursions from the typical vegetation state of an ecosystem can, however, be identified by abrupt transitions in time series of vegetation indices. The empirical local recovery rate can then be estimated after each abrupt negative transition by fitting an exponential function to the time series as it recovers towards its previous state (Fig. 1 , see Methods for details). Fig. 1: Global vegetation data. a , Global long-term mean of VOD 39 (1992–2017). b , VOD time series for a given location in the Brazilian Amazon (8.375° S, 50.875° W). Raw time series in black, with deseasoned and detrended time-series residual in blue (see Methods for details). c , Recovery of the exemplary time series to the previous mean state after a rapid transition, with commensurate exponential fits. 
Rare large disturbances, such as those in 2007 and 2010, can be used to track the recovery of vegetation and assign a recovery rate using an exponential fit. See Extended Data Fig. 1 for a corresponding figure based on the NDVI. Exp., exponential. Both VOD (Fig. 1) and NDVI (Extended Data Fig. 1) are subject to the same types of major external disturbances (for example, droughts or fires) that can rapidly reduce both vegetation density (VOD) and vegetation productivity or greenness (NDVI). It is important to note that while both datasets measure vegetation, they do not describe the same vegetation parameters and hence do not respond identically to external shocks; this can in some cases mean that the number of detected transitions differs between the two vegetation datasets over the same period. In addition, while vegetation recovery is measurable in both datasets, the time frame of those recoveries, and hence the fitted exponential function, can be dramatically different for the same perturbation (Fig. 1 and Extended Data Fig. 1). Further discussion of the limitations of the disturbance detection procedure can be found in Methods. Estimating resilience from intrinsic variability We find globally well-distributed recovery rates from diverse external shocks (Fig. 2 and Extended Data Fig. 2). Not all landscapes have experienced rapid and drastic changes in vegetation over the satellite measurement period; for such regions, it is impossible to directly measure vegetation resilience in terms of recovery from an external shock. Even in regions where perturbations are relatively frequent, they are too sparsely distributed to allow for an estimation of changes in the recovery rate, and thus resilience changes, through time (Fig. 2a). Fig. 2: Global distribution of recovery rates. a, Recovery rate (for well-determined exponential fits, R² > 0.2) for VOD (n = 11,538 perturbations for 10,620 unique locations). b, Theoretical estimate of the recovery rate computed via r_AC1 = log[AC1] (Methods) from the AC1 of the detrended and deseasoned VOD time series at each location. c, Theoretical estimate of the recovery rate computed via r_Var = σ²/(2〈x²〉) (Methods) from the variance 〈x²〉 of the detrended and deseasoned VOD time series at each location. Bare earth, snow and anthropogenic land covers are excluded from the analysis 42 (Methods). Note the sparsity of grid cells where there have been abrupt shocks that can be exploited to estimate the recovery rate (a), as opposed to theoretical measures (b, c) that can be computed for all grid cells with vegetation. Also note the similarity of the spatial patterns in b and c and their resemblance to the spatial pattern shown in a as far as there are values for the recovery rate available. d, e, Relative deviation d of the theoretical recovery rate estimated from AC1 (d) and variance (e) (for example, d = (r − r_AC1)/r). Clear patterns of over- and underestimation of the recovery rate indicate that the theoretical framework does not perform equally in all locations. See Extended Data Fig. 2 for a corresponding figure based on the NDVI. The FDT suggests that the rate of a system's recovery from large external perturbations is related to the variability (quantified by variance 22,27,28) and memory timescale (quantified by AC1 1,23,24,25,28) of natural fluctuations around the equilibrium 16.
Theory predicts an exponential relationship between the AC1 and the negative recovery rate r, that is, AC1 = e^{rΔt}, and a power-law relationship between the variance of the VOD time series x and the recovery rate r, that is, 〈x²〉 = −σ²/(2rΔt), where σ is the standard deviation of the driving noise, r < 0, and we set the time steps to Δt = 1 (see Methods for details). For the set of locations where empirical recovery rates can be estimated (Fig. 2a), both AC1 and variance can be derived directly from the corresponding time series. For areas where it was possible to empirically estimate the recovery rate from large perturbations (Fig. 2a), there is broad spatial agreement with the AC1 (Fig. 2b) and variance (Fig. 2c) estimates (see also zoomed-in maps, Supplementary Figs. 1 and 2). Moreover, the two theoretical recovery rate estimates themselves, which are available for all vegetated grid cells, exhibit similar spatial distributions (compare Fig. 2b,c), especially if the relative order of values is considered (see the rank comparison in Supplementary Fig. 3). Note that the AC1-based estimate for the recovery rate r mostly underestimates the recovery rate (Fig. 2d), especially in parts of North America, central Europe and southern Africa, while the variance-based estimate for the recovery rate mostly overestimates the recovery rate (Fig. 2e). To more concisely compare the empirical (Fig. 2a) with the two theoretical recovery estimates (Fig. 2b,c), we compare them on a point-by-point basis (Fig. 3). For the VOD, the expected relationships hold remarkably well; for NDVI, the link between empirical and theoretical resilience metrics is much weaker (see Extended Data Fig. 3). Fig. 3: Empirical confirmation of recovery rates. Comparison between empirically measured recovery rates and theoretical resilience metrics calculated over the five years preceding each transition, for VOD data 39. a, AC1 versus recovery rates r from exponential fits to recovering time series with R² > 0.3; the magenta (blue) line shows binned medians (means), which are close to the exponential fit of the empirical relationship between recovery rate and AC1 values (red line). Grey shading shows the data interquartile range. The AC1 thus shows the expected exponential relationship with the recovery rate, but quantitatively, some deviations from the theoretically expected AC1 = e^{r} (black line) are apparent. b, Same as a but for the variance. The variance indeed shows the expected power-law relationship with the recovery rate, but as for the AC1, there are some deviations from the theoretically expected 〈x²〉 = −σ_r²/(2r) relationship (black line), where we use the spatial mean of the driving noise σ_r. The mean variance and corresponding interquartile range are also shown for the case where the individual σ_r values for each grid cell are used to compute the variance (orange line, with shaded interquartile range). c, Binned medians of AC1 as a function of the empirically measured recovery rate r, for increasing thresholds on R² of the exponential fit to the recovering time series after abrupt transitions, as indicated in the legend. d, Same as c but for the variance. Note that the match between empirical and theoretical estimates of the recovery rate improves the more restrictive the empirical estimation of the recovery rate is; low-R² variance medians in d plot on top of each other until R² > 0.3.
Bare earth, snow and anthropogenic land covers are excluded from the analysis 42 (Methods). See Extended Data Fig. 3 for a corresponding figure based on the NDVI and Extended Data Fig. 4 for a corresponding figure using the whole time series to compute AC1 and variance with VOD. See Supplementary Fig. 4 for alternative measures of theoretical variance based on different σ estimates. When considering the AC1 and variance values directly as functions of the recovery rates for all available grid cells together, the theoretically expected relationships are overall corroborated by the observational data, although differences between geographical regions are neglected when investigating the relationship in this way. As expected, some differences are therefore visible (compare Fig. 2). We note that the correspondence between theoretical and empirical estimates becomes substantially better if only recovery rates from exponential fits with R² > 0.5 are considered, compared with recovery rates from all fits with R² > 0.1 (Fig. 3). This indicates that poor exponential fits to the recovering time series after transitions are a key reason for the differences between measurement and theory, and suggests in turn that the more reliable the recovery rate estimate, the closer the match between empirical and theoretical estimates of the recovery rates. We also note that for the variance, uncertainties in estimating the standard deviation of the driving noise σ probably also play a role. Estimating the variance from the empirically determined recovery rate via 〈x²〉 = −σ²/(2rΔt) requires an estimate of σ. We calculate each individual σ_i and then bin the resulting data points to obtain the orange curve in Fig. 3b, while we use the globally averaged σ to obtain the black curve. Global shifts in vegetation resilience Rapid large-scale perturbations are not evenly distributed in space and time (compare Fig. 2), which renders a reliable estimation of temporal resilience changes in terms of empirical recovery rates impossible. As justified by the relationship between recovery rates and theoretical resilience metrics (compare Fig. 3), we instead calculate resilience in terms of both the AC1 and variance in rolling five-year windows over all vegetated areas (Fig. 4). In the following, we define resilience loss (gain) if at least one of the two indicators (AC1 or variance) shows a statistically significant increasing (decreasing) trend while the other indicator does not exhibit a significant trend in the other direction (Fig. 4). Fig. 4: Global resilience trends. a–c, Direction (+/–) of global resilience trends for AC1 and variance using VOD data 39 for 1992–2017 (a), 1992–2004 (b) and 2004–2017 (c). Bare earth, snow and anthropogenic land covers are excluded from the analysis 42 (white areas) (Methods). Linear trends are calculated on the basis of five-year rolling-window AC1 and variance estimates; only trends with P < 0.05 in either AC1 or variance are shown in colour (see Methods for details on significance testing). Pixels with mixed significant trends (for example, AC1 positive, variance negative) are shown in grey. See Supplementary Table 1 for raw pixel counts. See Extended Data Fig. 5 for global AC1 and variance trends for all three periods, and Supplementary Fig. 5 for latitude-aggregated trends. Note the increases in the strength of resilience loss since the 2000s, especially in the tropics (Extended Data Figs. 5–7).
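Before turning to the temporal trends, the theoretical recovery rate estimates of Fig. 2b,c can be illustrated in a few lines. The sketch below (Python) inverts the relationships AC1 = e^{rΔt} and 〈x²〉 = −σ²/(2rΔt); estimating the driving-noise level from the one-step prediction residuals is our own simplifying assumption, not a detail taken from the paper:

```python
import numpy as np

def theoretical_recovery_rates(resid, dt=1.0):
    """Back out the recovery rate r (< 0) from the natural fluctuations of a
    deseasoned, detrended residual series, inverting AC1 = exp(r*dt) and
    <x^2> = -sigma^2 / (2*r*dt)."""
    x = np.asarray(resid, dtype=float)
    x = x - x.mean()
    ac1 = np.corrcoef(x[:-1], x[1:])[0, 1]
    r_ac1 = np.log(ac1) / dt                  # AC1-based estimate
    sigma2 = np.var(x[1:] - ac1 * x[:-1])     # driving-noise variance, assuming AR(1)
    r_var = -sigma2 / (2.0 * np.var(x) * dt)  # variance-based estimate
    return r_ac1, r_var
```

As the text notes, these raw inversions carry systematic biases (Fig. 2d,e), so the empirically calibrated relationship of Fig. 3 is preferable for quantitative use.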
Over the period 1992–2017, the spatial pattern of resilience trends in terms of AC1 and variance is complex (Fig. 4a and Extended Data Fig. 5) but follows consistent latitudinal patterns, where equatorial (for example, Amazon and Congo basins) and monsoon-driven (for example, southeast Asia) areas show generally increasing resilience (negative trends in both indicators), and high-latitude areas typically show decreasing resilience trends, especially in the Northern Hemisphere. The global picture for long-term resilience trends is thus mixed; there is only a slight majority of grid cells with resilience losses (54.2%) compared with the number of grid cells with resilience gains (41.6%) over the whole period 1992–2017. The given percentages refer to the set of grid cells that have at least one statistically significant trend in either of the two indicators; the unconfined class with significant yet opposing trends contributes the remaining ~4%. When we restrict the analysis to the first half of our study period (1992–2004), trends are again mixed, with increasing resilience in the tropics and decreasing resilience at higher latitudes (Fig. 4b); these trends are stronger for variance than for AC1 (Extended Data Fig. 5). From the early 2000s onward, however, we observe a marked increase in resilience loss in terms of both indicators (that is, significantly positive trends in AC1 and variance; Fig. 4c and Extended Data Figs. 5–7). We observe an increase from 28.2% to 59.4% of pixels with resilience loss between the periods 1992–2004 and 2004–2017; the percentage of pixels showing resilience gains decreased from 37.9% to 33.8%. Areas with significant yet opposing trends contribute the remaining 33.8% and 6.8%, respectively; many regions with opposing significant trends until 2004 show coherent resilience loss in both indicators for the period since 2000, 2002 or 2004 (Fig. 4c and Extended Data Fig. 7). Some regions, such as the high northern latitudes, southern Africa and parts of Australia, show consistent resilience losses throughout the study period, which broadly agrees with previous findings based on alternative resilience metrics and AVHRR NDVI data 41. Many regions, in particular the equatorial rainforest belt, have reversed from gaining resilience (blue regions, 1992–2004) to losing resilience (orange and red regions, 2000s onwards). Long-term (1992–2017) trends thus conceal a strong reversal from gains to losses in resilience in many regions. When changes in AC1 and variance are aggregated by land cover 42, we infer that evergreen broadleaf forests show overall lower AC1 and variance (higher resilience) than other land-cover types (Extended Data Fig. 6); nevertheless, the global tendency is towards aggregate decreases in resilience (in terms of AC1) across all land-cover classes. These trends maintain a similar form if a three-year or seven-year rolling window is used to calculate continuous changes in resilience (Extended Data Fig. 6). It should be noted, however, that this approach conceals considerable spatial trend variability (Extended Data Fig. 5) and will (although confined to single land-cover types) smooth over vast differences in biomes worldwide; hence, these aggregated time series should be carefully interpreted in the context of the global trend maps (Fig. 4 and Extended Data Fig. 5).
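The loss/gain classification defined above reduces to a simple per-pixel decision rule. A minimal sketch (Python; function and variable names are ours) that takes the Kendall-Tau trend statistics and significance levels of both indicators:

```python
def classify_resilience_trend(tau_ac1, p_ac1, tau_var, p_var, alpha=0.05):
    """Apply the paper's rule: 'loss' if at least one indicator has a
    significant positive trend while the other is not significantly negative;
    symmetrically for 'gain'; significant opposing trends are 'mixed'."""
    up   = [(tau_ac1 > 0 and p_ac1 < alpha), (tau_var > 0 and p_var < alpha)]
    down = [(tau_ac1 < 0 and p_ac1 < alpha), (tau_var < 0 and p_var < alpha)]
    if any(up) and not any(down):
        return "loss"    # rising AC1/variance implies slower recovery
    if any(down) and not any(up):
        return "gain"
    if any(up) and any(down):
        return "mixed"   # significant yet opposing trends (grey pixels)
    return "none"        # no significant trend in either indicator
```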
Variance presents a more mixed picture when aggregated by land-cover class, with losses of resilience being expressed more strongly since the early 2000s (Extended Data Fig. 6 ). Previous work proposed that AC1 will always increase towards a critical transition, but variance can in some cases decrease 27 ; the two metrics are also not guaranteed to change at the same rate. This is also to some degree expressed in our global trends (Extended Data Fig. 5 ), where variance trends, particularly for the tropics, are more strongly negative than for AC1 over both the whole period 1992–2017 and the early period 1992–2004. Both AC1 and variance trends, however, are majority positive for the recent period ~2000–2017 (Fig. 4c and Extended Data Figs. 5 – 7 ). Note that many regions where we observe strong vegetation resilience loss are also fire prone (for example, Siberia, Canada and western North America); increasing fire frequencies due to drier conditions in these regions could explain some of the observed recent vegetation resilience loss 43 . Increases in temperature, alongside changes in precipitation and weather extremes, could also be a potential driver of changing vegetation resilience; we emphasize, however, that a detailed analysis of the different potential causes for the inferred resilience loss (and in particular its acceleration during the past two decades) is still lacking and is an important topic for future research. Discussion Our results provide empirical evidence that both AC1 and variance are directly related to vegetation resilience, defined as the recovery rate from external perturbations. The AC1 and variance can hence be used to estimate resilience in situations where controlled experiments are not possible and external perturbations are rare. Our findings, therefore, justify the usage of AC1 and variance as vegetation resilience metrics 1 , 41 , 44 and provide an empirical basis for future studies based on these theoretical resilience metrics. However, our results also show that the resilience estimates derived from the common AC1 and variance metrics directly using theoretical relationships may be slightly biased, and instead the modified empirical relationships revealed in Fig. 3 should be used to translate AC1 and variance into the recovery rate as a measure of resilience. On the basis of the thus empirically confirmed relationship between AC1/variance and vegetation resilience, we infer a heterogeneous spatial pattern of resilience gains and losses; resilience losses in the high northern latitudes are consistent since the early 1990s, but in the tropics, we detect gains during the 1990s and pronounced resilience losses since around the year 2000. While the directions of AC1 and variance trends broadly agree (Fig. 4 and Extended Data Figs. 5 – 7 ), there remains considerable spatial heterogeneity. We find marked differences in our results when using the NDVI instead of the VOD data. While we cannot say with complete certainty what drives this disparity, it is likely that differences in the parameters measured by the satellites play a critical role. VOD is primarily sensitive to vegetation density and, thus, will respond to changes in both leafy and woody biomass 39 . NDVI, however, is sensitive to ‘greenness’, which is often interpreted as vegetation productivity or chlorophyll content; it is well known that NDVI is a poor estimator of biomass 45 . 
Recovery in NDVI after a disturbance can thus be rapid, even if a completely new species mix accounts for the post-disturbance vegetation growth (for example, forest replaced by grass). VOD, however, will remain suppressed until vegetation density (for example, leaves and stems) returns. It is thus likely that the empirically derived recovery rates for NDVI contain much higher levels of noise and that some recoveries to previous NDVI values represent a transition to a new vegetation mix rather than a return to the actual previous vegetation state. The relatively poor constraint on vegetation type provided by NDVI is a major barrier to its use in assessing ecosystem state and stability; we therefore propose to rather employ VOD data for such purposes. A few potential caveats should be kept in mind when interpreting our results. (1) We do not have a strong constraint on the type and cause of the vegetation perturbations used to calculate recovery rates. Sufficient data on all types of disturbances, their spatial extent and their magnitude do not exist; we thus rely on a data-driven approach to estimate the timing and magnitude of a given disturbance. We note, however, that since we determine the empirical recovery rates using only parts of the time series following an abrupt transition, we can estimate a recovery rate without knowing what kind of event (for example, fire, drought) caused the abrupt transition. (2) Possible spurious or missed time-series transitions are carried forward into our analysis of the global relationship between empirical and theoretical vegetation resilience; this probably accounts for some of the scatter seen in Fig. 3 (see Methods for further details). (3) Some changes in variance and autocorrelation are not necessarily related directly to vegetation resilience, for example, in the case of time-lagged vegetation response to water deficits 46 that could modify the measured AC1. At the global scale of our analysis, however, we posit that our empirical confirmation of resilience metrics and long-term trends remain robust. (4) We are limited by the mathematical framework to studying only systems that return to the previous state and therefore probably miss many important ecosystem transitions from which there has been no recovery to the original state. Finally, (5) it is important to note that we cannot say for certain whether the acceleration of resilience loss observed in the past decades (Fig. 4 ) will continue into the future; indeed, it is possible that global vegetation resilience is responding to a (multi-)decadal climate variability mode (compare Extended Data Fig. 6 ), which could in principle drive a global-scale reversal towards renewed resilience gains. Theoretically, a critical transition will occur when the AC1 reaches a value of one, corresponding to a vanishing recovery rate; in practice, however, extrapolating AC1 trends into the future is not feasible. Our results are based on empirical data and are thus not predictive; they show only how vegetation resilience has changed in recent decades. We have also not assessed changes in the magnitude or frequency of external disturbances (for example, droughts 47 ), which also play a key role in controlling global vegetation health; a comparison between vegetation resilience and contemporaneous changes in external disturbances would provide key context for the attribution of observed resilience changes to explicit drivers. 
Despite these caveats, our work represents the first empirical confirmation of previously proposed vegetation resilience metrics in terms of variance and AC1 and thus provides the basis for further investigations. Our study shows that the satellite-derived VOD data can be used to establish a global empirical manifestation of the FDT for vegetated ecosystems. Vegetation resilience, defined as the capacity to recover from external perturbations, can hence be approximated from the characteristics of natural internal variability in terms of AC1 and variance. On the basis of this correspondence, we identify a global loss in vegetation resilience over the course of the past decades, although the spatial pattern is heterogeneous and the inferred resilience changes depend on climate zones. The spatial pattern is complex for the full period for which reliable VOD data are available (1992–2017), with overall resilience gains in the tropical belts and losses in the higher northern and southern latitudes. From the 2000s onwards, however, we find globally almost coherent resilience loss; further work is required to constrain the causes of this loss and especially to investigate whether the observed resilience losses can be attributed to anthropogenic climate and land-use change. Our results establish a firm basis for a global, satellite-driven monitoring system of ecosystem resilience. Methods Data preparation We use two vegetation datasets in our analysis to provide a holistic view of vegetation response to shocks and stresses. (1) VOD at 0.25° spatial resolution; specifically, we employ the Ku-band and use daily values for the period 1992–2017 39 . Note that we do not use the entire VOD data record (1987–) as some pixels exhibit extreme discontinuities before 1992 (Extended Data Fig. 6 ). We posit that this is due to the change from Special Sensor Microwave/Imager satellite F08 to F11 in the VOD dataset 39 . While we observe these discontinuities only in the tropics, we choose to discard all data before 1992 for consistency; it should be noted, however, that our global-scale results are robust whether we use 1987 or 1992 as our first year of data (Extended Data Fig. 6 ). (2) NDVI (from AVHRR) at 1/12° spatial and 15-day temporal resolution for the period 1981–2015; specifically, we use GIMMSv3g 38 . We further use the moderate-resolution imaging spectroradiometer (MODIS) MCD12C1 land-cover database (2014 annual composite, resampled via the mode of land covers in each VOD/NDVI pixel) 42 to break our analyses into distinct land-cover types (for example, Extended Data Fig. 6 ). To limit the impact of anthropogenic land use on our results, we further use MODIS MCD12Q1 (500 m, annually 2001–2017) land-cover data to identify any pixels that were at any point during the period 2001–2017 subject to human land use (for example, urban, cropland). We then remove any NDVI/VOD pixels that had one or more anthropogenic land-cover pixels (at least one 500 m pixel) in at least one year between 2001 and 2017. This step helps to remove pixels that, for example, were once logged and then returned to grasslands; those pixels would not be classified as ‘anthropogenic’ for the entire period following the logging and thus might introduce spurious results. While this does not completely eliminate anthropogenic influence from our results (we do not have sufficient land-cover data before the MODIS sensing period), it conservatively removes all 0.25° (~25 × 25 km) regions where human use occurred. 
We thus cannot completely rule out the influence of human-driven land-cover change on our results at the global scale but have endeavoured to remove it to the furthest extent possible given data limitations. As a final robustness check, we have also used the ref. 48 global deforestation dataset to remove any pixels from our long-term trend data (Fig. 4 ) that suffered forest loss (Supplementary Figs. 6 and 7 ); as this dataset also includes non-anthropogenic forest loss—for example, due to natural fires—it serves as an even more conservative land-cover removal step. Removing these additional pixels does not substantially impact our reported long-term trend results or our inferred conclusions. Cloud cover and other data artefacts are removed from the NDVI data using an upward-smoothing approach to gap filling 49 . VOD data are resampled to a twice-monthly time step to match the temporal resolution of the NDVI data by taking the median of each time window; this step ensures that divergent results between the two vegetation datasets are not due to spatial or temporal sampling differences. Using these cleaned and evenly sampled time series, we then deseason and detrend the data using seasonal trend decomposition by loess (STL 50 , 51 , 52 ). We decompose the full-year signal using a period of 24 (one year at bi-monthly time sampling) and an adaptive loess filter. We use a value of 47 for the trend smoother (one point less than two years) and 25 for the low-pass filter (one point more than one year), according to the rules of thumb originally presented by ref. 50 (see code archive 53 for details). We then maintain the residual (deseasoned and detrended) time-series term for analysis. Note that the VOD dataset is a multi-satellite composite, with variable overlap between different input Ku-band datasets 39 . As multiple datasets are averaged in different configurations throughout the VOD period, there is the potential for changes in noise levels that could influence the computed AC1 and variance values if the underlying signal (for example, vegetation) changes on a slower timescale than the measurement noise. Stronger averaging associated with an increasing number of satellites would lead to step-wise increases in AC1 and step-wise decreases in variance. For the period we consider, we do not see step changes in AC1 or variance as would be expected if the noise level or character changed with the introduction or removal of a new satellite; indeed, we see consistent resilience loss during long periods of constant satellite configurations (for example, 2002–2009, Extended Data Fig. 6 ). Furthermore, there are no contemporaneous jumps in the variance, which would also be expected to change with shifts in data averaging. We posit that the changes in AC1 and variance that we observe are highly unlikely to be driven by data aggregation and are instead representative of a global change in vegetation resilience. Perturbation detection and recovery analysis We use two methods to detect perturbations in our residual time series: (1) a moving-average 54 and (2) a linear-fit approach 55 . For both methods, we use an 18-point (9 month) moving window over our residual time series and calculate either the simple mean difference between the first and second halves of our moving window (method 1) or a linear trend over the moving window (method 2). We then smooth these resultant derivative time series with a Savitzky–Golay filter (7 points, first order) to remove high-frequency noise 56 . 
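A minimal sketch of this preprocessing and windowed-difference detection (method 1) is shown below (Python, assuming the STL implementation in statsmodels and the smoother lengths quoted above; helper names are ours). It stops at the smoothed derivative series; the percentile thresholding that follows is described next in the text:

```python
import numpy as np
from scipy.signal import savgol_filter
from statsmodels.tsa.seasonal import STL

def residual_series(series):
    """Deseason and detrend a twice-monthly series (24 samples per year) with
    STL, using trend smoother 47 and low-pass filter 25 as quoted in the text."""
    return np.asarray(STL(series, period=24, trend=47, low_pass=25).fit().resid)

def smoothed_step_signal(resid, window=18):
    """Method (1): mean difference between the second and first halves of an
    18-point moving window, then Savitzky-Golay smoothing (7 points, order 1)."""
    half = window // 2
    diff = np.zeros(resid.size)
    for i in range(resid.size - window):
        diff[i + half] = resid[i + half:i + window].mean() - resid[i:i + half].mean()
    return savgol_filter(diff, window_length=7, polyorder=1)
```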
Finally, we isolate any derivative values above the 99th percentile and label consecutive time steps as individual disturbance periods. We then use the highest peak within each disturbance as the perturbation date. Note that the results of our analysis are nearly identical whether we use method (1) or method (2) to detect perturbations; thus, we present here only data based on method (1). In our tests, a comparable set of disturbances was found using 12-, 24-, 36- and 72-point moving windows, which resulted in similar spatial (for example, Fig. 2 ) and global (for example, Fig. 3 ) patterns; for simplicity, we present only results using the 18-point moving window here. As we use a percentile approach to delineate large perturbations, we will not always capture each perturbation for a given time series; our detected perturbations will be biased towards the largest excursion within each individual time series. We acknowledge that not all events will be equally represented in both the VOD and NDVI datasets; in the case where a much stronger response is engendered in one dataset than the other, the percentile threshold may not identify the same event in both time series. Furthermore, we will by construction detect some non-significant perturbations, in particular for the case where a given time series does not experience a strong disturbance. We thus impose the condition that the raw time series must descend more than 0.01 to be considered a valid perturbation. While we do not identify every perturbation over the entirety of both datasets, we generate a large and diverse set of recovery rates that are well distributed in space and time. To ensure that our estimated recovery rates represent a return to the previous state, and not a transition to a new vegetation regime, we further apply the condition that the five years of data before and after the disturbance must pass a two-sample Kolmogorov–Smirnov test ( P < 0.05). We choose five years as our baseline to minimize the impact of long-term (for example, decadal) changes in vegetation state while maintaining enough data on both sides of the transition for a robust comparison. For each detected time-series perturbation, we then find the local minimum of the residual time series with a two-month constraint to account for the fact that disturbances are often detected before the residual time series reaches its lowest point. We then take a period of five years after the local minimum and fit an exponential function, capturing both the exponent r and the coefficient of determination R 2 . To create the map for Fig. 2 , if there is more than one transition at a given pixel location, we use the average recovery rate of all transitions. For Fig. 3 , we maintain all recovery rates (for example, a single time series could contribute more than one recovery rate). We note that most locations studied have only one significant transition during the study period, and it is a relatively small number that have two or more. The computed transition points and recovery rates can be found in our data repository 53 . Resilience estimation Resilience is defined as the capacity to recover from external perturbations 11 , 12 . Quantitatively, it can be determined in terms of the recovery rate r after a perturbation to some value x 0 : $$x(t)\approx {x}_{0}{e}^{rt}$$ where x ( t ) is the state of the system at time t after the perturbation. If r is negative, the system will recover to its equilibrium state at rate ∣ r ∣ . 
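A minimal version of this recovery fit might look as follows (Python, using scipy's curve_fit; the initial guess is a hypothetical choice, and a real pipeline would guard against failed fits):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_recovery(resid_after_minimum, dt=1.0):
    """Fit x(t) ~ x0 * exp(r * t) to the residuals following the
    post-disturbance local minimum; return r and the fit's R^2."""
    y = np.asarray(resid_after_minimum, dtype=float)
    t = np.arange(y.size) * dt
    (x0, r), _ = curve_fit(lambda t, x0, r: x0 * np.exp(r * t),
                           t, y, p0=(y[0], -0.1), maxfev=10000)
    pred = x0 * np.exp(r * t)
    r_squared = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
    return r, r_squared
```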
The characteristic recovery time is given by ∣ r ∣ −1 . Note that for positive r , the initial perturbation would instead be amplified, indicating that the system is not resilient. Empirically, we estimate r for each perturbation in each residual NDVI and VOD time series as described in the previous section. The AC1, a measure of how strongly correlated neighbouring time spans of a time series are, has been suggested as a measure for resilience 1 , 23 , 24 , 25 , 57 and more generally as an early-warning indicator for forthcoming critical transitions 28 , 31 . Theoretically, this can be motivated from a linearization of the stochastic dynamics around a given equilibrium point x* . For the fluctuations \(\bar{x}=x-{x}^{* }\) $$\frac{{\mathrm{d}}\bar{x}}{{\mathrm{d}}t}=\kappa \bar{x}+\sigma \eta \,,$$ which defines an Ornstein–Uhlenbeck process with linear damping rate κ < 0 and white-noise forcing with standard deviation σ > 0. It can be shown that the variance \(\langle {\bar{x}}^{2}\rangle\) and lag- n autocorrelation α ( n ) of the stochastic process obtained from a discretization of the Ornstein–Uhlenbeck process into time steps Δ t are given by 58 $$\langle {\bar{x}}^{2}\rangle =\frac{{\sigma }^{2}}{1-{e}^{2\kappa {{\Delta }}t}}\approx -\frac{{\sigma }^{2}}{2\kappa {{\Delta }}t}$$ and $$\alpha (n)={e}^{n\kappa {{\Delta }}t}\,.$$ If the stability of an equilibrium state gradually decreases, κ will approach zero from below, and correspondingly, the variance \(\langle {\bar{x}}^{2}\rangle\) will diverge to positive infinity and the AC1 α (1) will increase towards + 1. These increases in the damping rate κ , as well as the variance of the fluctuations \(\langle {\bar{x}}^{2}\rangle\) and the AC1 α (1) can thus serve as precursor signals for a forthcoming critical transition and, in relative terms, as measures for stability or resilience changes. The theoretical estimates for the recovery rates shown in Fig. 2b for AC1 and in Fig. 2c for the variance are given in terms of the damping rate κ , obtained by inverting the preceding equations. For the variance, an estimate of the driving noise σ r is also needed, which we obtain from $$\frac{{\mathrm{d}}\bar{x}}{{\mathrm{d}}t}=r\bar{x}+{\sigma }_{r}\eta \,,$$ where we used the empirically estimated recovery rate r rather than the damping rate κ on the right-hand side. Practically, we obtain very similar theoretical expressions for the variance when computed using the σ κ obtained when putting κ instead of r into the preceding equation (Supplementary Fig. 4 ). For an empirical confirmation of the FDT, we thus have to show that for the observed vegetation data, an exponential relationship between the AC1 and the recovery rate r , as well as a power-law (1/ r ) relationship between the variance and the recovery rate r , hold. It is important to note that for the comparison between empirical recovery rates and AC1 or variance, we consider only time series that eventually return to their pre-disturbance state, implying that the residual time series under study are, apart from infrequent large perturbations, approximately stationary. Long-term trend estimation To better understand temporal changes in vegetation resilience, we calculate the AC1 and variance on moving windows (with a size of 3, 5 and 7 years) over each entire residual time series. Using these windowed AC1 and variance measurements, we calculate Kendall–Tau 59 statistics to check for increasing or decreasing trends. 
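A compact sketch of this rolling-window trend estimation (Python; the 120-point window corresponds to five years of twice-monthly samples, and the surrogate-based significance test anticipates the procedure described in the next paragraph, here with fewer surrogates than the 10,000 used in the study):

```python
import numpy as np
from scipy.stats import kendalltau

def rolling_ac1_var(resid, window=120):          # 5 years at 24 samples/year
    """AC1 and variance in rolling windows over a residual series."""
    ac1 = np.array([np.corrcoef(resid[i:i + window - 1],
                                resid[i + 1:i + window])[0, 1]
                    for i in range(resid.size - window)])
    var = np.array([resid[i:i + window].var()
                    for i in range(resid.size - window)])
    return ac1, var

def phase_surrogate(x, rng):
    """Phase-shuffled surrogate preserving the power spectrum, and hence the
    variance and autocorrelation function, of the original series."""
    f = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, f.size)
    phases[0] = 0.0                              # keep the mean component
    return np.fft.irfft(np.abs(f) * np.exp(1j * phases), n=x.size)

def ac1_trend_significance(resid, n_surr=1000, seed=0):
    """Kendall-Tau trend of the rolling AC1 (the variance is handled
    analogously), with a two-sided surrogate-based p value."""
    rng = np.random.default_rng(seed)
    ac1, _ = rolling_ac1_var(resid)
    tau, _ = kendalltau(np.arange(ac1.size), ac1)
    surr_taus = []
    for _ in range(n_surr):
        s_ac1, _ = rolling_ac1_var(phase_surrogate(resid, rng))
        surr_taus.append(kendalltau(np.arange(s_ac1.size), s_ac1)[0])
    p = float(np.mean(np.abs(np.array(surr_taus)) >= abs(tau)))
    return tau, p
```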
As our rolling-window data are by construction serially correlated, we test for statistical significance based on a set of 10,000 phase-shuffled surrogates, which preserve the variance and autocorrelation function of the original time series 31 , 32 , 33 . These phase surrogates are obtained by computing the Fourier transform of the original time series, uniformly randomly shuffling their phases and then applying an inverse Fourier transform to each of them. We then calculate the probability that our measured AC1 Kendall–Tau trends are significant using a threshold of P < 0.05. Finally, we discard six months of data at either end of each time series before calculating trends, as the variance and autocorrelation of the residual produced by the STL procedure are less reliable within one half of the length of the seasonal decomposition window. The python codes to replicate our trend estimation procedure can be found in our code repository 53 . Data availability The satellite data used in this study are publicly available 38 , 39 , 42 . The data used for Figs. 2 and 3 are available via Zenodo: . Code availability The Python codes used in this study are available via Zenodo: . | None | [] | [] | [] | SciNews | Biology | Taylor Smith et al, Empirical evidence for recent global shifts in vegetation resilience, Nature Climate Change (2022). DOI: 10.1038/s41558-022-01352-2 Journal information: Nature Climate Change | https://dx.doi.org/10.1038/s41558-022-01352-2 | https://phys.org/news/2022-04-resilience-ecosystems-space.html | A new study published in Nature Climate Change has found that vegetation resilience can be empirically monitored from space, allowing for the tracking of changes in ecosystems' ability to withstand and recover from damage. The research team, from the University of Potsdam, Potsdam Institute for Climate Impact Research, Technical University of Munich, and University of Exeter, used satellite data to estimate the variability of global vegetation and the speed of recovery after large losses. The study found that simple metrics can be used to estimate the resilience of ecosystems to large shocks, and that global vegetation has lost resilience over the last two decades, making it more vulnerable to events like wildfires, pests, and human disturbances. The researchers suggest that anthropogenic climate change, land-use change, and natural variability may be contributing factors to this decline, and that satellite data can play a crucial role in monitoring the health of vegetation and other ecosystems.
|
10.1038/s41467-021-21653-y | Automated next generation sequencing platform can accurately screen thousands for COVID-19 | A robotics platform designed by Toronto researchers to screen thousands of COVID-19 samples at once has the potential to revolutionize how labs track the spread of viruses and other pathogens, according to new findings. The study, out Wednesday in Nature Communications, found that the next-generation, ultra-high-throughput sequencing platform, called C19-SPAR-Seq, designed by researchers from the Lunenfeld-Tanenbaum Research Institute (LTRI) at Sinai Health, has a sensitivity greater than 95 percent in positive cases during peak onset. "Identifying positive samples quickly and accurately is critical in beating this pandemic," said Dr. Jeff Wrana, senior investigator at the LTRI and professor in the Department of Molecular Genetics at the University of Toronto. "With new and potentially dangerous variants now circulating, this is a platform that is scalable, automated and capable of analyzing thousands of COVID-19 patient samples in a single instrument run." Wrana and fellow LTRI senior investigator Dr. Laurence Pelletier, in collaboration with University of Toronto professor Dr. Ben Blencowe, credit a strong team of eager trainees who shifted from other areas of research to help develop and validate the platform, allowing the team to go from concept to published paper in under 12 months. "The co-operation of the Mount Sinai Hospital clinical diagnostic lab was the other key ingredient to our success," said Pelletier. "To date, the shared microbiology lab, headed by Dr. Tony Mazzulli, has provided access to thousands of samples." In late 2020, the team pivoted again to use the robotics platform to screen thousands of positive samples for variants by rapidly sequencing fingerprint regions of the viral genome to look for key mutations. "It has been an absolute pleasure to work with Dr. Jeff Wrana and his team at the LTRI," said Dr. Mazzulli, microbiologist-in-chief for Sinai Health and University Health Network (UHN). "His novel SPAR-Seq System is cutting-edge technology and his team's ability to sequence COVID-19 samples in real time has tremendous potential for impacting our understanding of the epidemiology and spread of novel mutants in the province." The platform is also cost-effective. The study notes it costs only about $8 USD per test when running thousands of samples at once, as the cost per sample decreases due to economies of scale. "It's extremely reliable and readily adaptable," said Javier Hernandez, a junior researcher in the Wrana lab who co-led the study with Drs. Marie-Ming Aynaud and Seda Barutcu. "The turnaround is approximately 24 hours. It's very simple as we've automated practically every step in the process. For me, it's been a very exciting thing to see my work make a difference." | Researchers at the Lunenfeld-Tanenbaum Research Institute (LTRI) at Sinai Health have developed a next-generation sequencing platform called C19-SPAR-Seq, which can screen thousands of COVID-19 samples at once with a sensitivity greater than 95% in positive cases. The platform, designed to identify positive samples quickly and accurately, is scalable, automated, and cost-effective, with a cost of around $8 USD per test when running thousands of samples. The team, led by Dr.
Jeff Wrana, was able to develop and validate the platform in under 12 months with the help of a strong team of trainees and collaboration with the Mount Sinai Hospital clinical diagnostic lab. The platform has the potential to revolutionize how labs track the spread of viruses and other pathogens, and has already been used to screen thousands of positive samples for variants by rapidly sequencing fingerprint regions of the viral genome. | None | Abstract Population scale sweeps of viral pathogens, such as SARS-CoV-2, require high intensity testing for effective management. Here, we describe “Systematic Parallel Analysis of RNA coupled to Sequencing for Covid-19 screening” (C19-SPAR-Seq), a multiplexed, scalable, readily automated platform for SARS-CoV-2 detection that is capable of analyzing tens of thousands of patient samples in a single run. To address strict requirements for control of assay parameters and output demanded by clinical diagnostics, we employ a control-based Precision-Recall and Receiver Operator Characteristics (coPR) analysis to assign run-specific quality control metrics. C19-SPAR-Seq coupled to coPR on a trial cohort of several hundred patients performs with a specificity of 100% and sensitivity of 91% on samples with low viral loads, and a sensitivity of >95% on high viral loads associated with disease onset and peak transmissibility. This study establishes the feasibility of employing C19-SPAR-Seq for the large-scale monitoring of SARS-CoV-2 and other pathogens. Introduction Viral pathogens, such as SARS-CoV-2, that incorporate large numbers of asymptomatic or mild symptom patients present unique challenges for public health agencies trying to manage both travel and local spread. Physical distancing is the current major strategy to suppress spread of the disease, but with enormous socio-economic costs. However, modeling and studies in isolated jurisdictions suggest that active population surveillance through systematic molecular diagnostics, combined with contact tracing and focused quarantining can significantly suppress disease spread 1 , 2 , 3 and has significantly impacted disease transmission rates, the number of infected people, and prevented saturation of the healthcare system 4 , 5 , 6 , 7 . However, reliable systems allowing for parallel testing of tens of thousands to hundreds of thousands of patients in larger urban environments have not yet been employed. Here we describe “COVID-19 screening using Systematic Parallel Analysis of RNA coupled to Sequencing” (C19-SPAR-Seq), which is a next generation sequencing (NGS)-based platform 8 for analyzing tens of thousands of COVID-19 patient samples in a single instrument run. To enable NGS-based diagnostics we employed large numbers of control samples embedded in each run coupled to control-based Precision-Recall and predictive Receiver Operator Characteristics (coPR) analysis that assigns run-specific thresholds and quality control metrics. C19-SPAR-Seq coupled to coPR on a trial cohort of over 600 patients performed with a specificity of 100% and sensitivity of 91% on samples with low viral loads versus >95% on samples with the higher viral loads associated with disease onset and peak transmissibility. Our study thus establishes the feasibility of employing C19-SPAR-Seq for the large-scale monitoring of SARS-CoV-2 and other pathogens. 
Results Multiplex detection of SARS-CoV-2 using C19-SPAR-Seq The current gold standard diagnostic for SARS-CoV-2 is Real-Time Quantitative Polymerase Chain Reaction (RT-qPCR), which is not readily adaptable to large-scale population testing 9 . To establish a population-scale testing platform we designed a SPAR-Seq multiplex primer mix v1 that targets RNA-dependent RNA polymerase ( RdRP ), Envelope ( E ), Nucleocapsid ( N ), and two regions of the Spike ( S ) gene that correspond to the receptor-binding domain (RBD) and the polybasic cleavage site (PBS) (Fig. 1a , Supplementary Table 1 and Supplementary Data 1 ). The latter two are SARS-CoV-2-specific regions that capture five key residues necessary for ACE2 receptor binding ( Srbd ) and the furin cleavage site ( Spbs ) that is critical for viral infectivity 10 , 11 . Thus, the RdRP-specific primers could produce an amplicon from SARS-CoV-1 that can be readily distinguished based on sequence analysis, while the Spike-specific primers, targeting the RBD and Polybasic site regions, would distinguish a SARS-CoV-2 infection. For quality control, we targeted Peptidylprolyl Isomerase B ( PPIB ). Current standard testing strategies for viral pathogens employ gene-specific primers in “all-in-one” qRT-PCR reactions that could in principle be adapted to incorporate barcodes into gene-specific primers. However, to allow for rapid adaptation to test for novel and multiple pathogens, and/or profiling host responses we used a generic oligo-dT and random hexamer primed reverse transcription step followed by multiplex PCR and barcoding in a rapid, readily automated format we call “COVID-19 screening using Systematic Parallel Analysis of RNA coupled to Sequencing” or C19-SPAR-Seq (Fig. 1b , Supplementary Table 1 and Supplementary Data 1 ). Although cost is often cited as a concern for NGS-based testing, our platform is cost effective with retail material costs ranging from USD ~$9 to $6 for 500 versus 10,000 sample batch sizes, respectively (Supplementary Data 2 ). Fig. 1: Application of C19-SPAR-Seq to detect SARS-CoV-2. a Schematic representation of the SARS-CoV-2 with the five regions targeted for multiplex C19-SPAR-Seq indicated: RdRP (purple), S receptor-binding domain ( Srbd ) (red), S polybasic cleavage site ( Spbs ) (light red), E (yellow), and N (orange). b Schematic of the C19-SPAR-Seq strategy for detecting SARS-CoV-2. cDNA is synthesized using reverse transcriptase (RT) from RNA extracted from clinical samples, subjected to multiplex PCR, then barcoded, pooled, and analyzed by next generation sequencing (NGS). c Analysis of archival NASOP swab eluents by C19-SPAR-Seq. A Proof-of-Concept (PoC) cohort ( n = 19) was analyzed by C19-SPAR-Seq and read numbers for each of the indicated amplicons are presented in a heatmap. Control samples (HEK293T, synthetic SARS-CoV-2 RNA) are represented in the left panel, while the right panel shows unsupervised 2D hierarchical clustering of results from negative (blue) and positive (red) patients. To assess C19-SPAR-Seq performance, we assembled a proof-of-concept (PoC) cohort of 19 archival Nasopharyngeal (NASOP) swab eluents from the Toronto University Health Network-Mount Sinai Hospital clinical diagnostics lab (Supplementary Data 3 ), 17 of which were positive for SARS-CoV-2.
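For readers who want to reproduce the style of analysis shown in Fig. 1c, the following is a minimal R sketch of unsupervised 2D hierarchical clustering of amplicon read counts. It runs on synthetic counts and uses the pheatmap package; the sample names, count distributions and package choice are illustrative assumptions, not the authors' pipeline.

```r
# Illustrative clustering of C19-SPAR-Seq amplicon counts on synthetic data.
# Rows are the six multiplex v1 amplicons, columns are samples.
library(pheatmap)

set.seed(1)
amplicons <- c("RdRP", "Srbd", "Spbs", "E", "N", "PPIB")
samples   <- paste0("S", sprintf("%02d", 1:19))

# Synthetic counts: the first 14 samples mimic positives (high viral reads),
# the rest mimic negatives; PPIB (host control) is present in every sample.
counts <- matrix(rnbinom(length(amplicons) * length(samples), mu = 20, size = 1),
                 nrow = length(amplicons),
                 dimnames = list(amplicons, samples))
counts[1:5, 1:14] <- counts[1:5, 1:14] + rnbinom(5 * 14, mu = 5000, size = 1)
counts["PPIB", ]  <- counts["PPIB", ] + 2000

# 2D hierarchical clustering of log10(reads + 1), as in the read-count heatmaps.
pheatmap(log10(counts + 1),
         clustering_distance_cols = "euclidean",
         clustering_method        = "complete",
         main = "Amplicon reads, log10(reads + 1)")
```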
Viral load in these archival samples was quantified using the clinically approved TaqMan-based SARS-CoV-2 RT-qPCR detection kit (‘BGI’, see the “Methods” section), which identified five SARS-CoV-2 low (Ct > 25), seven SARS-CoV-2 medium (Ct between 20 and 25), and five SARS-CoV-2 high (Ct < 20) patients (Supplementary Data 3 ). After confirming the efficiency of multiplex v1 primer pairs using a SARS-CoV-2 high sample (LTRI-18, Ct < 20; Supplementary Fig. 1 ), we performed C19-SPAR-Seq using HEK293T RNA as a negative control ( n = 2), and serial dilutions of synthetic SARS-CoV-2 RNA (Twist) as positive controls ( n = 5). Pooled sequence data was demultiplexed to individual samples prior to mapping to amplicon sequences. C19-SPAR-Seq was sensitive, detecting as little as 12.5 copies/μL of E, Srbd, and Spbs amplicons from Twist RNA (Fig. 1c , left panel). PPIB was detected in all patient samples, and all viral targets were robustly detected in high/medium load samples, with reduced detection of E and RdRP genes in low samples (Fig. 1c , right panel). Development of a C19-SPAR-Seq diagnostic platform to detect SARS-CoV-2 To establish a diagnostic platform, we performed C19-SPAR-Seq on a larger test development cohort of 24 COVID-19 positive and 88 negative archival patient samples ( n = 112; Supplementary Data 4 ). The SARS-CoV-2 RNA standard curve showed a linear relationship between total viral reads and estimated viral copy numbers (Supplementary Fig. 2a ). Negative patient samples had low viral reads (median of 4; range 0–55) compared to positive samples (median of 5899; range 2–253,956 corresponding to 18–705,960 amplicon reads per million reads per sample) (Fig. 2a ). C19-SPAR-Seq read counts tracked inversely with qRT-PCR Ct values for RdRP , E , and N genes quantified in the diagnostic lab using the Seegene Allplex™ assay (see the “Methods” section) (Fig. 2b ). Unsupervised clustering showed that the controls performed similarly to the PoC cohort (Fig. 2c ), as did the positive and negative patient samples, with two exceptions: clinical samples LTRI042 and LTRI050, which displayed background signal, and corresponded to samples with extreme Ct values in only one viral gene ( N gene, Ct > 38; Supplementary Data 4 ). ROC analysis using total viral reads (Fig. 2d ) showed excellent performance with an area under the ROC curve (AUC) of 0.969. PROC, the point on the ROC curve that minimizes the distance to (0,1) 12 , defined a total viral read cut-off of 116 for calling a sample positive and yielded a sensitivity of 92% (95% confidence interval; CI of 73–99%), specificity of 100% (CI: 95–100%), and overall accuracy of 98% (Fig. 2d ). Youden parameters, which maximize sensitivity and specificity, defined a viral cutoff of 26 and yielded better sensitivity (96%), but lower specificity (95%) and accuracy (96%). Other than the two positive samples mentioned above that possessed extremely low levels of viral RNA (Ct 38 and 40), all other positive samples were above the C19-SPAR-Seq viral threshold limit, indicating that the lower limit of sensitivity in the CI is dictated by these samples that lie at the border of the detection limit of the diagnostic lab test. Thus, C19-SPAR-Seq robustly detects SARS-CoV-2 transcripts, correlates with Ct values from clinical diagnostic tests, and displays excellent performance in distinguishing positive and negative samples. Fig. 2: Performance of C19-SPAR-Seq in detecting SARS-CoV-2.
a C19-SPAR-Seq of the test development cohort was performed and total viral reads+1 (log 10 ) ( Y -axis) are plotted for negative ( n = 88, black) and positive ( n = 24, red) patient samples, HEK293T RNA ( n = 6, blue), and the indicated serial dilutions of synthetic SARS-CoV-2 RNA ( n = 2–6, orange). For each group, the median, lower and upper confidence limits for 95% of the median are plotted. Whiskers are minimum and maximum values. Two-tailed unpaired t -test of negative versus positive samples (**** p = 1.67 × 10 −8 ). b C19-SPAR-Seq reads for the indicated gene in each patient sample were compared to Ct values obtained by the clinical diagnostics lab using the ‘Seegene’ Allplex assay. c Heatmap of C19-SPAR-Seq results. Read counts for the indicated target amplicons in control samples ( n = 16; left) and patient samples ( n = 112; right) are plotted according to the scale, and sample types labeled as indicated. Samples are arranged by hierarchical clustering with euclidean distance indicated by the dendrogram on the top, which readily distinguishes positive from negative samples. d Performance of C19-SPAR-Seq. ROC analysis on patient samples was performed using clinical diagnostic results (Seegene Allplex qRT-PCR assay, Supplementary Data 4 ) and total viral reads for patient samples ( n = 112). AUC (area under the curve) scores are indicated on the graph (left), with statistics at the optimal cutoff as indicated (right). An internal control-based classifier to assess patient samples Robust application of C19-SPAR-Seq as a diagnostic tool requires assigning thresholds for both viral RNA detection and host RNA for filtering poor-quality samples. In qRT-PCR diagnostics, external validation studies and rigorous standard operating procedures establish pre-defined cutoffs for sample quality and positive versus negative assignment (Seegene and BGI; see the “Methods” section). However, in scalable, massively parallel, multiplexed NGS assays, variability in sample numbers and flow cell loading can create run-to-run variations in read numbers, while index-mismatching 13 , as well as trace cross-contamination events, can create technical noise that is challenging to control. Furthermore, external validation strategies create a laborious path to adapt and test new multiplex designs to SARS-CoV-2, additional respiratory pathogens, or host responses. We therefore exploited the throughput of C19-SPAR-Seq to include in every run a training set of large numbers of controls that can be used to define cutoffs tailored to each C19-SPAR-Seq run (Fig. 3a ). To define quality metrics, we computed precision-recall (PR) curves for classifying control samples as either negative (H 2 O blanks), or positive for any anticipated amplicon (HEK293T for PPIB or synthetic SARS-CoV-2 RNA for viral amplicons) and calculated the highest F1 score, which is the harmonic mean of PR and a common measure of classifier accuracy (Fig. 3b ). When mapped onto a ROC curve this corresponded to the region closest to perfect sensitivity and specificity (0, 1) (Supplementary Fig. 2b ). To define the threshold for identifying SARS-CoV-2-positive cases, we next analyzed the embedded standard curve of synthetic SARS-CoV-2 RNA. This displayed a linear relationship over four orders of magnitude that extended to lower limits of detection indistinguishable from background reads from HEK293T cells (Fig. 2a and Supplementary Fig.
2a ), thus allowing us to identify the viral read count in each C19-SPAR-Seq run that most accurately distinguishes positive from negative (Fig. 3a ). To identify this threshold, we computed PROC01, which optimizes negative predictive value (NPV) and positive predictive value (PPV) 12 , and defined a point (88 viral reads) close to perfect PR (Fig. 3c ) and sensitivity and specificity on the ROC curve (Supplementary Fig. 2c ). Importantly, these methods control for run-specific variables by employing training sets that are embedded in every C19-SPAR-Seq run. Fig. 3: Performance of C19-SPAR-Seq in detecting SARS-CoV-2 using control-based classifier. a Schematic of the control-based cut-off procedure for RNA quality and viral threshold by coPR analysis. b Thresholding sample quality. coPR analysis on control samples: PRC of control samples for accurate detection of mapped reads are plotted. The optimal precision and recall read cut-off ( P = 110) associated with the highest F1 score (0.97), and the AUC (area under the curve), are indicated in the PR plot. c Threshold for classification of positives in the test cohort. The optimum cut-off for the viral threshold is calculated by PROC01 using clinical diagnosis and total viral reads and plotted on the precision-recall curve. d Threshold assignments for sample quality and classification. Total viral reads +1 ( Y -axis) are plotted against PPIB reads +1 ( X -axis) for positive (red) and negative (blue) patient samples. coPR-based RNA-QC filter and viral read filter are shown as indicated. Assay statistics using coPR thresholding are listed (right). We next mapped the control-based cutoffs onto our patient SPAR-Seq data (Fig. 3d ). This showed 15 of these archival samples had low PPIB counts that may be due to lost RNA integrity upon repeated freeze–thaw cycles (Fig. 3d and Supplementary Data 4 ), a variability we also observed in the PoC cohort (Fig. 1c ). Of note, C19-SPAR-Seq performance was not affected by filtering poor quality samples (AUC = 0.970; Supplementary Fig. 2d ). Furthermore, PROC01 thresholding of viral reads identified 22/24 positives with no false positives (Fig. 3d ). This yielded an overall test performance of 92% sensitivity, 100% specificity, and 98% accuracy (Fig. 3d and Supplementary Tables 2 , 3 ). This is similar to the observed performance of C19-SPAR-Seq on clinical samples quantified by ROC analysis (Fig. 2d and Supplementary Fig. 2d , respectively). Thus, an extensive array of internal reference samples is effective as an embedded training set for implementing a control-based PR/PROC classifier (coPR) that is tailored to each C19-SPAR-Seq run.
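The control-based F1 thresholding lends itself to a compact illustration. The R sketch below assumes only a vector of total mapped reads for the embedded controls and their expected labels (1 for HEK293T/synthetic RNA controls, 0 for H2O blanks), scans all candidate cutoffs, and keeps the one with the highest F1 score; it is a schematic re-implementation of the coPR idea, not the published scripts referenced in the Methods.

```r
# Hedged sketch of the control-based F1 cutoff: 'reads' are total mapped reads
# per control, 'truth' marks controls expected to amplify (1) vs. H2O blanks (0).
set.seed(2)
reads <- c(rpois(20, 5), rpois(20, 2000))   # synthetic blanks vs. real controls
truth <- rep(c(0, 1), each = 20)

f1_at <- function(cut) {
  pred <- as.integer(reads >= cut)
  tp <- sum(pred == 1 & truth == 1)
  fp <- sum(pred == 1 & truth == 0)
  fn <- sum(pred == 0 & truth == 1)
  if (tp == 0) return(0)
  prec <- tp / (tp + fp)
  rec  <- tp / (tp + fn)
  2 * prec * rec / (prec + rec)             # F1 = harmonic mean of P and R
}

cuts <- sort(unique(reads))
qc_threshold <- cuts[which.max(sapply(cuts, f1_at))]
qc_threshold                                # run-specific RNA-QC read cutoff
```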
The cohort also contained 289 negative samples collected prior to Ontario’s first confirmed COVID-19-positive case 15 on January 20, 2020, and 1 negative sample collected in May 2020 (Supplementary Data 5 ), and included broncho-alveolar lavages (BALs) and NASOP swabs. Surprisingly, the detection of human RNA dropped substantially to a median of 29 (range 0–41,874), compared to 15,058 (range 2–170,870) in the original test cohort. coPR filtering (Supplementary Fig. 3b ) marked 50% of samples as inconclusive compared to 13% in the test cohort (Supplementary Fig. 3c ), despite a similar distribution of raw reads per sample (Supplementary Fig. 3d ), while mapping rates in the PoC, test and pilot cohorts progressively declined to as low as 0.1% (Supplementary Fig. 3e ). To understand this collapse, we analyzed unmapped reads and found that >90% were consumed by non-specific amplification products (NSAs; Supplementary Fig. 4a ) that comprised complex chimeric combinations of many viral and human primers (Supplementary Fig. 4a, b ). For example, RdRP and PPIB contributed to 4 of the top 5 NSAs (NSA1–4), and 2 had a spurious sequence (NSA4, 5). Indeed, analysis of C19-SPAR-Seq PoC, test and pilot libraries using a Bioanalyzer showed that as cohort size and number of negatives increased, NSAs were more apparent, and dominated the pilot library (Supplementary Fig. 4c ). This suggests that NSAs, enriched in negative samples (3.7-fold increase in the pilot cohort), clog the NGS pipeline as sample numbers rise (Supplementary Table 4 ). This has serious implications for deploying an NGS platform in a population-scale COVID-19 surveillance strategy and highlights the importance of using large-scale cohorts during the development of multiplex testing platforms. Analyzing an extended cohort using an optimized multiplex panel v2.0 SARS-CoV-2 RNA concentration spans a large dynamic range, such that spike-in mutant amplicons, which have been suggested to improve performance of NGS-based strategies 16 , might interfere with detection of COVID-19-positive cases with low viral reads. Therefore, we instead used our NSA data to create multiplex panel v2.0 (see the “Methods” section) that removed primers yielding NSAs by targeting a distinct region of RdRP , removing E and N genes, and switching to primers that amplify intron spanning regions of the ACTB and ACTG genes (Supplementary Table 1 , Supplementary Data 1 and Supplementary Fig. 1 ). We extended the pilot cohort to 663 samples that included 98 confirmed positives and performed C19-SPAR-Seq, which showed targeted amplicons were the predominant product generated by multiplex panel v2.0 (Supplementary Fig. 5a ), and mapping percentages were restored to test cohort levels (Supplementary Fig. 5b ). Total viral read distributions for multiplex panel v2.0 showed good separation in clinically positive samples (Fig. 4a and Supplementary Fig. 5c ), while applying coPR thresholding (Supplementary Fig. 5d ) identified 121 samples as inconclusive (Fig. 4a ), all of which were older, pre-COVID19 material. Of these, 112 were BALs (40% of all BALs), 1 was a bronchial wash (BMSH), and only 8 were NASOPs (1.8% of all NASOPs) (Supplementary Data 6 ). Furthermore, analysis of 10 BAL samples below the QC threshold revealed little or no RNA, contrasting with BALs with moderate levels of ACTB/G transcripts (representative examples in Supplementary Fig. 6a ), and BAL ACTB/G read distributions were much lower than NASOPs (Supplementary Fig. 6b ).
This suggests that archival BALs suffered from substantive sample degradation and also highlights how coPR-based thresholding successfully identifies poor quality samples and readily adapts to the use of distinct primer sets. Fig. 4: C19-SPAR-Seq of a large patient cohort. a C19-SPAR-Seq on an extended patient cohort. coPR thresholds for sample quality and classification of a 663 patient cohort of negative (blue) and positive (red) specimens are shown as in Fig. 2a . Performance metrics with 95% confidence intervals for sample classification according to coPR thresholding are shown in the table. NA not applicable. b Heatmap of C19-SPAR-Seq results. Read counts for the indicated target amplicons in the filtered set of samples ( n = 542) are plotted according to the scale, and sample types labeled as indicated. Samples are arranged by hierarchical clustering with euclidean distance indicated by the dendrogram on the right. c Scatter plot of total viral reads+1 (left Y -axis, blue) versus Ct values of positive samples ( n = 98, BGI) ( X -axis). C19-SPAR-Seq sensitivity at the indicated Ct values is overlaid (right Y -axis, red). Gray dashed lines indicate average copies/μL (c/μL). d ROC curve analysis. ROC curves were processed on filtered samples ( n = 542). AUC scores are indicated for filtered samples (blue; left) with corresponding performance statistics for the optimal cut-off indicated below. Next, we analyzed viral reads, which had a broad range in positive samples (median = 680.5 reads per sample, range 0–200,850; Fig. 4a and Supplementary Fig. 5c ). Two-dimensional clustering showed background SARS-CoV-2 products in negative samples were low to undetectable, and ACTB typically yielded higher reads than ACTG , likely reflecting their differential expression (Fig. 4b ). Positive samples were generally well separated, although some distinct clusters with lower SARS-CoV-2 reads were apparent (Fig. 4b and Supplementary Fig. 5e ). Indeed, total read distributions in positive samples displayed a biphasic distribution (Supplementary Fig. 5e ), similar to observations made from RT-qPCR analyses of ~4000 positive patients 17 . Since the early rapid increase in SARS-CoV-2 viral load at symptom onset is followed by a long tail of low viral load during recovery 18 , 19 , this biphasic distribution could reflect patients in distinct phases of the disease. We also assessed viral amplicon sequences, which matched the SARS-CoV-2 reference (MN908947.3 20 ), and found no variants (Supplementary Fig. 5f ). Since neutralizing antibodies are generally thought to target the critical region of the RBD analyzed here 15 , these results suggest the emergence of variant strains that might bypass acquired immunity is not a major feature of SARS-CoV-2. In addition, this supports the notion that biologic therapies targeting the RBD may show broad activity in the population. We next compared performance of multiplex panel v2.0 to v1.0 using the embedded controls, which showed similar performance (AUC = 0.90, Supplementary Fig. 5g versus 0.92, Supplementary Fig. 2c , respectively), with coPR yielding an optimal read cutoff of >16 total viral reads (Supplementary Fig. 5f ) that corresponded to a technical sensitivity of 3 viral copies/μL (Supplementary Fig. 6c ). coPR thus identified 82 positive samples (Fig. 4a and Supplementary Data 6 ), all of which were BGI-confirmed cases, to give an overall sensitivity of 84%, specificity of 100%, and accuracy of 97% (Supplementary Table 5 and Fig. 4a ).
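As a quick check of these headline numbers, the R sketch below computes sensitivity and specificity with exact (Clopper–Pearson) 95% confidence intervals from a 2 × 2 confusion matrix. The true-positive and false-negative counts follow the extended cohort (82 of 98 positives called); the 444 negatives are simply the 542 quality-filtered samples minus the 98 positives, with no false positives, and are an assumption for illustration.

```r
# Sensitivity/specificity with exact 95% CIs from a 2x2 confusion matrix.
# tp/fn follow the extended cohort (82/98 positives called); tn is derived as
# 542 quality-filtered samples minus 98 positives, with fp = 0 (assumption).
tp <- 82; fn <- 16; tn <- 444; fp <- 0

ci <- function(x, n) binom.test(x, n)$conf.int   # Clopper-Pearson interval

sens <- tp / (tp + fn)
spec <- tn / (tn + fp)
acc  <- (tp + tn) / (tp + tn + fp + fn)

cat(sprintf("Sensitivity %.1f%% (95%% CI %.1f-%.1f%%)\n",
            100 * sens, 100 * ci(tp, tp + fn)[1], 100 * ci(tp, tp + fn)[2]))
cat(sprintf("Specificity %.1f%% (95%% CI %.1f-%.1f%%)\n",
            100 * spec, 100 * ci(tn, tn + fp)[1], 100 * ci(tn, tn + fp)[2]))
cat(sprintf("Accuracy    %.1f%%\n", 100 * acc))
```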
Importantly, total viral reads tracked with BGI Ct values (Fig. 4c ), and for samples with Ct < 35 (corresponding to ~12 viral copies/μL of specimen), sensitivity was similar to the test cohort at 91%. However, for samples with Ct between 35 and 37 (4–12 viral copies/μL) sensitivity dropped markedly to 44% (Supplementary Table 5 and Fig. 4a ), while at higher viral loads (Ct = 25 or ~8400 viral copies/μL) sensitivity rose to 100% (Fig. 4c ). ROC analysis of actual C19-SPAR-Seq performance yielded an AUC of 0.96, sensitivity of 87% and specificity of 100%, similar to coPR (Fig. 4d ), while individual amplicons each underperformed total viral reads (AUC: 0.85–0.94; Supplementary Fig. 6d ). Our cohort was biased toward samples with very low to low viral loads, which represent a small portion of the COVID-19 population 17 . This bias could lead to an underestimate of the sensitivity of C19-SPAR-Seq in the context of a large-scale population, so we mapped our sensitivity data at distinct viral loads onto the population distribution of viral loads obtained from ~4000 positive patients 17 . This showed a projected C19-SPAR-Seq sensitivity of ~97% for patients displaying >10,000 viral copies/mL (Supplementary Fig. 6e ), which encompasses ~90% of the positive population. Altogether, these results demonstrate that in high patient sample loads comprised of predominantly negative samples, C19-SPAR-Seq using coPR displays 100% specificity and >95% sensitivity at viral loads typically observed in populations. Discussion Systematic population-scale testing has been identified as an important tool in managing pandemics such as SARS-CoV-2, where large numbers of infected individuals display mild or no symptoms yet transmit disease. The scalable throughput of C19-SPAR-Seq combined with its excellent sensitivity and specificity at reasonable cost make it well-suited for this role. Data generated by large-scale routine testing of local and larger communities with different interaction levels would provide valuable epidemiologic information on mechanisms of viral transmission, particularly when coupled to multiplex panels targeting regions of sequence variance currently in development. Indeed, while we detected no variants in our positive samples collected in the Spring of 2020, the S-RBD and S-PBS amplicons will detect the newly emergent N501Y and P681H variants 21 , 22 . In addition, the C19-SPAR-Seq platform can be readily adapted to incorporate panels tracking multiple pathogens, as well as host responses. C19-SPAR-Seq quantitation would also facilitate real-time tracking of viral load dynamics in populations that may be associated with COVID-19 expansion or resolution phases 18 . Although C19-SPAR-Seq is dependent on centralized regional facilities, it is readily coupled to saliva-based, at-home collection that exploits extensive transport infrastructure and industrialized sample processing to enable frequent widespread testing. Methods Samples collection Patient samples (Supplementary Data 3 – 6 ) were obtained from the Department of Microbiology at Mount Sinai Hospital. Patient samples used in this study were approved by the Mount Sinai Hospital (MSH) Research Ethics Board (REB): MSH REB Study #20-0078-E ‘Use of known COVID-19 status tissue samples for development and validation of novel detection methodologies’. The patient samples were de-identified prior to transfer from the Mount Sinai Hospital Microbiology Department to our research staff.
The samples were excess to clinical need and considered residual samples which do not require informed consent for the secondary use of the de-identified biological materials utilized in this study. Patient samples were obtained as part of routine diagnostic testing. Total RNA extraction A step-by-step protocol describing the patient RNA extraction protocol can be found at Protocol Exchange 23 . Total RNA was extracted using the Total RNA extraction kit (Norgen Biotek, Cat. #7200) for the samples in Supplementary Data 3 , following the manufacturer’s guidelines. For all other samples (Supplementary Data 4 – 6 ), total RNA was purified in 96-well plates using RNAclean XP beads (Beckman, A66514) and a customized protocol. Briefly, 75.25 μL of patient swabs in transfer buffer were mixed with 14.5 μL of 10× SDS lysis buffer (1% SDS, 10 mM EDTA), 48 μL of 6 M GuHCl, and 7.25 μL proteinase K (20 mg/mL, ThermoFisher, 4333793), incubated at room temperature for 10 min and heated at 65 °C for 10 min prior to the addition of 145 μL of beads. Beads were washed twice in 70% ethanol using a magnetic stand and then RNA was eluted into 30 μL of the Resuspension buffer supplied with the kit. RNA quality was assessed using a Bioanalyzer (5200 Agilent Fragment Analyzer). HEK293T RNA was extracted using the Total RNA extraction kit (Qiagen). Synthetic Twist SARS-CoV-2 RNA (Twist Bioscience #102024-MN908947.3) was used as positive control. Reverse transcription (RT) A step-by-step protocol describing the reverse transcription protocol can be found at Protocol Exchange 23 . Total RNA was reverse transcribed using SuperScript™ III Reverse Transcriptase (Invitrogen) in 5× First-Strand Buffer containing DTT, a custom mix of Oligo-dT (Sigma) and Hexamer random primers (Sigma), dNTPs (Genedirex), and Ribolock RNase inhibitor (ThermoScientific). We followed the manufacturer’s protocol. Each reaction included: 0.5 μL Oligo-dT, 0.5 μL hexamers, 4 μL purified Total RNA, 1 μL dNTP (2.5 mM each dATP, dGTP, dCTP and dTTP), quantum satis ( qs ) 13 μL RNase/DNase free water. Samples were incubated at 65 °C for 5 min, and then placed on ice for at least 1 min. The following was added to each reaction: 4 μL 5× First-Strand Buffer, 1 μL 0.1 M DTT, 1 μL Ribolock RNase Inhibitor, 1 μL of SuperScript™ III RT (200 units/μL), and then mixed by gently pipetting. Samples were incubated at 25 °C for 5 min, 50 °C for 60 min, 70 °C for 15 min and then stored at 4 °C. TaqMan-based RT-qPCR detection A Real-Time Fluorescent RT-PCR kit from ‘BGI’ was used according to the manufacturer’s instructions (Cat no. MFG030010, BGI Genomics Co. Ltd. Shenzhen, China). Experiments were carried out in a 10 μL reaction volume in 384-well plates, using 3 μL of sample (LTRI patient samples or Twist RNA), and were analyzed using a Bio-Rad CFX384 detection system (Supplementary Data 3 , 5 , 6 ). Real-time Fluorescent RT-PCR results from the ‘Seegene’ assay were provided by the Department of Microbiology diagnostic lab at Mount Sinai Hospital (Supplementary Data 4 – 6 ) (Allplex™ 2019-nCoV Assay, version 2.0, Cat no. RP10250X/RP10252W, Seegene). C19-SPAR-Seq primer design and optimization Optimized multiplex PCR primers for SARS-CoV-2 ( N , S , E and RdRP ) and human genes ( PPIB and ACTB/G ) were designed using the SPAR-Seq pipeline 8 , with amplicon size >100 bases (see Supplementary Table 1 and Supplementary Data 1 ). For the S gene, two regions were monitored, the S receptor-binding domain ( Srbd ), and S polybasic cleavage site ( Spbs ).
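Because the RT step above is run at 96-well scale, a small helper for scaling the per-reaction volumes into a master mix can be handy. The R sketch below encodes the per-reaction volumes quoted above, excluding the template RNA and water that are added per well; the 10% pipetting overage is an assumption, not part of the protocol.

```r
# Helper to scale the per-reaction RT volumes quoted above into a master mix
# for n samples; template RNA (4 uL) and water are added per well and are
# excluded here. The 10% overage is an assumption, not part of the protocol.
rt_master_mix <- function(n, overage = 1.10) {
  per_rxn <- c(oligo_dT        = 0.5,   # uL per reaction
               hexamers        = 0.5,
               dNTP_2.5mM      = 1.0,
               first_strand_5x = 4.0,
               DTT_0.1M        = 1.0,
               RNase_inhibitor = 1.0,
               SuperScript_III = 1.0)
  round(per_rxn * n * overage, 1)       # uL of each component to prepare
}

rt_master_mix(96)   # volumes for a full 96-well plate
```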
The Universal adapter sequences used for sequencing were F: 5′-acactctttccctacacgacgctcttccgatct and R: 5′-gtgactggagttcagacgtgtgctcttccgatct. Primers were optimized to avoid primer–dimer and non-specific multiplex amplification. To assess the primers’ sensitivity and specificity, we performed qPCR (SYBR green master mix, BioApplied) on cDNA prepared from patient samples. Each primer was used at 0.1 μM in qPCR reactions run on 384-well plates using a Bio-Rad CFX384 detection system. The thermal cycling conditions were as follows: one cycle at 95 °C for 2 min, and then 40 cycles of 95 °C for 15 s, 60 °C for 15 s, 72 °C for 20 s, followed by a final melting curve step. Multiplexing PCR A step-by-step protocol describing the multiplex PCR protocol can be found at Protocol Exchange 23 . The multiplex PCR reaction was carried out using Phusion polymerase (ThermoFisher). The manufacturer’s recommended protocol was followed with the following primer concentrations: all primers ( N , Spbs , Srbd , E , RdRP , and PPIB ) were at 0.1 μM for the PoC cohort (Supplementary Data 3 ); SARS-CoV-2 primers ( N , Spbs , Srbd , E and RdRP ) were at 0.05 μM and the PPIB primer was at 0.1 μM for the test and pilot cohorts (Supplementary Data 4 and 5 ); all primers ( Spbs , Srbd , RdRP and ACTB/G ) were at 0.05 μM for the extended cohort (Supplementary Data 6 ). For each reaction: 5 μL 5× Phusion buffer, 0.5 μL dNTP (2.5 mM each dATP, dGTP, dCTP, and dTTP), 0.25 μL of each human primer (10 μM), 0.125 μL of each SARS-CoV-2 primer (10 μM), 2 μL of cDNA, 0.25 μL Phusion Hot Start polymerase, qs 25 μL RNase/DNase free water. The thermal cycling conditions were as follows: for the PoC and extended cohorts (Supplementary Data 3 , 6 ), one cycle at 98 °C for 2 min, then 30 cycles of 98 °C for 15 s, 60 °C for 15 s, 72 °C for 20 s, and a final extension step at 72 °C for 5 min, then storage at 4 °C; for the test and pilot cohorts (Supplementary Data 4 and 5 ), one cycle at 98 °C for 2 min, then 35 cycles of 98 °C for 15 s, 60 °C for 15 s, 72 °C for 20 s, and a final extension step at 72 °C for 5 min, then storage at 4 °C. Barcoding PCR A step-by-step protocol describing the barcoding PCR protocol can be found at Protocol Exchange 23 . For multiplex barcode sequencing, dual-index barcodes were used 8 . The second PCR reaction, on the multiplex PCR product, was performed using Phusion polymerase (ThermoFisher). For each reaction: 4 μL 5× Phusion buffer, 0.4 μL dNTP (2.5 mM each dATP, dGTP, dCTP, and dTTP), 2 μL Barcoding primers F + R (pre-mix), 4 μL of multiplex PCR reaction, 0.2 μL Phusion polymerase, qs 20 μL RNase/DNase-free water. The thermal cycling conditions were as follows: one cycle at 98 °C for 30 s, and 15 cycles of 98 °C for 10 s, 65 °C for 30 s, 72 °C for 30 s, and a final extension step at 72 °C for 5 min, then storage at 4 °C. Library preparation and sequencing A step-by-step protocol describing the library preparation and sequencing protocol can be found at Protocol Exchange 23 . For all libraries, each sample was pooled (7 μL/sample) and library PCR products were purified with SPRIselect beads (A66514, Beckman Coulter). The PoC, test, and pilot cohorts were purified at a ratio of 0.8:1 (beads:library), and the extended cohort at 1:1 (beads:library) (Beckman Coulter). Due to NSA products in the fragment analyzer profile (Supplementary Fig.
3c ) in the test cohort and pilot cohort, we performed size selection purification (220–350 bp) using the Pippin Prep system (Pippin HT, Sage Science). Library quality was assessed with the 5200 Agilent Fragment Analyzer (ThermoFisher) and Qubit 2.0 Fluorometer (ThermoFisher). All libraries were sequenced with MiSeq or NextSeq 500 (Illumina) using 75 bp paired-end sequencing. COVID-19 (C19-)SPAR-Seq platform A step-by-step protocol describing the COVID-19 (C19-)SPAR-Seq platform protocol can be found at Protocol Exchange 23 . Our Systematic Parallel Analysis of Endogenous RNA Regulation Coupled to Barcode Sequencing (SPAR-Seq) system 8 was modified to simultaneously monitor COVID-19 viral targets and additional controls by multiplex PCR assays. For barcode sequencing, unique, dual-index C19-SPAR-Seq barcodes were used. Unique reverse 8-nucleotide barcodes were used for each sample, while forward 8-nucleotide barcodes were used to mark each half (48) of the samples in a 96-well plate to provide additional redundancy. These two sets of barcodes were incorporated into forward and reverse primers, respectively, after the universal adaptor sequences and were added to the amplicons in the second PCR reaction. The C19-SPAR-Seq analysis pipeline with the algorithms used is explained in detail in Supplementary Fig. 7 with additional analytical tools described in Supplementary Fig. 8 and below in the “Methods” sections. Computational requirements for the demultiplexing step are 32 GB RAM and a minimum 1 GB network infrastructure, with a Linux operating system. Demultiplexing and mapping Illumina MiSeq sequencing data was demultiplexed based on perfect matches to unique combinations of the forward and reverse 8 nucleotide barcodes. Full-length forward and reverse reads were separately aligned to dedicated libraries of expected amplicon sequences using bowtie 24 with parameters --best -v 3 -k 1 -m 1. Read counts per amplicon were represented as reads per million or absolute read counts. The scripts for these steps are available at 25 . Filtering of low-input samples To remove samples with low amplified product, likely reflecting low input due to inefficient sample collection or degradation, we computed, before attempting classification, precision-recall curves for classifying control samples into ‘low amplification’ and ‘high amplification’ based on reads mapped to RNA amplicons, ignoring mapping to genomic sequence where applicable. The former group comprised all controls in which individual steps were omitted (H2O controls) and the latter comprised HEK293T as well as synthetic SARS-CoV-2 RNA controls. For each of the PoC, test, pilot, and extended runs, we obtained the total mapped read threshold (including reads mapping to both human and viral amplicons) associated with the highest F1 score, representing the point with the optimal balance of precision and recall. Samples with reads lower than this threshold were removed from subsequent steps. Scripts for this step are available at 26 . SARS-CoV2-positive sample classification To assign positive and negative samples, we used negative (H2O and HEK293T) and positive (synthetic SARS-CoV-2 RNA dilutions) internal controls for each run and calculated optimum cut-offs for viral reads (total reads mapping to all three viral amplicons) by PROC, which defines the threshold for optimum positive predictive value (PPV) and negative predictive value (NPV) for diagnostic tests.
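A schematic R version of this control-based thresholding and the resulting three-way call is shown below. The cutoff finder uses the distance-to-(0,1) ROC criterion described earlier for PROC (the published PROC01 optimizes PPV/NPV, so this is an approximation), and the classification function implements the labelling rule spelled out in the next sentence; the example thresholds (88 viral reads, 110 mapped reads) are of the magnitude reported for the test cohort run.

```r
# Hedged sketch: run-specific viral-read cutoff from the embedded controls,
# then a three-way sample call. 'reads'/'truth' describe control samples;
# 'viral'/'human' are per-sample read totals for patient samples.
proc_cutoff <- function(reads, truth) {
  cuts <- sort(unique(reads))
  d <- sapply(cuts, function(k) {
    sens <- mean(reads[truth == 1] >= k)   # true positive rate at cutoff k
    spec <- mean(reads[truth == 0] <  k)   # true negative rate at cutoff k
    sqrt((1 - sens)^2 + (1 - spec)^2)      # distance to perfect corner (0, 1)
  })
  cuts[which.min(d)]
}

call_sample <- function(viral, human, viral_cut, qc_cut) {
  if (viral >= viral_cut) "positive"
  else if (human >= qc_cut) "negative"
  else "inconclusive"
}

# Thresholds of the magnitude reported for the test cohort run.
call_sample(viral = 5899, human = 15058, viral_cut = 88, qc_cut = 110)  # "positive"
call_sample(viral = 4,    human = 15058, viral_cut = 88, qc_cut = 110)  # "negative"
call_sample(viral = 4,    human = 29,    viral_cut = 88, qc_cut = 110)  # "inconclusive"
```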
Thus, a sample was labeled positive if it had viral reads above the viral read threshold; negative if it had viral reads below the viral read threshold and human reads above the mapped read threshold; and inconclusive if it had both viral and human reads below the respective thresholds. Sample classification by heatmap clustering Heatmap and hierarchical clustering of viral and control amplicons, log 10 (mapped reads + 1), was used to analyze and classify all samples. Samples with a total mapped read count lower than the RNA QC threshold were labeled as inconclusive and removed before the analysis. Known positive (high, medium, and low) and negative control samples were used as references to distinguish different clusters. In addition, dilutions of synthetic SARS-CoV-2 RNA were also included as controls and analyzed across different PCR cycles and primer pool conditions. Viral mutation assessment To remove PCR and sequencing errors for the assessment of viral sequence variations, we determined the top enriched amplicon sequence. For this, paired-end reads were first stitched together to evaluate full-length amplicons. The last 12 nucleotides of the read 1 sequence were used to join the reverse complement of the read 2 sequence, with no mismatches allowed in the stitching. The number of full-length reads per unique sequence variation was counted for each amplicon per sample by matching the 10 nucleotides from the 3′ and 5′ ends of the sequence with gene-specific primers (scripts are available at 27 and 28 ). The top enriched sequence variant from each sample was used for multiple alignment analysis using CLUSTALW V2.1. Non-specific amplicon assessment Single-end reads that contain the first 10 nucleotides of the Illumina adaptor sequence were counted and binned into relevant forward and reverse gene-specific primer pools by matching the first 10 nt of the reads with primer sequences. Relative abundance of the non-specific amplicons was quantified as the percentage of reads corresponding to non-specific amplicons per forward or reverse primer (scripts are available at 28 ). Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability Data that support the findings of this study have been deposited in the Gene Expression Omnibus (GEO) at NCBI with the accession code GSE160036 . Figure 1 raw data, PoC cohort: GEO accession number GSE160031; Fig. 2 and Supplementary Fig. 2 raw data, Test cohort: GEO accession number GSE160032; Fig. 3 and Supplementary Figs. 3, 4 raw data, Pilot cohort: GEO accession number GSE160033; Fig. 4 and Supplementary Figs. 5 and 6 raw data, Extended cohort: GEO accession number GSE160034. Severe acute respiratory syndrome coronavirus 2 isolate Wuhan-Hu-1, complete genome: NCBI sequence ID: NC_045512 was used as reference for primer design and sequence analysis. Source data are provided with this paper. Code availability We provided the code for demultiplexing and mapping at 25 , quality filtering at 26 , and viral mutation assessment and non-specific amplicon assessment at 27 and 28 . | None | [] | [] | [] | SciNews | Medicine | Nature Communications (2021).
DOI: 10.1038/s41467-021-21653-y Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-021-21653-y | https://medicalxpress.com/news/2021-03-automated-sequencing-platform-accurately-screen.html
10.1038/s41598-020-75844-6 | New study into a rare type of cancer in abdomen lining shows possible immunotherapy treatment | A new study from the University of Birmingham has found that 50% of patients with a rare type of cancer that has spread into the lining of their abdomen may be suitable for immunotherapy treatment. Unfortunately for around 1% of bowel cancer patients, their cancer spreads to the lining of their abdomen (peritoneal cavity) - known as colorectal peritoneal metastasis (CPM). This type of spread in bowel cancer patients carries a very poor prognosis and so most patients do not survive beyond 12 months from diagnosis. Patients with CPM have a limited survival rate with the best available treatments. Conventional chemotherapy is ineffective, and current treatment consists of extensive surgery which does not always work. This first-of-its-kind study, funded by Good Hope Hospital Charity, found that understanding the tumor biology may identify which patients with bowel cancer are at risk of developing CPM. Results published in Scientific Reports show that, based on the specific tumor biology of this group of patients, they carry a specific mutation that makes them sensitive to immunotherapy. Lead author, Professor Andrew Beggs from the University of Birmingham's Institute of Cancer and Genomic Sciences, said: "We have found that approximately 50% of patients with CPM have a type of genetic change, called hypermutation. This means they may be sensitive to immunotherapy as this type of treatment has good results in other patient groups with hypermutations. "We also found potential sensitivity to a drug called a Porcupine inhibitor, based on another genetic marker identified in these patients. "This is the first study of its kind in the world for patients with CPM, and our results have shown this could provide a potentially curative option for patients given the responses we have seen to immunotherapy in other cancers." Researchers will now look to set up an international clinical trial to examine the use of immunotherapy for patients with CPM. | A new study from the University of Birmingham has found that approximately 50% of patients with colorectal peritoneal metastasis (CPM), a rare and aggressive form of bowel cancer, may be suitable for immunotherapy treatment. CPM has a poor prognosis, with most patients not surviving beyond 12 months from diagnosis, and current treatments are ineffective. The study, funded by Good Hope Hospital Charity, identified a specific mutation in the tumor biology of patients with CPM, known as hypermutation, which makes them sensitive to immunotherapy. Additionally, researchers found potential sensitivity to a drug called a Porcupine inhibitor based on another genetic marker. The study's lead author, Professor Andrew Beggs, believes that immunotherapy could provide a potentially curative option for patients with CPM, and researchers are now planning to set up an international clinical trial to examine the use of immunotherapy for this patient group. | None | Abstract Colorectal peritoneal metastases (CPM) develop in 15% of colorectal cancers. Cytoreductive surgery and heated intraperitoneal chemotherapy (CRS & HIPEC) is the current standard of care in selected patients with limited resectable CPM. Despite selection using known prognostic factors, survival is varied and morbidity and mortality are relatively high. There is a need to improve patient selection and a paucity of research concerning the biology of isolated CPM.
We aimed to determine the biology associated with the transition from primary CRC to CPM and with failure to respond to treatment with CRS & HIPEC, to identify those suitable for treatment with CRS & HIPEC, and to identify targets for existing repurposed or novel treatment strategies. A cohort of patients with CPM treated with CRS & HIPEC was recruited and divided according to prognosis. Molecular profiling of the transcriptome (n = 25), epigenome (n = 24) and genome (n = 21) of CPM and matched primary CRC was performed. CPM were characterised by frequent Wnt/β-catenin negative regulator mutations, TET2 mutations, mismatch repair mutations and high tumour mutational burden. Here we show the molecular features associated with CPM development and with not responding to CRS & HIPEC. Potential applications include improving patient selection for treatment with CRS & HIPEC and informing future research into novel and personalised treatments targeting the molecular features identified here. Background Little is known about the biology of isolated colorectal peritoneal metastasis (CPM), which, although a relatively rare phenomenon, is one with a high mortality rate 1 . Understanding tumour biology may identify which patients with primary colorectal cancer (CRC) are at risk of developing CPM, and which are suitable for treatment with cytoreductive surgery and heated intra-peritoneal chemotherapy (CRS & HIPEC). CRS & HIPEC (usually using an agent such as mitomycin C or, more recently, oxaliplatin) aims to achieve macroscopic tumour resection with multiple visceral and peritoneal resections and ablation of microscopic disease. Five-year survival, however, varies widely, and morbidity and mortality are relatively high 2 . There is a need therefore to improve patient selection, allowing alternative existing or novel treatment strategies to be used for patients unlikely to respond. Primary CRC research has identified markers of response to specific treatments, for example KRAS mutation in selection for anti-EGFR mAb therapy 3 . Gene expression signatures have been developed and are in clinical use for prognostication and therapeutic stratification in breast cancer 4 , 5 , 6 , 7 . Gene expression profiling in primary CRC has identified signatures associated with the development of metastasis 6 . One small study combining a small number of CPM with a larger cohort of appendix adenocarcinoma identified a signature predictive of reduced overall survival (OS) following CRS & HIPEC; these are however two biologically distinct tumours, appendix having significantly improved prognosis 7 . The dysregulation of methylation is a key step in tumorigenesis. CpG island promoter methylation (CIMP) appears to be stable between matched primary CRC and hepatic metastasis, suggesting an epigenetic methylation programme is established prior to the development of metastasis 8 , 9 , 10 . Hypermethylation of KRAS, Wnt modulators, tumour suppressor genes, CIMP and hypomethylation of oncogenes are associated with an unfavourable response to chemotherapy and anti-EGFR antibodies as well as tumour recurrence and reduced OS in primary and metastatic CRC 11 , 12 , 13 , 14 , 15 , 16 . Chromosomal instability is ubiquitous in cancer; increased copy number alteration, indicative of chromosomal instability, is found in metastatic CRC 17 , 18 . Lopez-Garcia et al.
19 demonstrated that the evolution of chromosomal instability depends on cellular tolerance, either via dysregulation of TP53 or via alternate escape mechanisms such as dysfunction of BCL9L-regulated caspase signalling. CRC metastatic drivers are less clearly defined, apart from TP53, which is well characterised as being present in metastatic cancer 20 . Some studies have found mutations exclusive to metastatic sites 21 , 22 , whereas others found similar patterns of mutation between primary and metastasis 23 . Studies have examined the somatic mutations in CPM and their prognostic implications. These studies are limited to individual or small panels of mutations routinely tested for in clinical practice, with limited evidence to suggest which genes should be included in panel sequencing in CPM. Schneider et al. examined the KRAS and BRAF mutation status of patients with CPM who underwent CRS & HIPEC 24 . They found mutations of RAS/RAF were associated with reduced OS independent of the use of targeted anti-EGFR treatment 24 . Sasaki et al. examined the KRAS, BRAF and PIK3CA mutation status of patients with metastatic CRC, with or without CPM 25 . They found the incidence of BRAF mutation was significantly associated with the presence of CPM but not with prognosis 25 . The landscape of metastatic colorectal cancer was studied by the MSK-IMPACT 20 group, which undertook panel-based sequencing of 1134 metastatic colorectal cancers. Of these, 39 patients were defined as having “peritoneal” malignancy; it is unclear whether these were isolated peritoneal metastases. Only 14 of these patients had metastasectomy; 7 of these had peritonectomy, suggesting isolated disease suitable for resection. These tumours were also not studied alongside the matched primary tumour of origin. There is a need to improve the outcomes for patients with CPM, and there is significant variation in survival despite patient selection for treatment using known prognostic factors. There is a paucity of knowledge concerning CPM tumour biology. Understanding tumour biology will identify patients with primary CRC at risk of developing CPM, and those suitable for treatment with CRS & HIPEC or alternative existing and novel treatment strategies. This study aims to determine the landscape of gene expression, methylation, and somatic mutation profile associated with the transition from primary CRC to isolated CPM and determine the association between these and prognosis following CRS & HIPEC in order to identify therapeutic targets. Methods Patient cohorts This study obtained ethical approval from the North West Haydock Research Ethics Committee (15/NW/0079), project ID (17/283). Participants gave informed consent. All experiments were performed in accordance with relevant guidelines and regulations. Consecutive retrospective patients were recruited from an internally held database of all patients undergoing CRS & HIPEC at Good Hope Hospital from 2011 to 2017. Patients with CPM (adenocarcinoma), no extra-abdominal metastasis, a complete resection (CC0) and a peritoneal carcinomatosis index (PCI) of < 12 were eligible for inclusion. The completeness of cytoreduction score describes the degree of macroscopic tumour remaining after CRS and the likelihood of benefit from intraperitoneal chemotherapy 26 . Patients with no residual tumour score CC0; those with residual tumour < 0.25 cm, CC1; and those with residual tumour 0.25–2.5 cm, CC2. The extent of peritoneal metastasis is described by the PCI score. A PCI of ≥ 12 is a poor prognostic factor for patients undergoing CRS & HIPEC 27 .
Patients were divided into two groups. CRS & HIPEC is a long operation associated with a protracted inpatient and high dependency (HDU) or intensive care (ITU) stay, a mortality of 1–12%, a morbidity of 7–63%, and a prolonged post-operative recovery 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 . With palliative chemotherapy, DFS is 11–13 months, and therefore patients with disease free survival (DFS) < 12 months post-treatment (CRS & HIPEC) were defined as “non-responders” 38 . Patients undergoing therapy with DFS > 12 months were defined as “responders”. Patients were imaged with CT, which was reported by an experienced CPM radiologist; diagnostic laparoscopy was not used, as not all patients with recurrence are suitable for iterative CRS & HIPEC, and so it is not a standard procedure in their follow up. Adhesions following primary excision and CRS & HIPEC may also preclude accurate assessment of peritoneal recurrence in all areas with laparoscopy. Disease recurrence was determined when confirmed by CT and MDT review. Demographic, tumour and treatment details were compared between the prognostic cohorts. For continuous variables, the Student’s t-test was applied to normally distributed data and the Mann–Whitney U test to non-normally distributed data. Categorical variables were compared with the Chi-squared test or Fisher’s exact test. A p value of < 0.05 was considered statistically significant. DFS between the responders and non-responders was compared using the Kaplan–Meier method. Statistical analysis was performed in IBM SPSS Statistics for Windows, Version 24.0 39 . Nucleic acid extraction DNA and RNA were extracted from histologically confirmed formalin-fixed, paraffin-embedded (FFPE) scrolls using the Covaris E220 evolution focused-ultrasonicator and the truTRAC FFPE total NA Kit. All peritoneal metastases samples were taken at the commencement of surgery. Nucleic acid concentration was quantified using the Qubit 3.0 Fluorometer and Qubit RNA / DNA HS (high sensitivity) assay kit. Nucleic acid quality was measured by electrophoresis using the Agilent 2200 TapeStation Nucleic Acid System, Agilent 2200 TapeStation Software A.01.05 and the Agilent High Sensitivity RNA / DNA ScreenTape and reagents. RNA library preparation, sequencing and bioinformatics RNA library preparation was performed using the Lexogen Quant Seq 3′ mRNA-Seq Library Prep kit. RNA libraries were denatured, diluted, loaded onto a 75-cycle High output flow cell and sequenced using the NextSeq500 at 2.5–5 million reads 40 . Quality control, trimming and alignment to the reference genome (NCBI build 37, hg19) were performed with the Partek Flow genomics suite software package (Partek, St Louis, MI, USA). The gene expression profiles of primary CRC and CPM, and of responders and non-responders, were compared using Gene Specific Analysis (GSA) modelling in Partek Flow with a false discovery rate (FDR) of < 0.1. Gene set enrichment analysis (GSEA) and gene expression pathway analysis were performed using Partek Flow; a p value of ≤ 0.05 was considered statistically significant. CMS and CRIS classifications were performed using ‘CMScaller’ (v0.99.1) in the R package, version 2.10.2 38 , 41 , 42 . Fisher’s exact test was used to compare contingency between primary and CPM and responders and non-responders in IBM SPSS Statistics for Windows, Version 24.0 39 . A p value of < 0.05 was considered significant.
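The survival comparison was run in SPSS, but an equivalent Kaplan–Meier analysis is easy to sketch with R's survival package, which may help readers re-using the cohort definitions. The follow-up times below are synthetic stand-ins consistent with the responder (DFS > 12 months) and non-responder (DFS < 12 months) definitions, not the study data.

```r
# Illustrative Kaplan-Meier comparison of DFS between responders and
# non-responders; follow-up times are synthetic, and the real analysis
# was performed in SPSS.
library(survival)

dfs <- data.frame(
  months = c(15, 24, 30, 36, 48, 60, 66, 72,   # responders (DFS > 12 months)
             2, 4, 5, 6, 6, 8, 10, 11),        # non-responders (DFS < 12 months)
  event  = 1,                                  # recurrence observed in all cases
  group  = rep(c("responder", "non-responder"), each = 8)
)

fit <- survfit(Surv(months, event) ~ group, data = dfs)
plot(fit, col = c("blue", "red"), xlab = "Months from CRS & HIPEC",
     ylab = "Disease-free survival")
survdiff(Surv(months, event) ~ group, data = dfs)   # log-rank test
```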
Methylation array and bioinformatics DNA was treated with sodium bisulphite using the Zymo EZ-DNA methylation kit, according to the manufacturer’s instructions. Degraded FFPE DNA was restored prior to methylation array with the Infinium HD FFPE restore kit, according to the manufacturer’s instructions. Methylation array was performed according to the Infinium MethylationEPIC BeadChip Kit manufacturer’s instructions. BeadChips were imaged using the Illumina iScan system. Initial data quality was checked using GenomeStudio Methylation Module Software. Raw data was loaded into RStudio (version 3.5.0) using the minfi package. Bioinformatics analysis was performed using the Chip Analysis Methylation Pipeline (ChAMP) R package, version 2.10.2 43 , 44 . Probes with signals from fewer than three functional beads, with low confidence (detection p value > 0.01), covering SNPs, non-CpG probes, and those located on the X and Y chromosomes were filtered. Beta-mixture quantile normalization (BMIQ) was applied and a singular value decomposition (SVD) performed to identify batch effects. The association between methylation and prognosis was determined using the Bioconductor R package limma and bumphunter functions. Copy number alteration calling was performed using the ChAMP CNA function with a significance threshold of p < 1 × 10 −10 . Exome capture, high-throughput sequencing and bioinformatics DNA was sheared using the Covaris E220 evolution focused-ultrasonicator to produce a uniform 150 bp fragment size. Libraries were prepared using the TruSeq Exome Kit then denatured, diluted, loaded onto a 150-cycle High output flow cell and sequenced using the NextSeq500. Sequencing reads were assessed using FastQC. Sequences with a Phred score of < 30 were removed, giving a base call accuracy of 99.9%. Sequence reads were aligned to the human reference genome (hg19) using the Burrows–Wheeler Aligner (BWA) package 45 . SAMTools was used to generate chromosomal coordinate-sorted BAM files and Picard was used to remove PCR duplicates 46 . Somatic variants were called from matched tumour-normal samples using Strelka2 in tumour/normal mode 47 . Somatic variants were viewed, filtered and annotated in Genomics Workbench 48 . Mutations with a MAF of > 1% in known variant databases (dbSNP and 100,000 Genomes) were filtered. Mutations were annotated with information from known variant databases (dbSNP and 100,000 Genomes), PhastCons score and functional consequences. The prognostic groups were compared using Fisher’s exact test to identify potential candidate driver mutations for non-responders. Somatic mutations were entered into the IntOGen platform for further analysis 49 . The IntOGen-mutation platform incorporates a number of pipelines to identify cancer driver mutations and activated pathways 49 . The OncodriveFM pipeline identifies mutations with a high functional impact using three scoring methods (Sorting Intolerant From Tolerant (SIFT) 50 , PolyPhen2 51 , and Mutation Assessor scores) 49 , 52 , and assesses the likelihood that such mutations are cancer drivers. The OncodriveCLUST pipeline assesses the clustering of mutations to identify relevant activated pathways 49 . MSI assessment was carried out using MSI_classifier_v3 ( ). Ethics approval and consent to participate North West Haydock Research Ethics Committee (15/NW/0079), project ID (17/283).
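For orientation, the ChAMP steps named above chain together roughly as follows. This is a hedged outline: function and argument names follow ChAMP's documented interface (champ.load, champ.norm, champ.SVD, champ.DMP, champ.DMR, champ.CNA), but defaults differ between versions and the authors' exact options are not reproduced here.

```r
# Hedged outline of a ChAMP-based EPIC methylation workflow, not the authors'
# script; argument names follow ChAMP's documented interface.
library(ChAMP)

myLoad <- champ.load(directory = "idats/", arraytype = "EPIC")    # import + probe filtering
myNorm <- champ.norm(beta = myLoad$beta, arraytype = "EPIC",
                     method = "BMIQ")                             # BMIQ normalisation
champ.SVD(beta = myNorm, pd = myLoad$pd)                          # batch-effect survey
dmp <- champ.DMP(beta = myNorm, pheno = myLoad$pd$Sample_Group)   # limma-based probes
dmr <- champ.DMR(beta = myNorm, pheno = myLoad$pd$Sample_Group,
                 method = "Bumphunter")                           # bumphunter regions
cna <- champ.CNA(intensity = myLoad$intensity,
                 pheno = myLoad$pd$Sample_Group)                  # copy number calls
```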
Results
Patient cohort
From 2011 to 2017, a total of n = 161 patients underwent CRS & HIPEC at University Hospitals Birmingham, n = 88 of them for metachronous CPM. Patients were excluded for the following reasons: other primary tumour (appendix, pseudomyxoma peritonei, ovarian) n = 49, synchronous colorectal cancer n = 26, no primary tumour available n = 53, CC2 resection n = 8 26, PCI of ≥ 12 n = 20 and follow-up period of ≤ 12 months n = 27, leaving n = 28 patients. Complete information regarding the primary CRC pathology and treatment was available for n = 26 patients, who form the basis of this study. Each patient had matched normal, primary CRC and CPM samples. Thirteen patients had a median DFS of 24 months (range 15–72) following CRS & HIPEC and formed the 'responders' cohort; thirteen patients had a median DFS of 6 months (range 2–11) and formed the 'non-responders'. There were no significant differences between the cohorts in demographics, primary CRC or CPM tumour, treatment or follow-up (Table 1). No patients had neoadjuvant therapy for their primary tumour. Three patients (all in the responders group) had poorly differentiated, mucinous adenocarcinoma, one (in the non-responders group) had signet ring adenocarcinoma, and all the others had moderately differentiated adenocarcinoma. Table 1: Comparison of responders and non-responders to CRS & HIPEC.

Following nucleic acid extraction, all patients had adequate CPM RNA for RNA-seq (n = 13 responders, n = 13 non-responders); n = 25 had matched primary CRC samples. For the methylation array, n = 24 patients (n = 12 responders, n = 12 non-responders) had adequate DNA. As the Infinium methylation array comprises a 32-prep kit, n = 4 responders' and n = 4 non-responders' primary tumours were matched to these. For exome sequencing, n = 24 patients (n = 12 responders, n = 12 non-responders) had adequate DNA from both the primary and CPM samples; extraction of DNA from normal tissue resulted in n = 21 samples (n = 9 responders, n = 12 non-responders).

Exome sequencing
Across all six sequencing runs, we obtained a median coverage of 60X (range 42–166) with a median uniformity of 88% (range 71–89).

Somatic mutations identified in the primary and matched CPM cohort
In the matched CPM cohort, a total of n = 244,531 somatic SNVs was identified (CPM–primary subtraction), significantly more than in the matched primary cohort (n = 112,420). Nine of the 24 CPM samples (9/24) had a high tumour mutational burden (TMB ≥ 10 mut/Mb) 53, compared with 7/24 samples in the matched primary cohort. Mutations were identified in n = 69 of n = 95 known CRC driver genes; n = 51 were shared between the primary and CPM, and n = 13 were novel (supplementary table S1) 54. Of the somatic variants identified in CPM, n = 58,958 (29%) were present in the primary CRC; n = 205,552 variants occurred exclusively in the CPM, suggesting a significant accumulation of mutations in the transition to CPM (Fig. 1). OncodriveFM identified n = 265 potential driver genes with high levels of functional mutation (Q-value < 0.05) in the CPM cohort, including FLNB, SPTB, PPL, TP53, PDE4DIP, RIOK2, CDC16, NUP98 and SVEP1 (supplementary table S2); however, these results must be treated with caution owing to the bias of the hypermutator phenotype. KEGG pathway analysis of mutations demonstrated enrichment in pathways concerning the immune system, signalling, metabolism and cancer (supplementary table S1). In the CPM group, KRAS or BRAF status was not significantly associated with prognosis (chi-squared p = 1.00).
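As a worked example of the TMB threshold used above: TMB is the number of somatic mutations divided by the number of megabases of callable sequence, and samples at or above 10 mut/Mb are labelled TMB-high. The 45 Mb callable size below is an illustrative assumption for an exome capture kit, not a value reported in this study.

# TMB (mutations per megabase) and the >= 10 mut/Mb hypermutation cutoff
tmb <- function(n_somatic, callable_mb = 45) n_somatic / callable_mb
tmb(c(520, 300))          # 11.6 and 6.7 mut/Mb
tmb(c(520, 300)) >= 10    # TRUE FALSE -> only the first sample is TMB-high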
Figure 1: Venn diagrams depicting the frequency of mutations exclusive to and shared between primary CRC and matched CPM, and between responders and non-responders.

Clonality analysis with SuperFreq showed significant (Wilcoxon rank p = 0.007) differences between the responders and non-responders, with a median of 2 clones in the responders' primary tumours (range 1–4) and 3 clones in the non-responders' (range 2–7). In the peritoneal metastases there was a median of 3 clones in both the responders (range 1–4) and non-responders (range 2–5). Of note, in the non-responders group the dominant clone in the peritoneal metastasis arose de novo during clonal expansion, rather than being a pre-existing clone from the primary tumour (Supplementary Fig. 1, S1e). Among primary tumours, 9/19 were MSI (47.4%) and 10/19 were MSS (52.6%), whereas in the isolated peritoneal metastasis group 4/21 (19.0%) were MSS and 17/21 (81.0%) were MSI, demonstrating a significantly higher rate of MSI in the isolated peritoneal metastasis group (p < 0.05, chi-squared). Non-responders had a higher frequency of somatic mutations: 60% of all mutations in the CPM cohort vs. 40%. Non-responders also more commonly had a high tumour mutational burden (TMB ≥ 10 mut/Mb 53): 56% vs. 44%. Of the somatic mutations identified in non-responders, n = 35,461 (30%) were present in responders and n = 145,089 variants occurred exclusively in non-responders, suggesting that a high tumour mutational burden was associated with non-response to CRS & HIPEC (Fig. 1). Mutational signature analysis of the MSI tumours demonstrated a predominance of signature 5 (associated with mutational "clock" effects), signature 26 (associated with defective mismatch repair) and signature 20 (associated with defective mismatch repair). Comparison of somatic mutations in responders and non-responders identified two potential candidate genes for identifying non-responders, FAM13A and PIEZO2 (Fisher's exact p < 0.05, FDR = 0.53) (Table 2). Table 2: Potential candidate variants, non-responders to CRS & HIPEC.

Differential gene expression
Differential gene expression between primary CRC and matched CPM
Primary CRC and matched CPM showed differential expression of n = 65 genes with an FDR < 0.1 (Fig. 2). Sixteen genes showed significantly decreased expression in CPM compared with primary CRC (Table 3). Forty-nine genes showed significantly increased expression in CPM compared with primary CRC (Table 3). A KEGG pathway analysis was performed to identify the enriched biological functions among the differentially expressed genes (Supplementary Table 1). The expression of FABP6, an intracellular bile acid transporter, was decreased 34.30-fold in CPM. OLFM4 is a target of the Wnt/β-catenin pathway; its expression was reduced 3.77-fold in CPM. DCN and PTEN are able to initiate a number of signalling pathways, including ERK and EGFR, leading to growth suppression; their expression was increased 3.3-fold and 3.25-fold in CPM, which was unexpected and in contrast to the literature 55. NFKBIA expression was increased 3.24-fold in CPM; its upregulation may reflect increased NF-κB activity in the development of CPM 56. Figure 2: Heatmap of differential gene expression for the 100 genes with the highest variance between primary CRC (P, red) and colorectal peritoneal metastasis (CRS, blue). Sample type is indicated on the X axis of the heatmap, with individual genes on the Y axis.
Individual IDs of each patient are shown below the indicators of primary or CRS sample. Gene expression, as indicated by the Z-score, is displayed as colour ranging from green to black to red, as shown in the legend. Created in Partek Flow. Table 3: The top 10 genes with significantly altered expression (FDR < 0.1) in CPM samples compared with primary CRC samples.

Gene-set enrichment analysis (GSEA) results are presented in supplementary table 5. We identified 848 upregulated gene ontology categories in CPM and 14 upregulated gene pathways, which may contribute to the pathogenesis of CPM: the mTOR pathway, as well as immune pathways including the intestinal immune network for IgA production, leukocyte transendothelial migration and the actin cytoskeleton pathway.

Differential gene expression between non-responders and responders to CRS & HIPEC
One hundred and forty-nine genes showed increased expression in non-responders (Fig. 3). Five genes showed decreased expression in non-responders; however, none had a fold change ≥ 1.5, suggesting minimal difference in expression between the responders and non-responders (Supplementary Table 2). KEGG pathway analysis demonstrated enrichment in endocytosis, metabolism, phagocytosis, cell movement and architecture, bacterial and viral cell infection, transcription, and the expression of genes controlling apoptosis, cell cycle, oxidative stress resistance and longevity (Table 3). The expression of CEACAM1, a member of the carcinoembryonic antigen (CEA) immunoglobulin family, was increased 8.27-fold in non-responders 57. Figure 3: Heatmap of differential gene expression for the top 100 genes ranked by variance between responders (blue) and non-responders (red). Sample type is indicated at the transverse border of the heatmap, with individual genes on the longitudinal border. Gene expression, as indicated by the Z-score, is displayed as colour ranging from green to black to red, as shown in the legend. Created in Partek Flow.

AXIN1 encodes a cytoplasmic protein that forms part of the β-catenin destruction complex, a negative regulator of the WNT signalling pathway 58. AXIN1 expression was increased 5.42-fold in non-responders 59. Gene-set enrichment analysis (GSEA) results are presented in supplementary table 6. We identified 591 upregulated gene ontology categories and 15 upregulated gene pathways in non-responders, which may contribute to the pathogenesis of CPM: endocytosis, the adherens junction pathway and immune pathways such as those regulating the bacterial invasion of epithelial cells. Among the n = 51 primary CRC and CPM samples, n = 29 could be assigned to a CMS subtype; the remaining n = 22 samples did not have a consistent pattern (Fig. 4). Comparison of the CMS subtypes between primary and CPM and between the prognostic groups revealed an apparent transition from primary CRC to CPM. First, no primary CRC samples were classified as CMS4 (the mesenchymal subtype characterized by prominent transforming growth factor activation, stromal invasion and angiogenesis), compared with 31% of CPM (p = 0.085). Second, non-responders were more commonly CMS4: 46% vs. 15% (p = 0.005, Table 4). Figure 4: Sankey diagram depicting the transition in consensus molecular subtypes (CMS) from primary to CPM. CMS classifications were performed using 'CMScaller' (v0.99.1) in the R/Bioconductor statistics package. Classifications include CMS1 to CMS4; non-consensus samples do not have a consistent pattern of subtype label association.
Primary CRC samples, classification and number are shown to the left of the diagram, with CPM samples, classification and number to the right. Fisher's exact p value 0.085; values in parentheses are percentages. Table 4: CMS classification, responders vs. non-responders to CRS & HIPEC.

Methylation
Differential methylation between primary CRC and matched CPM
Thirty-two samples in total were hybridised successfully to the Illumina HumanMethylation EPIC microarrays. DMPs were called between the primary CRC and CPM. The top-ranked differentially methylated probe was cg04146982, BF 34.5, adjusted p value 5.67 × 10⁻¹⁶ (chr8:144,943,810–144,943,810, hg19 coordinates), which tags a CpG dinucleotide 3,651 bp upstream of the transcription start site of the gene Epiplakin 1 (EPPK1) 60. EPPK1 is part of the plakin family, an important component of the cell cytoskeleton 61. The other DMP was cg12209861, BF 7.1, adjusted p value 0.059 (chr4:37,459,078–37,459,078, hg19 coordinates), 3,526 bp upstream of the transcription start site of the gene Chromosome 4 Open Reading Frame 19 (C4orf19). DMRs were called between primary CRC and CPM via the dmrLasso function of the ChAMP pipeline (Supplementary Table 3). The 10 most significant DMRs were in the regions of IGF2, ZNF461, RASGRF1, CELF4, ZSCAN18, EDNRB, ZBED9, VTRNA2-1, ZNF256 and EGFLAM. KEGG pathway analysis did not reveal any significantly enriched pathways. Comparison of CNA between primary and CPM via the methylation arrays did not identify any significant differences at the stringent threshold of p < 1 × 10⁻¹⁰; however, a number of CNAs were identified at a lower significance threshold, p = 2.78 × 10⁻⁷ (Supplementary Table 4). Genes showing CNA gains of known significance in patients with CPM included TRIM3, 5, 6, 21 and 22, and MT1A, 2A, 3 and 4, the latter encoding proteins of the metallothionein family.

Differential methylation between non-responders and responders to CRS & HIPEC
The top-ranked differentially methylated probe was cg07951355, BF = 6 (chr1:40,123,717), which tags an intergenic region 1,076 bp before the gene NT5C1A. cg25909064, BF 4, adjusted p value 0.47 (chr11:120,081,487–120,082,345), tags an intron of the gene OAF, and cg12977942, BF 4, adjusted p value 0.47 (chr5:92,839,309–92,839,309), tags an intron of the gene NR2F1-AS1 60. Six significant DMRs (Supplementary Table 3) were identified in the regions of NKX6-2, CHFR, GATA3, IRX5, HCK and BC019904. KEGG pathway analysis did not reveal any significantly enriched pathways. Comparison of CNA between the CPM prognostic groups identified recurrent gene losses at chromosomes 3, 4, 14, 15, 17 and 19 (Supplementary Table 4). CNA losses clustered in the RAS–MAPK–ERK signalling pathway, suggesting dysregulation of this pathway in non-responders. Comparison of CNA between the CPM prognostic groups also identified n = 19 gene gains at chromosomes 9, 10 and 11. Genes showing CNA gains in non-responders included SIT1, RNF38, MELK, PAX5, SHB, ZEB1, DEAF1, ANTXR, EPS8L2 and PIDD1.

Discussion
This study determined the gene expression, CNA, methylation and somatic mutation profiles of primary CRC and matched isolated CPM to determine whether there were changes associated with the development of CPM or with prognosis for patients with CPM. To our knowledge, this is the first such analysis in a cohort of patients with isolated CPM suitable for treatment with CRS & HIPEC.
The MSKCC cohort of metastatic cancer 20 comprised a diverse range of metastatic cancers, none of which overlapped with the type we have studied: isolated colorectal peritoneal metastasis, with matched primary samples, suitable for cytoreduction. Within this study, responders and non-responders to CRS & HIPEC were well matched by demographics, tumour stage, treatment and follow-up. PCI varied between the groups, with responders having a median PCI of 5 (range 3–12) and non-responders a median PCI of 8 (range 2–12). A PCI of greater than 12 is associated with reduced survival following CRS & HIPEC; no significant difference is consistently found at PCI levels below this 27.

Comparison of patients with primary CRC and metachronous CPM identified biological changes associated with the transition from primary CRC to CPM. Hypermethylation, CNA and hypermutation resulted in the inactivation of tumour suppressors and the activation of oncogenes in CPM (TP53, VTRNA2-1, TRIM proteins). These changes suggest a rapid rate of tumour growth unchecked by tumour suppressor or apoptotic mechanisms. Increased MAPK and Wnt/β-catenin pathway activation was noted in CPM. Gene expression of negative regulators of the Wnt pathway was reduced (OLFM4, DEFA6), negative Wnt regulators contained somatic mutations (APC, RNF43, FAM123B and TSC1), and the MAPK marker RASGRF1 was hypermethylated, suggesting persistent activation of the MAPK and Wnt pathways. The multiple mutations of negative Wnt signalling regulators make this an attractive therapeutic target. Porcupine inhibitors block the palmitoylation of Wnt ligands, thereby blocking Wnt signalling. The porcupine inhibitor LGK974 targets Wnt signalling in tumours carrying mutations of the upstream negative Wnt regulator RNF43 and is a potential therapeutic option in CPM 62.

CPM contained a high proportion of MSH6 somatic mutations, suggesting deficiency in the mismatch repair pathway and MSI. MSH6 mutations are commonly found in isolated peritoneal metastasis 59. As expected for tumours with mismatch repair deficiency, both the primary CRC and CPM cohorts had a high tumour mutational burden; crucially, this suggests they may respond well to treatment with immune checkpoint inhibitors such as pembrolizumab 63, a new therapeutic avenue for these difficult-to-treat patients. The frequency of hypermutation seen in our study (48%) was considerably higher than that observed for both the MSKCC metastatic disease cohort (5%) and the TCGA colorectal cohort 64 (10%). The expression of genes regulating innate immunity, however, was downregulated (DEFA6, DMBT1, MUC2) or altered via somatic mutations (HLA-A antigen), suggesting immune evasion in the transition to CPM, which may reduce the likelihood of successful PD-1 therapy. The expression of genes suppressing invasion, migration and EMT was downregulated or hypermethylated (MUC2, MMP26, ILK, FLNB, SPTB, PPL and SVEP1), and that of genes triggering these processes was upregulated (CYR61, CXCL12, CTGF and CSTB). These changes suggest a mechanism by which CPM cells metastasise from the primary CRC. In keeping with the changes in EMT regulators, there appeared to be a transition in CMS subtypes towards CMS4 from primary CRC to CPM. The CMS4 subtype is an interesting therapeutic target: TGFβ signalling inhibitors and targeted immunotherapies have been trialled with success in pre-clinical models to block cross-talk with the tumour microenvironment and halt disease progression of stromal-rich CMS4 CRC 65,66.
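Given how central the CMS labels are to this discussion, a brief sketch of how such calls can be generated with the CMScaller package (as cited in the Methods) may help orient the reader. The expression matrix emat and phenotype table pheno are hypothetical placeholders, arguments may vary between package versions, and this is not the authors' code.

library(CMScaller)
# emat: genes x samples expression matrix with Entrez gene IDs as row names
cms <- CMScaller(emat, RNAseq = TRUE, doPlot = TRUE)
head(cms$prediction)             # CMS1-CMS4; NA marks non-consensus samples
# CRIS labels via the same nearest-template-prediction machinery
cris <- ntp(ematAdjust(emat), templates.CRIS)
# Contingency of CMS4 vs. the rest between the prognostic groups
tab <- table(cms4 = cms$prediction == "CMS4", group = pheno$responder)
fisher.test(tab)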
Methylation appeared to be dysregulated in CPM, with a bias towards a hypermethylator phenotype caused by somatic mutation of the TET2 tumour suppressor and the CHD7 chromatin regulator. Active DNA demethylation by TET enzymes is an important tumour suppressor mechanism in a variety of cancers 67,68,69. This cohort also showed downregulation of CES2, a gene known to activate the prodrug irinotecan, a chemotherapy used in the UK as part of the FOLFIRI regimen in the adjuvant treatment of primary CRC and CPM. Resistance to the treatment of primary CRC may in part explain the development of CPM. CEACAM1 expression correlates with metastasis and reduced survival in CRC and was upregulated in this cohort of patients 70. Novel therapies in the form of CEA-TCB IgG-based T-cell bispecific antibodies (cibisatamab) may therefore be of benefit 71. Additionally, there was downregulation of the gene expression of negative regulators of the Wnt pathway (AXIN1), somatic mutation of key Wnt regulators (FAM13A) and hypermethylation of MAPK and TGF-β pathway markers (RAB8A, RAB34, FGF5 and BMP3), suggesting persistent activation of MAPK, TGF-β and Wnt signalling in non-responders to CRS & HIPEC.

A recent randomised controlled trial has called into question the use of HIPEC in CPM: PRODIGE-7 treated patients with CPM with CRS & HIPEC or CRS alone, in addition to systemic chemotherapy, and suggested no added benefit from HIPEC. However, this study was not powered to stratify the impact of HIPEC according to PCI score; on subgroup analysis, patients with a PCI of 11–15 had significantly improved median survival with the addition of HIPEC (41.6 vs. 32.7 months, p = 0.0209) 72. A relative weakness of our study is the small cohort of patients; the biological changes identified here form a starting point in identifying the tumour biology associated with the development of CPM and in predicting non-responders to CRS & HIPEC. Nevertheless, we have identified multiple potential targets for therapy, along with the important finding that CPM appears to be a hypermutated, hypermethylated, immune-evasive cancer, which makes it potentially targetable by emerging novel therapeutics. Our study findings also have implications for the recent addition of oxaliplatin to HIPEC, as the FOXTROT study of neoadjuvant therapy in colorectal cancer showed that oxaliplatin has no effect in dMMR tumours.

Conclusions
Patients with colorectal peritoneal metastasis (CPM) secondary to colorectal cancer have limited survival with the best available treatments. Despite selection for treatment using known prognostic factors, survival varies widely and can be difficult to predict. There is a paucity of knowledge concerning the biology of CPM; it is likely that there are additional biological markers of response to currently available, as well as novel or re-purposed alternative, treatments. Here we have comprehensively profiled a cohort of patients with isolated CPM and identified a number of therapeutically targetable alterations, including mutations in Wnt/β-catenin regulators (targetable via porcupine inhibitors), the mismatch repair pathway (via PD-1/CTLA-4 immunotherapy) and methylation regulators. We suggest that these be urgently investigated in a larger cohort, with the development of pre-clinical models, as the finding that these patients may be sensitive to immunotherapy in particular may radically change the therapy options available for this difficult-to-treat group of patients.
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.

Abbreviations
CRC: colorectal cancer; CPM: colorectal peritoneal metastasis; CRS & HIPEC: cytoreductive surgery and heated intraperitoneal chemotherapy; DFS: disease-free survival; DMR: differentially methylated region; OS: overall survival; FFPE: formalin-fixed, paraffin-embedded.

Citation: Sally Hallam et al. The transition from primary colorectal cancer to isolated peritoneal malignancy is associated with an increased tumour mutational burden. Scientific Reports (2020). DOI: 10.1038/s41598-020-75844-6. Paper: http://dx.doi.org/10.1038/s41598-020-75844-6. News: https://medicalxpress.com/news/2020-11-rare-cancer-abdomen-lining-immunotherapy.html

A new study from the University of Birmingham has found that approximately 50% of patients with colorectal peritoneal metastasis (CPM), a rare and aggressive form of bowel cancer, may be suitable for immunotherapy treatment. CPM has a poor prognosis, with most patients not surviving beyond 12 months from diagnosis, and current treatments are ineffective. The study, funded by Good Hope Hospital Charity, identified a specific feature of the tumour biology of patients with CPM, known as hypermutation, which makes them sensitive to immunotherapy. Additionally, the researchers found potential sensitivity to a drug called a Porcupine inhibitor, based on another genetic marker. The study's lead author, Professor Andrew Beggs, believes that immunotherapy could provide a potentially curative option for patients with CPM, and researchers are now planning to set up an international clinical trial to examine the use of immunotherapy for this patient group.
A new study from the University of Birmingham has found that 50% of patients with a rare type of cancer that has spread into the lining of their abdomen may be suitable for immunotherapy treatment. Unfortunately, for around 1% of bowel cancer patients, their cancer spreads to the lining of their abdomen (the peritoneal cavity), known as colorectal peritoneal metastasis (CPM). This type of spread in bowel cancer patients carries a very poor prognosis, and so most patients do not survive beyond 12 months from diagnosis. Patients with CPM have a limited survival rate with the best available treatments. Conventional chemotherapy is ineffective, and current treatment consists of extensive surgery, which does not always work. This first-of-its-kind study, funded by Good Hope Hospital Charity, found that understanding the tumor biology may identify which patients with bowel cancer are at risk of developing CPM. Results published in Scientific Reports show that, based on the specific tumor biology of this group of patients, they carry a specific genetic change that makes them sensitive to immunotherapy. Lead author Professor Andrew Beggs, from the University of Birmingham's Institute of Cancer and Genomic Sciences, said: "We have found that approximately 50% of patients with CPM have a type of genetic change, called hypermutation. This means they may be sensitive to immunotherapy, as this type of treatment has good results in other patient groups with hypermutations. "We also found potential sensitivity to a drug called a Porcupine inhibitor, based on another genetic marker identified in these patients. "This is the first study of its kind in the world for patients with CPM, and our results have shown this could provide a potentially curative option for patients given the responses we have seen to immunotherapy in other cancers." Researchers will now look to set up an international clinical trial to examine the use of immunotherapy for patients with CPM.
Reptilian root canal: Study reveals infection in jaw of ancient fossil (DOI: 10.1007/s00114-011-0792-1)

A reptile that lived 275 million years ago in what is now Oklahoma is giving paleontologists a glimpse of the oldest known toothache. Led by Professor Robert Reisz, the chair of the Department of Biology at the University of Toronto Mississauga, scientists found evidence of bone damage due to oral infection in Paleozoic reptiles as they adapted to living on land. Their findings, published online in the journal Naturwissenschaften – The Nature of Science, predate the previous record for oral and dental disease in a terrestrial vertebrate by nearly 200 million years. "Not only does this fossil extend our understanding of dental disease, it reveals the advantages and disadvantages that certain creatures faced as their teeth evolved to feed on both meat and plants," says Reisz. "In this case, as with humans, it may have increased their susceptibility to oral infections." The researchers investigated the jaws of several well-preserved specimens of Labidosaurus hamatus, a 275-million-year-old terrestrial reptile from North America. One specimen stood out because of missing teeth and associated erosion of the jaw bone. With the aid of CT-scanning, Reisz and colleagues found evidence of a massive infection. This resulted in the loss of several teeth, as well as bone destruction in the jaw in the form of an abscess and internal loss of bone tissue. As the ancestors of advanced reptiles adapted to life on land, many evolved dental and cranial specializations to feed more efficiently on other animals and to incorporate high-fiber plant leaves and stems into their diet. The primitive dental pattern, in which teeth were loosely attached to the jaws and continuously replaced, changed in some animals: teeth became strongly attached to the jaw, with little or no tooth replacement. This was clearly advantageous to some early reptiles, allowing them to chew their food and thus improve nutrient absorption. The abundance and global distribution of Labidosaurus and its kin suggest that it was an evolutionary success. However, Reisz and his colleagues suggest that as this reptile lost the ability to replace teeth, the likelihood of infections of the jaw, resulting from damage to the teeth, increased substantially. This is because prolonged exposure of the dental pulp cavity of heavily worn or damaged teeth to oral bacteria was much greater than in other animals that quickly replaced their teeth. Reisz notes that human susceptibility to oral infection has some parallels to that of ancient reptiles that evolved to eat a diet incorporating plants in addition to meat. "Our findings suggest that our own human system of having just two sets of teeth, baby and permanent, although of obvious advantage because of its ability to chew and process many different types of food, is more susceptible to infection than that of our distant ancestors that had a continuous cycle of tooth replacement."

Paleontologists have discovered evidence of the oldest known toothache in a 275-million-year-old reptile, Labidosaurus hamatus, found in what is now Oklahoma. The fossil, analyzed by Professor Robert Reisz and his team, shows signs of bone damage due to oral infection, predating the previous record for oral and dental disease in a terrestrial vertebrate by nearly 200 million years.
The researchers believe that as the reptile adapted to living on land and evolved to eat both meat and plants, its teeth became more susceptible to infection, leading to the loss of several teeth and bone destruction in the jaw. This finding has parallels with human oral health, suggesting that our own system of having two sets of teeth, while advantageous for food processing, may be more prone to infection than the continuous cycle of tooth replacement found in some ancient reptiles.

Abstract
We report on dental and mandibular pathology in Labidosaurus hamatus, a 275-million-year-old terrestrial reptile from North America, and associate it with bacterial infection in an organism that is characterized by reduced tooth replacement. Analysis of the surface and internal mandibular structure using mechanical and CT-scanning techniques permits the reconstruction of events that led to the pathology and the possible death of the individual. The infection probably occurred as a result of prolonged exposure of the dental pulp cavity to oral bacteria, and this exposure was caused by injury to the tooth in an animal that is characterized by reduced tooth replacement cycles. In these early reptiles, the reduction in tooth replacement is an evolutionary innovation associated with strong implantation and increased oral processing. The dental abscess observed in L. hamatus, the oldest known infection in a terrestrial vertebrate, provides clear evidence of the ancient association between terrestrial vertebrates and their oral bacteria.

Introduction
The rich fossil record of amniotes (extant reptiles, birds, mammals, and their extinct relatives) extends over the last 315 million years and spans three eras (Reisz 1997). Whereas Mesozoic dinosaurs and Cenozoic mammals often show evidence of pathology (Lucas and Schoch 1987; Rothschild 1997; Tanke and Rothschild 2002; Witzmann et al. 2008), including bite marks, healed scars, infections, and tumors, such pathologies are poorly documented in Paleozoic amniotes (Reisz 1980; Johnson 1988; Reisz and Tsuji 2006; Huttenlocker et al. 2010), the first vertebrates to diversify extensively on land. The pathology reported here was discovered in the anterior part of the lower jaw (Fig. 1) in the largest and presumably oldest known individual of Labidosaurus hamatus, a member of the late Paleozoic group Captorhinidae (Modesto et al. 2007). Captorhinids were the first reptiles to diversify rapidly and disperse globally during the Paleozoic (Müller et al. 2007). They range in size from 25 cm in total length in late Carboniferous (Müller and Reisz 2005) and Early Permian (Heaton and Reisz 1980) forms to total lengths of up to 2.5 m in some of the Middle and Late Permian species (Dodick and Modesto 1995; O'Keefe et al. 2005). During the Early Permian, members of this clade were the most commonly occurring reptiles in the fossil record.

Fig. 1 Evidence of dental and mandibular pathology in L. hamatus, a basal reptile from the Lower Permian of Oklahoma. a, Skull reconstruction in right lateral view, modified from Ref. 4. Shaded area represents the region of the lower jaw shown in b and c. b, CMNH 76876, a right hemimandible in lateral, occlusal, and medial views. c, Longitudinal CT scans of the mandible shown in b, illustrating the internal changes that occurred in the anterior region of the jaw as a consequence of the infection. Only one (t2) of the three anterior teeth was functional at the time this individual died. Remnants of the first (rt1) and third (rt3) teeth are visible in the CT scan and were encapsulated into the mandible by dentary bone, probably after they were broken. Tooth sockets at positions 1 (tp1) and 3 (tp3) have been filled with bone. The direction of infection extends posteriorly from the first tooth position to the fourth open tooth position (ots4) and to the lingual and labial abscesses. It is at the level where the pulp cavity of the teeth would have been in the living organism. Scale bar = 10 mm

The more derived captorhinids evolved dental and cranial specializations as part of their adaptation to omnivory and high-fiber herbivory (Reisz and Sues 2000). In particular, they modified their dentition by attaching the teeth very strongly to the jaws through ankylosis and by dramatically changing the pattern of tooth replacement. The normal pattern of tooth replacement seen in most other Paleozoic tetrapods is characterized by teeth that are relatively loosely attached to the jaw bones and by continuous waves of new teeth erupting at specific tooth positions or sockets, with older teeth being partly resorbed and then shed as the new teeth erupt in the same socket (polyphyodonty). This pattern of tooth replacement is also present in extant tetrapods, including amphibians and most squamates (Edmund 1960). Thus, several teeth in any jaw in certain extant and fossil tetrapods can always be seen in the process of being replaced, with two teeth present in a single tooth position: the crown of a partially resorbed older tooth from an older wave of replacement, and another small tooth from the next wave of replacement growing at the base and slightly lingual to the older tooth. With continued resorption, the tooth of the previous wave of replacement is eventually shed and the younger tooth grows into full function in that tooth position (Edmund 1960). However, in the clade that includes captorhinids like Captorhinus, Labidosaurus, and Moradisaurinae (Fig. 2), the change in the pattern of dental development resulted in a dramatic decrease in tooth replacement waves, with older teeth being removed only occasionally, and by erosion, while new teeth did not erupt in the same tooth position as the older teeth. This highly modified pattern can best be seen in Captorhinus aguti, a species that developed multiple tooth rows (Bolt and DeMar 1975). The development of multiple tooth rows occurred by the eruption of a new series of teeth lingual to the older tooth row, with the wave of eruption extending mesially along the jaw. The older tooth row was not replaced; instead, an additional row was added. Only the oldest tooth from an older series appears to be occasionally replaced, and only when it appears to be in the way of the new wave (de Ricqles and Bolt 1983).

Fig. 2 Phylogeny of Captorhinidae modified from Müller et al. (2006) and Modesto et al. (2007). Previous studies (Bolt and DeMar 1975; de Ricqles and Bolt 1983; Modesto 1996) show that reduced cycles of tooth replacement (RTR) evolved in the ancestor of Captorhinus, moradisaurines (Labidosaurikos, Moradisaurus, Rothianiscus), and Labidosaurus. Skull reconstructions of Romeria texana, C. aguti, Labidosaurikos meachami, and L. hamatus from Heaton (1979), this study, Dodick and Modesto (1995), and Modesto et al. (2007).
The overall result is a dramatic reduction in all derived captorhinids in the replacement of old teeth with new ones. This can be seen even in the single-tooth-rowed forms like Labidosaurus and Captorhinus magnus, where there is usually no gap in the tooth row and rarely any evidence of a tooth in the process of being replaced, as seen in the more basal members of the clade (Modesto 1996). The deep implantation and strong attachment (ankylosis) of the teeth into the jaw were clearly advantageous in these derived captorhinids. In addition, the reduction and changes in tooth replacement also allowed for the development of multiple-tooth-rowed forms through the addition of rows of teeth (Reisz and Sues 2000), a design that is ideally suited for increased oral processing in omnivorous and herbivorous animals like C. aguti and moradisaurine captorhinids. Careful preparation of several exquisitely preserved specimens during a thorough, detailed description of the cranial anatomy of the captorhinid reptile L. hamatus (Modesto et al. 2007) revealed a remarkable pathology in one jaw. Since several complete skulls were prepared as part of that analysis, we are confident in our interpretation that the unusual features of this specimen can be clearly attributed to modifications and damage that occurred during the lifetime of the individual, rather than to postmortem, taphonomic, or preparatory effects. We employed traditional paleontological techniques and modern computerized tomographic scanning imagery to examine dental pathology in the Lower Permian captorhinid L. hamatus.

Methods
The study specimen is CMNH (Carnegie Museum of Natural History, Pittsburgh, Pennsylvania) specimen 76876, an isolated, partial right hemimandible from the Lower Permian "Labidosaurus pocket" locality near Coffee Creek, Baylor County, TX (Modesto et al. 2007). CMNH 76876 was prepared manually using pneumatic airscribe equipment and pin vises. The specimen was then CT scanned using a Philips MX 8000 QuadCT scanner at Thunder Bay Regional Health Sciences Centre, ON, at 800-μm slice thickness, rendering 24 longitudinal, 44 coronal, and 359 transverse slices.

Results
Examination of CMNH 76876 shows that the teeth in the first and third positions were clearly damaged but not replaced in the normal reptilian fashion, in which new teeth emerge from the lingual side of each empty socket. Instead, the tooth sockets were plugged with bone, with the result that fragments of the roots became encapsulated (Fig. 1c), an unusual feature that could only occur while the organism was alive. Farther distally, three open tooth sockets were carefully prepared; they show partly damaged interdental walls and strongly damaged lingual and labial walls in an otherwise perfectly preserved region of the mandible. Here again we were able to determine that the damage developed during the lifetime of the organism, but in this case the trabecular bone exposed in the enlarged tooth sockets and on the damaged areas around them indicates that these were caused by infection. Similarly, the lateral side of the mandible shows bone destruction in the form of a deep groove that runs posteroventrally from the tooth-bearing jaw margin at the level of the damaged interdental wall between tooth sockets 5 and 6 and extends deeply below the cortical layers into the trabecular part of the bone. An internal line, visible in CT scans (Fig. 1c), extends from tooth position 1 to 4, represents internal loss of bone through infection directly beneath the tooth row, and clearly demonstrates the direction of infection extending posteriorly from the first tooth position.

Discussion
Our extensive knowledge of the osteology and patterns of dental replacement in captorhinids, developed over several decades of study of these ancient reptiles, allows us to reconstruct the sequence of events that occurred in this individual. First, there was an initial loss of anterior mandibular teeth, possibly from a trauma, followed by a relatively slow, bony encapsulation that covered the open pulp cavity of the damaged tooth, trapping oral bacteria inside the jaw. The surrounding tissues became involved in the inflammatory reaction through the spread of pyogenic organisms, the acute localized periapical abscess slowly transforming into chronic osteomyelitis (White and Pharoah 2000). The inflammatory reaction extended posteriorly to the level of tooth positions 4–7. There, the osteomyelitis produced a radiolucent area, and quite possibly bony sequestra, resulting in fistula formation that allowed the pus to drain extraorally. As a consequence of the infection, teeth 4–6 (but not the tooth in position 7) were prematurely exfoliated, and the bone of the jaw was irreversibly damaged by osteomyelitis. This interpretation is based on comparisons of the patterns seen in this specimen with those of extant organisms. It is not possible to determine whether this infection caused the death of the individual, but it may have been a major contributing factor, because it appears to have been an active pathology at the time of death and, in some extant lizards, oral osteomyelitis poses a serious health threat (Mehler and Bennett 2003). The dental abscess identified here in the Early Permian L. hamatus predates the previous record for dental pathology in a terrestrial vertebrate, reported for Late Cretaceous hadrosaurid dinosaurs (Moodie 1930), by nearly 200 million years.

The presence of dental pathology in a reptile that has greatly reduced its tooth replacement pattern is particularly interesting. Among Paleozoic terrestrial vertebrates, lifelong cycles of tooth replacement represent the normal, primitive condition (Edmund 1960). This pattern extends to early amniotes, organisms that include the distant ancestors of most higher vertebrates such as extant mammals, birds, and reptiles, as well as dinosaurs and marine and flying reptiles. This ancient, primitive tooth replacement pattern was modified in various groups either by greatly reducing or eliminating replacement cycles (mammals and some reptiles, like the tuatara, respectively) or by disposing of dentition entirely (turtles and most birds). This evolutionary innovation also occurred within Captorhinidae, the oldest known such example in the fossil record of terrestrial vertebrates. Our knowledge of this group of ancient reptiles, one of the best-known clades of early terrestrial vertebrates, allows us to place this innovation within a broader evolutionary context. The generally accepted phylogenetic relationships among Captorhinidae (Fig. 2) indicate that the reduction in tooth replacement cycles occurred within this family. Early basal members of the clade are small insectivorous and carnivorous predators and have the normal pattern of continual tooth replacement (Modesto 1996; Müller et al. 2006, 2007), whereas the more derived omnivorous and herbivorous members (Sues and Reisz 1998) of the clade have modified and reduced the replacement cycles as part of an evolutionary strategy of developing deeply implanted teeth that are strongly ankylosed to the mandibles (Dodick and Modesto 1995; Jalil and Dutuit 1996). The subsequent development of multiple tooth rows appears to have evolved at least twice within this group, independently of each other (Dodick and Modesto 1995). Clearly, the multiple tooth rows in the upper and lower jaws occluding against each other created a system of oral processing superior to that employed by other organisms that used single rows of teeth for occlusion and oral processing (Sues and Reisz 1998; Reisz and Sues 2000; Reisz 2008). Interestingly, an independently evolved reduction in cycles of tooth replacement, together with dental occlusion for oral processing, occurred in synapsids, in the line towards mammals (Rybczynski and Reisz 2001). However, the reduction in synapsids appears to be coupled not only with herbivory but also with the evolution of precise dental occlusion in small carnivorous and insectivorous forms, with deeply implanted teeth and deep, multiple roots (Reisz and Sues 2000).

The obvious success of captorhinids, the first reptiles to diversify extensively and expand globally, suggests that the deep implantation and strong attachment (ankylosis) of the teeth into the jaw probably represented a significant evolutionary advantage. The reduction in tooth replacement also allowed for the evolution of multiple-tooth-rowed forms through the addition of rows of teeth without any replacement (Bolt and DeMar 1975), the first such occurrence in terrestrial vertebrates. However, if dental damage occurred in large, adult individuals, there was no readily available mechanism to replace the tooth, as there would have been in the great majority of other Paleozoic amniotes that had continuous replacement cycles. Thus, the opportunity for mandibular infection from prolonged exposure to oral bacteria was much greater in this reptile than in other Paleozoic amniotes. This allows us to speculate that our own human system of partial diphyodonty, although of obvious advantage because of its precise dental occlusion and extensive oral processing, is more susceptible to infection than that of our distant ancestors that had a continuous cycle of tooth replacement. Finally, the discovery of dental and mandibular infection from bacteria in a 275-million-year-old reptile indicates that interactions between terrestrial amniotes and their microbiota have a very extended history, a feature of vertebrate evolution that has begun to attract the attention of the broad scientific and medical community relatively recently (Ley et al. 2006; Dethlefsen et al. 2006, 2007).

Citation: Reisz R. R. et al. (2011). Osteomyelitis in a Paleozoic reptile: ancient evidence for bacterial infection and its evolutionary significance. Naturwissenschaften – The Nature of Science. DOI: 10.1007/s00114-011-0792-1. Paper: http://dx.doi.org/10.1007/s00114-011-0792-1. News: https://phys.org/news/2011-04-pain-evolution-big-toothache-reptiles.html
Reducing pesticide use with nanoparticles (DOI: 10.1038/s41565-020-00812-0)

Researchers at the Adolphe Merkle Institute and the Department of Biology at the University of Fribourg have discovered how certain silica nanoparticles could act as a traceless, degradable, and highly efficient treatment against some plant pathogens. One of the biggest challenges facing agriculture today is the extensive use of fertilizers and pesticides. With an increasing number of products banned or considered dangerous for human and animal health, the need for substitutes is acute. One approach is to stimulate plants' own immune response to pathogen attacks. Silicic acid, which naturally occurs in soil, is known to provoke such responses in plants, and amorphous silica nanoparticles can release this substance in small amounts. These nanoparticles, which are also naturally present in many food crops such as cereals, are more common than most people think. They are part of food-grade silica (SiO2), otherwise known as E551 on labels and packaging, which has been used for decades in a variety of products such as table salt, pills, or protein powders to avoid clumping.

Increased resistance
With this in mind, the Fribourg-based researchers aimed to create an environmentally safe nano-agrochemical for the targeted delivery of silicic acid and to stimulate plant defense. They synthesized silica nanoparticles with similar properties to those found in plants. To test their efficiency, they applied the nanoparticles to Arabidopsis thaliana (thale cress), a widely used plant model, infected with the bacterial pest Pseudomonas syringae, another model organism. The results showed that their nanoparticles can boost resistance against the bacteria in a dose-dependent manner by stimulating the plant's defense hormone, salicylic acid (a close chemical relative of the active ingredient in aspirin). The researchers also investigated the interactions of the nanoparticles with plant leaves. They were able to show that nanoparticle uptake and action occurred exclusively through the leaf pores (stomata) that allow the plants to breathe. The nanoparticles did not distribute further in the plants, and the particles degrade without leaving a trace in the presence of water, an important consideration for environmental and food safety. Compared to free silicic acid, which is already used in crop protection, the silica nanoparticles caused less stress to the plants and to other soil microorganisms thanks to the slow release of the silicic acid. The study, published in the top-ranking journal Nature Nanotechnology, shows that silica nanoparticles could serve as an inexpensive, highly efficient, safe, and sustainable alternative for plant disease protection. According to the researchers, future work could extend the investigations to a broader spectrum of plant pests and pathogens, such as other bacteria, insects, or viruses. They emphasize, though, that before any broad application of nanoparticles as nano-biostimulants and -fertilizers, a thorough analysis is needed to assess the potential long-term fate of silica nanoparticles in the environment.

Researchers at the University of Fribourg have discovered that certain silica nanoparticles can act as a traceless, degradable, and highly efficient treatment against plant pathogens. The nanoparticles, which are naturally present in food crops, can release silicic acid, a substance that stimulates plants' own immune response to pathogen attacks.
The researchers synthesized silica nanoparticles with similar properties to those found in plants and tested their efficiency on Arabidopsis thaliana infected with the bacterial pest Pseudomonas syringae. The results showed that the nanoparticles can boost resistance against the bacteria in a dose-dependent manner by stimulating the plant's defense hormone, salicylic acid. The nanoparticles degrade without leaving a trace in the presence of water, making them a safe and sustainable alternative for plant disease protection. The study suggests that silica nanoparticles could serve as an inexpensive, highly efficient, and safe treatment against plant pathogens, and future research could extend the investigations to a broader spectrum of plant pathogens.

Abstract
In plants, pathogen attack can induce an immune response known as systemic acquired resistance that protects against a broad spectrum of pathogens. In the search for safer agrochemicals, silica nanoparticles (SiO2 NPs; food additive E551) have recently been proposed as a new tool. However, initial results are controversial, and the molecular mechanisms of SiO2 NP-induced disease resistance are unknown. Here we show that SiO2 NPs, as well as soluble Si(OH)4, can induce systemic acquired resistance in a dose-dependent manner, which involves the defence hormone salicylic acid. Nanoparticle uptake and action occurred exclusively through the stomata (leaf pores facilitating gas exchange) and involved extracellular adsorption in the air spaces in the spongy mesophyll of the leaf. In contrast to the treatment with SiO2 NPs, the induction of systemic acquired resistance by Si(OH)4 was problematic, since high Si(OH)4 concentrations caused stress. We conclude that SiO2 NPs have the potential to serve as an inexpensive, highly efficient, safe and sustainable alternative for plant disease protection.

Main
Nanoagrochemicals are a promising tool to improve crop yield and thus global food security 1. Silica nanoparticles (SiO2 NPs) have been proposed for the controlled nanodelivery of silicon (Si) and other active ingredients to plants, but they have never been systematically tested for this purpose. Si from orthosilicic acid (Si(OH)4, also known as monosilicic acid), the hydrolytic degradation product of SiO2 NPs, is the only known form of Si bioavailable to plants, and it is ubiquitous in soil pore water 2,3,4. Si(OH)4 can promote plant growth and plant resistance against biotic and abiotic stresses 3,5, thereby protecting plants against pathogen attacks or agricultural damage related to severe climate conditions 3,6,7. The uptake and movement of SiO2 NPs, as well as of other engineered nanomaterials, in plants have been intensively studied in the past decade 7,8,9,10. However, it is uncertain how the nanoparticles interact with leaves at the subcellular level. Direct evidence by nanometre-resolution imaging for the entrance of intact nanoparticles into leaves, or for the intercellular movement of SiO2 NPs within leaves, is mostly missing 10. It is also not known whether SiO2 NPs can induce resistance in plants, whether their performance differs from that of dissolved Si species, and which molecular pathways they may induce. To fend off potential pathogens, plants have evolved disease resistance mechanisms that share mechanistic principles with the innate immunity of animals 11.
An especially interesting form of plant disease resistance is so-called induced resistance, in which the disease resistance of the plant can be enhanced by previous exposure to beneficial rhizosphere microorganisms, avirulent and virulent pathogens, or specific resistance-inducing chemical compounds 12,13,14. A hallmark of induced resistance is its activity against a broad spectrum of pathogens. While the induction of plant disease resistance using chemical compounds is relatively well understood 12, the benefit of using slow nano-enabled delivery systems for the same purpose has not been investigated via systematic experiments 1,7. A special form of induced resistance is systemic acquired resistance (SAR), which is characterized by the spread of locally induced disease resistance to the whole plant 15,16. SAR is induced in all plant parts after locally challenging the plant with a pathogen or by the local application of so-called resistance-inducing compounds. Both these treatments induce signal transduction pathways that lead to the production of signals moving to distant tissues 14. A key signalling compound that contributes to SAR is the plant hormone salicylic acid (SA), which is responsible for the activation of pathogenesis-related (PR) genes 16,17. Other factors include, for example, nitric oxide and reactive oxygen species 18,19. The fact that SAR can be activated by the application of resistance-inducing compounds 12,13 makes SAR an attractive alternative strategy for controlling crop pests without the need for irreversible genetic modifications or environmentally problematic pesticides. SAR-inducing compounds such as benzothiadiazole successfully enhance disease resistance, but also reduce crop yields 20,21. Interestingly, Si-based compounds also seem to have the capacity to induce disease resistance via a broad range of different and partially still unknown mechanisms, including the mechanical reinforcement of defensive structures of the plant architecture, most notably the cell wall 3,22, but also the activation of biochemical defences 3,23. For example, biochemically, root-applied Si led to broad-spectrum resistance against the powdery mildew pathogen by increasing the activity of defence-related enzymes in leaves 24. It is important to note that the protective effect of Si seems to have, in contrast to other biostimulants such as benzothiadiazole, no negative effects on the growth and yield of plants 3,25. All this makes Si an attractive candidate to strengthen plant stress tolerance. Initial studies found that SiO2 NPs may induce stress tolerance similarly to conventional Si products, but a clear mechanistic understanding of the underlying processes is still lacking 7,8,26,27. In this Article, we demonstrate the potential of SiO2 NPs to induce local and systemic disease resistance in the widely used model plant Arabidopsis thaliana against the bacterial pathogen Pseudomonas syringae. Silicic acid was assessed in parallel to disentangle the potential differences in the mode of action of dissolved Si species compared with SiO2 NPs.
We assessed the role of SA and of reactive-oxygen-species defence-related genes, established the therapeutic concentration range of SiO2 NPs needed to induce the desired beneficial effects in plants, compared the laboratory setup (infiltration of selected leaves) with the more realistic spray application, and visualized the nanoparticle-leaf interactions using transmission electron microscopy (TEM), with important implications for future strategies to apply nanoscale active ingredients for slow release in leaves.

SiO2 NPs and subcellular distribution within the leaf

The SiO2 NP suspensions used for dosing the plants (Fig. 1) were well dispersed, with a hydrodynamic particle size of 76.7 ± 0.8 nm (average ± standard deviation) and a polydispersity index of 0.07. The primary particle size, as determined by TEM, was 54 ± 7 nm (average ± standard deviation). The interaction of the nanoparticles with the plant was assessed by TEM (Fig. 2) 2 d after the application of SiO2 NPs. Preliminary experiments showed that at this time point, the SiO2 NP-exposed plants had already developed resistance. The ~50-70 nm size range of the nanoparticles allowed them to enter the leaf exclusively through the stomata and distribute within the large extracellular air spaces of the spongy mesophyll without penetrating any cell walls (Fig. 2 and Supplementary Fig. 1). The SiO2 NPs remained within the air spaces of the leaf during the 2 d between their application and the time point of TEM observation. At the same time, the size of the nanoparticles prevented (undesirable) nanoparticle uptake into the cytoplasm as well as cell-to-cell translocation through the plasmodesmata (Fig. 2b). This is in line with previous studies based on nanometre-resolution imaging of nanoparticles in plants, which suggest that the cutoff for root-shoot nanoparticle translocation is approximately <36 nm and that cell-to-cell plasmodesmata transport is limited to <15-40 nm (basal size exclusion limits of ~3-4 nm) [10]. Compared with the fully closed stomata in the control plants (samples were kept in the dark for fixation), the nanoparticle-treated plants showed incompletely closed stomata, as nanoparticles were stuck between the guard cells (Fig. 2b).

Fig. 1: SiO2 NPs under investigation. (a) TEM image of the particles. (b) Particle size distribution based on TEM image analysis. (c) DLS measurements of the SiO2 NPs; the hydrodynamic size is consistent with the primary particle size shown in (a) and (b). PDI, polydispersity index. Averages ± standard deviations; for the DLS measurements, number of measurements N = 10.

Fig. 2: TEM of SiO2 NP distribution and physiological effects in Arabidopsis leaves. Red arrows and dots, nanoparticles. Comparison between the spray application used in the field and for local defence assays and the infiltration application used in laboratory studies. Images were obtained when the plants had already developed resistance, 2 d after exposure to SiO2 NPs. (a) Control leaves treated only with the buffer solution. (b) TEM overview image and zoomed-in views of the stoma and the cell-air space interface. False colours: red, cell wall (apoplast); green, cytoplasm (symplast); blue, spaces filled with air. The SiO2 NP-sprayed leaf at higher resolution shows that the stomata no longer close tightly owing to nanoparticle uptake and clogging.
Nanoparticles entered through the stomata into the air spaces of the leaf, and they were also found extracellularly adsorbed on the outer edge of the cell walls in the air gaps of the spongy mesophyll; they were absent from the cytoplasm (intracellular space). A higher-resolution TEM image is shown in Supplementary Fig. 1.

Exogenous application of SiO2 NPs confers SAR

The local defence responses to virulent P. syringae of Arabidopsis sprayed with SiO2 NPs or a control treatment were quantified via bacterial growth on leaves (Fig. 3a). Because the virulent P. syringae lacks the avrRpt2 gene needed by the Resistance to Pseudomonas syringae protein 2 (RPS2) resistance gene in Arabidopsis to induce a strong plant defence against P. syringae [28, 29], a severe infection would be expected. However, a pronounced infection occurred only in the control treatment. Plants sprayed with SiO2 NPs showed an eightfold improvement in basal resistance, that is, an eightfold reduction in bacterial numbers, compared with the 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES)-buffer-treated control plants (Fig. 3a), demonstrating that the SiO2 NPs induced local defence in the plant within 24 h (the nanoparticles were applied 24 h before inoculation with the virulent P. syringae).

Fig. 3: Enhanced local and systemic disease resistance to P. syringae in wild-type Col-0 Arabidopsis induced by SiO2 NPs or Si(OH)4. The bacteria in the leaves were quantified 0 and 3 dpi. (a) Growth of virulent P. syringae in leaves; the plants were sprayed with different treatments, and virulent P. syringae was inoculated 24 h later. (b) SAR in distal leaves; plants were locally infiltrated with different treatments and, 48 h later, virulent P. syringae was inoculated on untreated systemic leaves. (c) SAR in distal leaves; repetition of the experimental setup in (b) with an additional Si(OH)4 treatment. (d) No effect of SiO2 NPs and Si(OH)4 on the in vitro growth of virulent P. syringae bacteria in the absence of the plant. (e) Phenotype of Arabidopsis plants pretreated with the HEPES buffer (control), SiO2 NPs or Si(OH)4 (1,000 mg SiO2/l each); note that the yellow leaves of the plant exposed to Si(OH)4 coincide with the upregulated expression of the oxidative stress marker gene shown in Fig. 4c. In (a)-(d), all the experiments were performed twice with comparable results. Bars and whiskers are averages and standard deviations; N = 3; one-way analysis of variance (ANOVA); post hoc least significant difference; P < 0.01.

The systemic responses of wild-type Arabidopsis plants to SiO2 NPs and dissolved Si species are reflected in the inhibited bacterial growth shown in Fig. 3b,c. The positive control showed that plants previously infiltrated with the avirulent P. syringae, which is known to induce SAR, contained, as expected, tenfold less virulent P. syringae than magnesium chloride (MgCl2)- or HEPES-preinfiltrated plants (Fig. 3b,c). Remarkably, treating local leaves with SiO2 NPs led to systemic protection against virulent P. syringae comparable to that of the positive avirulent P. syringae control (Fig. 3a), which equals >90% bacterial inhibition. It is highly unlikely that a local response to SiO2 NPs or Si(OH)4 in the distal tissue caused this resistance, because of the observed distribution and very slow dissolution of the SiO2 NPs (Fig. 2 and Supplementary Fig. 1) and the passive transport [30] and high reactivity of Si(OH)4. This shows that treating Arabidopsis with SiO2 NPs induced local and systemic resistance to P. syringae. It is well known that Si(OH)4 improves plant defences against different plant pathogens such as fungi, bacteria and viruses [5, 7]. We therefore also tested SAR in response to Si(OH)4 (Fig. 3c) and found that treatment with Si(OH)4 was able to induce SAR. These results suggest that Si(OH)4 released from SiO2 NPs is at least partially responsible for the SAR-inducing ability of SiO2 NPs and that the SiO2 NPs can act as a slow-release source of Si(OH)4. Measuring the exact amount of free Si(OH)4 directly in planta is challenging owing to the low concentrations and the fragile equilibrium between dissolved Si(OH)4, Si oligomers and solid SiO2 species [2, 4, 31] (Supplementary Information, 'Details on Si(OH)4 analytics'). We therefore resorted to direct TEM imaging of the nanoparticles in the plants; at high resolution, abundant intact SiO2 NPs were observed in the stomata 2 d after the SiO2 NP treatments (Fig. 2). This demonstrates that the plants could not degrade the nanoparticles by the time point of inoculation with virulent P. syringae (in all the assays, the nanoparticles were applied at least 24 h before inoculation). The slow nanoparticle dissolution is in line with the slow dissolution kinetics of the SiO2 NPs measured previously in water (half-life of ~66 d at pH 7) [32]. To test whether SiO2 NPs and Si(OH)4 have a direct toxic effect on bacterial growth, virulent P. syringae was cultivated in vitro in the presence or absence of SiO2 NPs or Si(OH)4 at the lowest fully effective dose of SiO2 NPs, 100 mg/l. At these concentrations, which induced strong defence in plants, neither SiO2 NPs nor Si(OH)4 alone harmed the growth of the virulent P. syringae bacteria (Fig. 3d), demonstrating that SiO2 NPs induce resistance by activating the defence responses of the plant and not by directly inhibiting bacterial growth.

Dose dependence of SAR response

SAR was further tested in response to different concentrations of SiO2 NPs or Si(OH)4; for additional validation, a second bacterial growth quantification method [33] based on bacterial DNA was used (Fig. 4a,b and Supplementary Table 1). Treatment with SiO2 NPs at a concentration of 25 mg SiO2/l already resulted in a partial (29%) reduction of bacterial growth in systemic leaves, and treatment with 100 mg SiO2/l resulted in maximum protection (>90%) compared with the positive control plants preinfiltrated with avirulent P. syringae (Fig. 4a). As the concentration series in Fig. 4a shows, higher concentrations of SiO2 NPs, exceeding 1,600 mg SiO2/l, led to increased bacterial infection and were thus less effective in activating SAR. Pretreatment with Si(OH)4 at 5 mg SiO2/l (concentrations normalized to mg SiO2/l for the sake of comparability) led to an 81% reduction in bacterial numbers compared with the positive control. Maximum protection, with a reduction similar to that of the control plants preinfiltrated with avirulent P. syringae, was achieved at concentrations between 20 and 320 mg SiO2/l. A higher concentration of 640 mg SiO2/l was less effective, and a concentration of 2,560 mg SiO2/l was ineffective in inducing SAR, demonstrating a detrimental effect of higher Si(OH)4 concentrations on SAR induction.

Fig. 4: SiO2 NPs confer SAR in a dose-dependent manner. Distal leaves of wild-type Col-0 Arabidopsis treated with the control, SiO2 NPs or Si(OH)4. (a) SAR in plants locally infiltrated with different treatments; after 48 h, virulent P. syringae was inoculated on untreated systemic leaves, and the bacteria in the leaves were quantified 0 and 3 dpi. (b) qPCR transcript levels of the oprF gene from virulent P. syringae, using DNA templates extracted from the inoculated leaves. (c) RT-qPCR transcript levels of the oxidative stress marker gene AtHSP17.4C1 in response to different treatments; plants were locally infiltrated with different treatments and leaves were sampled 48 h after treatment. Reference gene, At4g26410 (expG). Bars and whiskers are averages and standard deviations; N = 3; one-way ANOVA; post hoc least significant difference; P < 0.05. All the experiments in (a)-(c) were performed twice with comparable results.

The data in Fig. 4 served to establish a dose-response relationship between SAR and the SiO2 NP concentration (Fig. 5a). Using a standard log-logistic dose-response model, the dynamic range and the effective concentration at 50% bacterial inhibition (EC50) were determined; the EC50 was 0.4 ± 0.04 mM Si (average ± standard deviation) for SiO2 NPs (that is, 24 mg SiO2/l; Supplementary Fig. 2 shows the residual analysis and Supplementary Table 2 lists the fitting parameters), with a dynamic range of 25-100 mg SiO2/l. For spraying, the EC50 may be similar to that of the injected SiO2 NPs, as both the local (sprayed) and the systemic (injected) assays at 100 mg SiO2/l resulted in disease resistance (Fig. 3 and Fig. 6a,b).

Fig. 5: Dynamic range for SAR induced in distal leaves by SiO2 NPs in A. thaliana, and model summarizing the observed plant-defence-enhancing actions of SiO2 NPs and Si(OH)4. (a) Data from Fig. 4a: SiO2 NP-triggered dose-dependent bacterial inhibition 3 d after infection of wild-type A. thaliana with virulent P. syringae. The EC50 value was 0.40 ± 0.04 mM Si (average ± standard deviation) for SiO2 NPs (that is, 24 mg SiO2/l). Above the dynamic range, the bacterial infection can increase again (Fig. 4a). Six data points at 0 mM Si are not shown owing to the logarithmic axis, but they are apparent in the detailed residual analysis in Supplementary Fig. 2 and Supplementary Table 2. C_Si, Si concentration in mM. (b) SiO2 NPs act by (1) slowly releasing Si(OH)4 into cells, triggering SA and thus local defence and SAR, and (2) clogging stomata, triggering SA and subsequent defences; the absence of intracellular nanoparticles was confirmed by electron microscopy (Fig. 2 and Supplementary Fig. 1). (c) Si(OH)4 instantly diffuses into cells, triggering SA and subsequent local defence and SAR; however, the instant uptake causes overdose, stress and compromised defences. Both mechanisms are shown after treatment with the same amount of SiO2 equivalents (1,000 mg SiO2/l). SA, plant hormone regulating SAR and PR-1/5 gene expression; PR-1/5, genes encoding PR proteins 1 and 5; HSP17.4C1, heat shock protein and oxidative stress marker gene.

Fig. 6: SiO2 NPs induce disease resistance via an SA-dependent pathway. Experiments in Arabidopsis wild-type Col-0 and sid2. The bacteria in the leaves were quantified 0 and 3 dpi. (a) A. thaliana wild-type Col-0 and sid2 were locally infiltrated with different treatments; 24 h later, virulent P. syringae was inoculated. (b) SAR in the distal leaves of wild-type Col-0 and mutant sid2.
Plants were locally infiltrated with different treatments; 48 h later, virulent P. syringae was inoculated. (c, d) RT-qPCR analysis of the gene expression of the SA-regulated genes AtPR-1 (c) and AtPR-5 (d) in response to different local treatments of wild-type Arabidopsis; leaves were sampled 48 h after treatment. Reference gene, At4g26410 (expG). Bars and whiskers are averages and standard deviations; N = 3; one-way ANOVA; post hoc least significant difference; P < 0.02. All the experiments in (a)-(d) were performed twice with comparable results.

The results based on counting bacterial colonies were confirmed by estimating the bacterial biomass via quantitative PCR (qPCR) analysis of the bacterial outer membrane protein gene oprF (Fig. 4b). The bacterial DNA levels were in good agreement with the bacterial colony counts shown in Fig. 4a, in line with previous research comparing the two techniques [33]. In contrast to SiO2 NPs, higher concentrations of Si(OH)4 adversely affected the phenotype of the treated plants (Fig. 3e). At a concentration of 1,000 mg SiO2/l, the leaves of the plants treated with Si(OH)4 showed signs of chlorosis (yellowing), whereas the leaves of the plants treated with SiO2 NPs looked healthy (Fig. 3e). This different behaviour at higher concentrations prompted us to further investigate the negative effect of higher concentrations of SiO2 NPs and Si(OH)4. The expression level of the heat shock protein gene AtHSP17.4C1, a molecular marker for oxidative stress [34], was analysed by qPCR with reverse transcription (RT-qPCR). The HSP17.4C1 transcript levels were determined 2 d after treatment with avirulent P. syringae, SiO2 NPs or Si(OH)4 (100 and 1,000 mg SiO2/l; Fig. 4c). Treatment with avirulent P. syringae caused a minor (2.7-fold) increase in AtHSP17.4C1 expression compared with the control. Similarly, treatment with SiO2 NPs led to a 1.6-fold (100 mg SiO2/l) and twofold (1,000 mg SiO2/l) increase in transcript abundance relative to the control treatment, which was not statistically significant. However, treatment with higher concentrations of Si(OH)4 caused stress, as the transcript levels of the oxidative stress marker gene HSP17.4C1 were induced ninefold at a concentration of 100 mg SiO2/l and 18-fold at 1,000 mg SiO2/l.

SiO2 NP-mediated SAR depends on SA

The plant hormone SA plays a core regulatory role in plant immunity [35]. We therefore tested the ability of SiO2 NPs to induce local disease resistance and SAR in an Arabidopsis mutant defective in SA biosynthesis (SA induction-deficient 2, sid2 [36]) to check whether SiO2 NPs confer SAR via an SA-dependent defence pathway. Notably, neither Si(OH)4 nor SiO2 NPs induced local disease resistance or SAR in sid2 mutant plants, while they induced basal disease resistance and SAR in wild-type plants (Fig. 6a,b), demonstrating that SA-dependent defence signalling is essential for Si(OH)4- and SiO2 NP-induced disease resistance. To further support this result, we next quantified the expression of the SA-responsive marker genes PR protein 1 (PR-1, gene AtPR-1) and PR-5 (gene AtPR-5) in wild-type plants (Fig. 6c,d). Similar to treatment with avirulent P. syringae, treatment with Si(OH)4 and SiO2 NPs resulted in up to 30-fold and 6-fold increases in the transcript abundance of AtPR-1 (Fig. 6c) and AtPR-5 (Fig. 6d), respectively, compared with the control treatments.
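The fold changes above were derived with the comparative cycle threshold method described in the Methods section. As a minimal, hedged illustration of that calculation, the Python sketch below normalises a target gene (such as AtPR-1) to the expG reference gene and expresses a treated sample relative to the control; the Cq values are hypothetical and chosen only to land near the order of magnitude of the reported AtPR-1 induction.

```python
def fold_change_ddct(cq_target_treated, cq_ref_treated,
                     cq_target_control, cq_ref_control):
    """Comparative cycle threshold (2^(-ddCt)) fold change.

    Normalises the target gene (e.g. AtPR-1) to a reference gene
    (e.g. expG) and expresses the treated sample relative to the
    control. Assumes amplification efficiencies close to 2 for both
    genes, as reported in the study.
    """
    dct_treated = cq_target_treated - cq_ref_treated
    dct_control = cq_target_control - cq_ref_control
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# Hypothetical Cq values for illustration only (not study data):
# a normalised Cq that is ~4.9 cycles lower than the control
# corresponds to roughly a 30-fold induction, the order of
# magnitude reported for AtPR-1.
print(fold_change_ddct(cq_target_treated=21.1, cq_ref_treated=23.5,
                       cq_target_control=26.0, cq_ref_control=23.5))
```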
Hence, both Si(OH)4 and SiO2 NPs activated SA-dependent defence reactions. Although SiO2 NPs triggered lower AtPR-1 and AtPR-5 expression levels than avirulent P. syringae-infiltrated and Si(OH)4-treated plants, the inducing effect was sufficient to confer SAR.

Implications for the mode of action of leaf-applied SiO2 NPs

The pathosystem of Arabidopsis and the hemibiotrophic bacterial pathogen P. syringae offers an ideal model to investigate the effect of SiO2 NPs and Si(OH)4 on plant defence. Our results (summarized in the model in Fig. 5b,c) show that the protective effect of SiO2 NPs and Si(OH)4 is based on their ability to induce basal resistance and SAR (Fig. 3a-c) and not on direct toxic effects, as neither SiO2 NPs nor Si(OH)4 inhibited bacterial growth (Fig. 3d). Our data are in line with initial results suggesting that Si(OH)4, and sometimes SiO2 NPs, can protect plants from different plant pathogens [7, 26, 27, 37]; nevertheless, here we show that the mechanism had no toxic effect on the pathogen but rather induced the defences of the plant. Both SiO2 NPs and Si(OH)4 induce SAR in a dose-dependent manner, leading to bacterial inhibition of >90% compared with the control plants treated only with the HEPES buffer or MgCl2. These results are consistent with previous results suggesting that SiO2 NPs and Si(OH)4 act in a dose-dependent manner in plants and animals [26, 27, 38]. However, instead of the previously proposed pesticidal action of SiO2 NPs, we show here that the nanoparticles caused an increase in plant defence. Our data suggest that the SiO2 NPs used in the present study can be successfully used to slowly release Si(OH)4 to the plant from within the spongy mesophyll (Fig. 2), in close direct interaction with the diffusion layer on the plant cell walls, and that this release is at least partially responsible for the SAR-inducing ability of SiO2 NPs. Water (vapour) secreted from the plant cell wall, or plant-induced dissolution of SiO2 NPs linked to increased secretory activity [10] (exudates), may have promoted the further dissolution to Si(OH)4. Based on the release rates of SiO2 NPs determined earlier under conditions optimized for dissolution in a continuously depleted ultrapure water system (half-life of ~66 d at pH 7) [32], a maximum of ~13% of the particles could have dissolved within 48 h of SiO2 NP exposure. Si-containing reaction byproducts of the nanoparticle synthesis were ruled out as playing a notable role in the induction of defence (Supplementary Information, 'Si reaction byproducts'). The maximum released Si(OH)4 from SiO2 NPs could therefore explain the bacterial inhibition; however, it cannot fully explain the lack of oxidative stress responses and the higher bacterial DNA levels for SiO2 NPs in the plants (Fig. 4b). Probably, the absence of peak Si(OH)4 concentrations resulted in lower Si(OH)4 toxicity for both bacteria and plants. Other effects, such as modulated evapotranspiration due to the blockage and incomplete closure of the stomata by the nanoparticles (Fig. 2), which can cause SA-related responses similar to drought stress [39], and the close interaction of the nanoparticles with cells in the spongy mesophyll, may also play an important role; this is in line with earlier research on stomata as ports of entry for pollutants and nanoparticles [40, 41]. The exact relative contribution of each effect remains to be elucidated in follow-up studies.
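For readers who want to reproduce the kind of dose-response fitting summarized in Fig. 5a, the sketch below fits a standard four-parameter log-logistic model, as named in the text, using scipy. The concentration-inhibition pairs are illustrative stand-ins, not the study's raw data; only the functional form and the EC50 read-out follow the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(c, bottom, top, ec50, slope):
    """Standard four-parameter log-logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (c / ec50) ** (-slope))

# Hypothetical Si concentrations (mM) and % bacterial inhibition,
# shaped to mimic the reported dynamic range (EC50 ~0.4 mM Si);
# these are illustrative values only.
c = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2])
inhib = np.array([3.0, 10.0, 25.0, 50.0, 75.0, 88.0, 93.0])

params, cov = curve_fit(log_logistic, c, inhib,
                        p0=[0.0, 100.0, 0.4, 1.0])
ec50, ec50_se = params[2], np.sqrt(np.diag(cov))[2]
print(f"EC50 = {ec50:.2f} +/- {ec50_se:.2f} mM Si")
```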
It is important to note that the cell walls in the mesophyll air spaces have very thin cuticular waxes, or lack them altogether [10]; in contrast to the external leaf surface, a direct interaction of the nanoparticles can therefore take place with the cell wall and thus with the apoplast transport system, including the xylem. Irrespective of the detailed mechanism, this is important for any nanoagrochemical application aiming at the slow release of active ingredients, because nanoparticles in the extracellular spongy mesophyll air spaces (Fig. 2 and Supplementary Fig. 1) can interact with the leaf for extended periods without being washed away by rain. High concentrations of Si(OH)4 caused chlorosis of the leaves, indicative of stress (Fig. 3e). Increased expression of the oxidative stress marker gene AtHSP17.4C1 [34] confirmed stress in the Si(OH)4 treatments at 100 and 1,000 mg SiO2/l, as the transcript levels of AtHSP17.4C1 were more strongly induced than by the avirulent P. syringae or SiO2 NP treatments (Fig. 4c). Together, these data show that Si(OH)4 was more toxic to plants than SiO2 NPs. Hence, the impaired SAR in plants treated with higher concentrations of Si(OH)4 (Fig. 4a) might be linked to enhanced oxidative stress, consistent with the fact that higher levels of nitric oxide and reactive oxygen species were shown to impair the induction of SAR [19, 23]. For SiO2 NPs, no substantial increase in the oxidative stress marker gene was found (Fig. 4c). Impaired SAR for SiO2 NPs occurred only at very high concentrations, in the gram per litre range (Fig. 4a), probably due to excess release of Si(OH)4 causing oxidative stress or to very intense clogging of the stomata (Fig. 2) that disrupted evapotranspiration. While the low polydispersity index measured by dynamic light scattering (DLS) (Fig. 1c) indicates well-dispersed SiO2 NP suspensions even at higher concentrations, heteroaggregation with mucilage in the stomata (upon contact with the leaf) and probably homoaggregation (at higher nanoparticle concentrations) appeared to promote the clogging of the stomata (Fig. 2a, red arrows). These results are in line with ref. [8], according to which SiO2 NP concentrations up to 1,000 mg SiO2/l were not phytotoxic despite the uptake of SiO2 NPs into the root system of A. thaliana. Our results are also consistent with initial studies [42, 43] that found better effects of SiO2 NPs on plant growth than conventional silica fertilizers. In conclusion, the application of SiO2 NPs can reduce the risk of overdosage. Our data demonstrate that SiO2 NP- and Si(OH)4-mediated SAR acts via activation of the SA-dependent defence pathway, a key component of basal disease resistance and SAR [44, 45]. Neither SiO2 NPs nor Si(OH)4 induced resistance in sid2, which has a defect in SA biosynthesis (Fig. 6a,b). The induction of resistance by SiO2 NPs was comparable to the effect of Si(OH)4 at intermediate concentrations, although the soluble fraction of Si(OH)4 in the nanoparticle treatment was far lower, as the particles dissolved only partially in the plant, if at all (Fig. 2), suggesting that SiO2 NPs can induce SA-dependent defence pathways as intact particles. Furthermore, the expression levels of two SA-responsive marker genes, AtPR-1 and AtPR-5, encoding PR-1 and PR-5, respectively, were induced in response to SiO2 NPs and Si(OH)4 (Fig. 6c,d). These results are in line with ref. [46], which reported that the exogenous application of Si(OH)4 induced SA biosynthesis in leaves exposed to the fungal pathogen Erysiphe cichoracearum. In addition, Si-primed tomato plants were protected against Ralstonia solanacearum via the upregulation of SA-controlled defence gene expression [47]. Although SiO2 NPs triggered lower AtPR-1 and AtPR-5 expression levels than plants infiltrated with avirulent P. syringae or treated with Si(OH)4, the achieved level of expression was sufficient to confer a full SAR response.

Conclusions

The present results show that low concentrations of SiO2 NPs efficiently protect the widely used model plant Arabidopsis from infection by the bacterial pathogen Pseudomonas, and they reveal the mode of action of SiO2 NPs compared with their dissolved counterpart, Si(OH)4. The protective effect of SiO2 NPs is mediated by the activation of SA-dependent plant immunity responses and is based partially on the slow release of Si(OH)4 from nanoparticles entering through the stomata and distributing within the spongy mesophyll, and probably partially on intact-nanoparticle-induced SA-dependent responses. Compared with direct Si(OH)4 application, SiO2 NPs proved to be safer for the plant. They did not cause phytotoxicity even at concentrations tenfold higher than the minimal dose needed for plant protection and therefore have a broader therapeutic range than Si(OH)4. The lowest fully effective dose (100 mg SiO2/l) is promising because it corresponds to an extrapolated field dose of only 3 kg SiO2/ha, more than a 1,000-fold material saving compared with solid bulk SiO2 treatments. This calculation assumes a typical 300 l/ha application (a conventional aqueous spray volume for pesticide application equipment [48]) and an uncertainty factor of 100 on the concentration: 100 mg SiO2/l sprayed at 300 l/ha delivers 30 g SiO2/ha, which the safety factor of 100 raises to 3 kg SiO2/ha. Contrary to previous assumptions about the ability of nanoparticles to penetrate the cuticle, SiO2 NP uptake was clearly restricted to the stomata and the extracellular spongy mesophyll, confirming our hypothesis that the leaf cuticle represents an impermeable barrier to nanoparticles [10], in line with earlier fundamental research [49]. The spongy mesophyll is an attractive target for the long-term deposition of slow-release nanoagrochemicals. Future research should extend the investigations to a broader spectrum of defence-related genes and other plant pathogens, and to the biomechanical quantification of the physical effects of nanoparticles that alter leaf permeability and may trigger SA-related responses. To further advance SiO2 NPs as nanobiostimulants and fertilizers, the long-term effects of SiO2 NPs on occupationally exposed agricultural workers and on non-target organisms, such as beneficial soil microorganisms or bees, must be thoroughly analysed before broad commercial application, as should be the case for every material or organism used in agriculture. The potential risks of nanoagrochemicals and possible strategies for risk mitigation have been thoroughly reviewed previously [1, 50, 51]. Amorphous SiO2 NPs have already been approved by the Food and Drug Administration, are generally regarded as safe, and are in use as dietary additives (E551) [52] in a broad range of foodstuffs such as table salt. The daily intake of nanoscale silica from food is estimated to be 1.8 mg/kg [53]. Our own initial experiments with Caenorhabditis elegans nematodes used as model non-target organisms (Supplementary Fig. 3)
have shown an ~36-fold lower ecotoxicity of SiO2 NPs compared with liquid Si(OH)4 preparations that have been in use for plant nutrition for decades. Thus, compared with currently used treatments, the present SiO2 NPs, alone or in combination with other active ingredients, promise to offer a cost-effective, consumer-safe strategy that is tracelessly degradable and a sustainable alternative for protecting plants against pathogens via the controlled induction of SAR, without the negative effects on yield or non-target organisms associated with the action of previously described plant biostimulants or pesticides.

Methods

Plant growth conditions

A. thaliana seeds were grown on Jiffy soil substrates (powered by Tref, Jiffy Products International). Two A. thaliana strains were grown: wild-type Columbia (Col-0) plants, which carry an RPS2 locus responsible for the recognition of P. syringae strains expressing the avirulence gene avrRpt2 [28, 29], and an A. thaliana mutant defective in SA biosynthesis (sid2 [36]). The seeds sown on the soil were kept at 4 °C for 2 d and then transferred to the growth chamber (RMC Tableaux SA). The plants were grown in a 12 h photoperiod at 60% relative humidity, with a day temperature of 22 °C and a night temperature of 18 °C (photon flux density, 100 µmol/m2/s). The transplanted seedlings were covered with transparent plastic domes for 2-3 d to allow them to adapt to the new soil. Four- to five-week-old plants were used in the experiments, because previous experiments had shown that, under the abovementioned growth conditions, this is the optimal age of the plant to induce SAR [54].

Culture of P. syringae pv. tomato

P. syringae pv. tomato bacteria were prepared by inoculating a single colony into 10 ml King's B medium (1.5 g K2HPO4, 1.5 g MgSO4·7H2O, 20 g tryptone and 10 ml glycerol per litre of water; Sigma-Aldrich; purity ≥99%) containing the appropriate antibiotics. A virulent and an avirulent strain of P. syringae were grown: P. syringae DC3000 (virulent P. syringae) and P. syringae DC3000 expressing the avirulence gene avrRpt2, which is recognized by the A. thaliana RPS2 locus and induces SAR (avirulent P. syringae). The virulent P. syringae strain served to induce a strong P. syringae infection in the plants. The avirulent P. syringae strain served as a positive control to induce SAR, and thus actively suppressed bacterial growth, in the A. thaliana plants via recognition of the bacterial avrRpt2 gene by the plant's RPS2 gene (see ref. [29] for a detailed description of the pathosystem). The virulent P. syringae was grown with rifampicin (25 μg/ml), and the avirulent P. syringae with kanamycin (50 μg/ml) and rifampicin (25 μg/ml). After overnight incubation in a shaker at 28 °C in the dark (Kuhner LT-W Lab Therm Table Top Incubator Shaker, Adolf Kühner AG), the cells were centrifuged at 3,000 r.p.m. for 10 min, and the pellet was suspended in 10 mM MgCl2. The cell density was calculated by measuring the light absorption of the liquid culture with a spectrophotometer (BioPhotometer, Eppendorf) at a wavelength of 600 nm and by counting the colonies plated on King's B agar (raw data are publicly available [55]).

Inoculation procedures for local disease resistance

For the local disease resistance assay, three leaves per A. thaliana plant were inoculated with the virulent P. syringae bacteria, and the plants were incubated under the standard A. thaliana growth conditions described above. The inoculation with the virulent P. syringae bacteria was operationally defined as 0 d post inoculation (dpi). After inoculation, leaf discs (4 mm) were collected from the inoculated leaves at 0 and 3 dpi using a cork borer (three leaf discs from different plant leaves per sample). The leaf discs were ground and homogenized with pestles in 10 mM MgCl2, and the undiluted (0 dpi) or 1,000-fold diluted (3 dpi) homogenates were plated on King's B agar plates (King's B medium as above with 15 g/l agar). The plates were incubated at 28 °C in the dark for 48 h, and the bacterial colonies were then counted (raw data are publicly available [55]).

Inoculation procedures for SAR assays

For an SAR assay, three leaves of four- to five-week-old wild-type Col-0 plants were infiltrated with 10 mM MgCl2 (negative control) or with the avirulent P. syringae bacteria at 10^6 colony-forming units (CFU) per millilitre in 10 mM MgCl2 (positive control). After 48 h, the distal leaves were inoculated with the virulent P. syringae bacteria (10^5 CFU/ml). The inoculation with the virulent P. syringae bacteria was operationally defined as 0 dpi. Leaf discs (4 mm) were collected from the distal leaves at 0 and 3 dpi using a cork borer (three leaf discs from different plant leaves were analysed three times for each treatment). The leaf discs were ground in 10 mM MgCl2, and the undiluted (0 dpi) or 1,000-fold diluted (3 dpi) homogenates were plated on King's B agar and incubated at 28 °C for 48 h in the dark (SalvisLab incubator). The bacterial colonies were then counted (raw data are publicly available [55]). For details about this procedure, see ref. [18].

Plant treatments

The SiO2 NPs (25, 100, 400 and 1,600 mg SiO2/l at pH 7) and Si(OH)4 (5, 20, 80, 100, 320, 640 and 2,560 mg SiO2/l at pH 7, from an aqueous potassium silicate stock solution; K2O:SiO2 of 1:2.60; SiO2 content, 20.8 wt%; MonDroguiste) were prepared in sterile, distilled water in HEPES buffer (1 mM, pH 7, 99.5%; Sigma-Aldrich). The Si(OH)4 concentrations were expressed in mg SiO2/l to allow a direct comparison of the effects of dissolved Si(OH)4 and solid SiO2 NPs without having to take the different molecular weights into account. For the local disease resistance assay, the plants were sprayed with these chemicals 24 h before inoculation with virulent P. syringae. For the SAR assays, all these chemicals were injected abaxially (from the bottom of the leaf) into Arabidopsis plant leaves 2 d before inoculation using 1 ml needleless sterile disposable syringes.

SiO2 NPs and subcellular distribution within the leaf

The SiO2 NPs were synthesized and characterized according to a previously established procedure [31, 32] adapted from earlier work [56]. Briefly, one equivalent of tetraethyl orthosilicate (10 ml, >99%; Sigma-Aldrich) was added to an equilibrated reaction mixture at 70 °C containing two equivalents of ultrapure water (Milli-Q, 18.2 MΩ; arium 611 DI, Sartorius Stedim Biotech) and absolute ethanol (81 ml) as a solvent under basic conditions (2.93 ml of 25% NH3). The particles resulting after 3 h of hydrolysis and polycondensation of tetraethyl orthosilicate were washed by three steps of centrifugation (15,000 × g for 15 min, where g is the Earth's gravitational acceleration) in ultrapure water and five or more steps of dialysis through a membrane with a 14 kDa molecular weight cutoff (regenerated cellulose, Carl Roth).
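A quantity that is useful when reasoning about such suspensions, although not stated explicitly above, is the particle number concentration implied by a given mass concentration and the TEM primary size. The sketch below assumes monodisperse spheres and a typical amorphous-silica density of ~2 g/cm3; the density is an assumption, as it is not reported in the text.

```python
import math

def particles_per_ml(mass_conc_mg_per_l, diameter_nm,
                     density_g_per_cm3=2.0):
    """Approximate number concentration of monodisperse spheres.

    density_g_per_cm3 ~2.0 is an assumed value typical of
    Stober-type amorphous silica; it is not reported above.
    """
    r_cm = diameter_nm * 1e-7 / 2.0           # nm -> cm
    v_cm3 = 4.0 / 3.0 * math.pi * r_cm ** 3   # particle volume
    mass_g = v_cm3 * density_g_per_cm3        # mass per particle
    conc_g_per_ml = mass_conc_mg_per_l * 1e-3 / 1000.0  # mg/l -> g/ml
    return conc_g_per_ml / mass_g

# The lowest fully effective dose (100 mg SiO2/l) with the 54 nm
# TEM primary size corresponds to roughly 6e11 particles per ml.
print(f"{particles_per_ml(100, 54):.1e} particles/ml")
```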
Several batches of particles with hydrodynamic diameters in the range of 64.8-76.7 nm were prepared using an identical procedure to prevent artefacts due to suspension aging (size variability between batches, 5.2 nm). DLS was used to quantify the hydrodynamic particle size and surface charge of the diluted samples (1% v/v; NanoBrook Particle Size Analyzer 90Plus, Brookhaven; scattering angle, 90° at 1 min acquisition; raw data are publicly available [55]). Inductively coupled plasma-optical emission spectroscopy and gravimetry served to quantify the SiO2 concentration (methods described in ref. [31]). For the particle characterization, and to analyse the effects of the SiO2 NP, Si(OH)4 and control treatments in the leaves, we used TEM. The particle size distribution was established by ImageJ software (version 1.52n) analysis of the TEM micrographs (raw data are publicly available [55]). The plants were pre-fixed in 4% glutaraldehyde solution, gently stained in the dark with 1% OsO4 solution that had been centrifuged beforehand to remove potential precipitates, dehydrated using an ethanol series, and embedded in polymer resin (AGAR Low Viscosity Kit, Plano) without further staining, according to a procedure described in detail in ref. [57]. The correct positions of the stomata for cutting the cross-sections were identified by light microscopy examination of semi-thin resin sections before ultramicrotoming. The TEM images were taken on an FEI Tecnai Spirit instrument at an acceleration voltage of 120 kV (resolution, 2,048 × 2,048 pixels; Veleta CCD camera, Olympus). Besides cropping and adjustment of brightness and contrast, the micrographs were not further processed; unprocessed raw data are publicly available [55].

DNA extraction

The plant leaf samples (five leaf discs from different inoculated plant leaves per sample) were frozen in liquid nitrogen and homogenized using a ceramic mortar and pestle. The total DNA was extracted with a Plant DNA Mini Kit (peqlab, VWR). More information about the sample preparation is available in the Supplementary Information, 'Details on DNA extraction'.

RNA extraction and complementary DNA synthesis

The plant leaf samples (ten leaf discs taken from different infiltrated plant leaves per sample) were flash frozen in liquid nitrogen, and the total RNA was extracted with the Spectrum Plant Total RNA Kit (Sigma Life Science). One microgram of the total RNA was used for complementary DNA synthesis using the Omniscript Reverse Transcription Kit (Qiagen). More information about the sample preparation is available in the Supplementary Information, 'Details on RNA extraction and complementary DNA synthesis'.

qPCR

To validate the SAR response based on the bacterial colony counts, the bacteria were also quantified via the outer membrane protein oprF gene of P. syringae in the inoculated leaves (raw data are publicly available [55]), based on a previously established method [18, 33]. For this bacterial DNA quantification, a reaction mixture for qPCR was prepared with 7.5 μl of 2× SensiMix SYBR Hi-ROX Mastermix (no. QT605-05, Bioline, Meridian Bioscience), 5 μl plant DNA and 0.5 μl of each primer (Supplementary Table 1) at a concentration of 10 μM, in a final volume replenished with water to 15 μl, in magnetic induction cycler (Mic) tubes (Bio Molecular Systems). The runs were performed on a Mic qPCR machine (Bio Molecular Systems).
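The qPCR analysis reported below found amplification efficiencies close to two for the reference and target genes. As a hedged aside, efficiency is conventionally estimated from the slope of a standard curve (Cq versus log10 template amount) as E = 10^(-1/slope); the dilution series in this sketch is hypothetical and serves only to show the arithmetic.

```python
import numpy as np

def amplification_efficiency(log10_dilutions, cq_values):
    """Estimate qPCR amplification efficiency from a dilution series.

    Fits Cq against log10(template amount); a slope of -3.32
    corresponds to a perfect doubling per cycle (efficiency = 2.0).
    """
    slope, _intercept = np.polyfit(log10_dilutions, cq_values, 1)
    return 10.0 ** (-1.0 / slope)

# Hypothetical 10-fold dilution series (illustrative values only):
log10_amount = np.array([0.0, -1.0, -2.0, -3.0])
cq = np.array([18.1, 21.5, 24.8, 28.2])
print(f"E = {amplification_efficiency(log10_amount, cq):.3f}")
```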
The conditions for the qPCR were as follows: initial denaturation for 10 min at 95 °C, followed by 40 cycles (95 °C for 15 s, 62 °C for 1 min and 72 °C for 30 s). The final PCR products were analysed by melting point analysis. The qPCR analysis software for the melting curve analysis and the amplification efficiency calculation was micPCR v. 2.8.13 (Bio Molecular Systems). This software is designed to meet the minimum information for publication of quantitative real-time PCR experiments (MIQE) [58] specifications and automatically performs the qPCR analysis based on the real-time runs. Five leaf discs from different plant leaves were sampled for each replicate, frozen in liquid nitrogen and immediately processed for DNA extraction. The DNA levels of the bacterial oprF gene in Arabidopsis plants were calculated using At4g26410 (expG) as a reference gene [33] and the comparative cycle threshold method (2^(-ΔΔCt)) [59]. For the oxidative stress and SA-responsive plant transcript levels, leaf discs were flash frozen in liquid nitrogen and stored at -80 °C for <24 h before being processed for RNA extraction and complementary DNA synthesis. Three independent technical replicates (ten leaf discs taken from different plant leaves) were used per treatment. The reaction mixture for RT-qPCR contained 7.5 μl of 2× SensiMix SYBR Hi-ROX Mastermix (no. QT605-05, Bioline, Meridian Bioscience), 5 μl of complementary DNA (corresponding to 25 ng RNA) and 0.5 μl of each primer (Supplementary Table 1) at a concentration of 10 μM, in a final volume replenished with water to 15 μl, in Mic tubes (Bio Molecular Systems). Runs were performed on a Mic qPCR machine (Bio Molecular Systems). The conditions for the qPCR and the analysis of the final PCR products by melting point analysis were analogous to the bacterial DNA quantification above. The transcript levels of the oxidative stress marker (At3g46230; HSP17.4C1) [34] and of the SA-responsive genes AtPR-1 and AtPR-5 in Arabidopsis plants were calculated with At4g26410 (expG) as the reference gene [60] and the comparative cycle threshold method (2^(-ΔΔCt)) as mentioned above. The expG gene was selected because another study [60] specifically recommended expG as one of the top five reference genes for biotic stress studies owing to its high stability under such conditions. This high stability was confirmed in previous work from our laboratory [61] and elsewhere [33]. In the present study, the stable expression of expG is reflected in the very small variation of its quantitation cycle (Cq), the cycle at which fluorescence becomes detectable in qPCR. For example, in the PR-1 expression experiments (Fig. 6c), the average Cq for expG ranged from 23.19 to 23.93 across all the different testing conditions, with an average relative error of only 0.63% [55]. All the amplification efficiencies were very close to two, with good comparability between the reference gene and the target gene. For example, in Fig. 6c, the average amplification efficiencies of expG and AtPR-1 across all the different treatment conditions (1.949 ± 0.011 versus 1.962 ± 0.027, averages ± standard deviations) differed by only 0.7% [55]. All the statistical tests hereinafter were performed using the IBM SPSS Statistics software (version 22).

Ecotoxicity of SiO2 NPs and Si(OH)4 to C. elegans larvae

The ecotoxicity assays were conducted on larval stage one (L1) nematodes of the C. elegans wild-type (ancestral; N2) genotype. Synchronized C. elegans larvae were grown according to a previously established protocol [62] (raw data are publicly available [55]). A known number of larvae (~70) per replicate were then exposed to 0, 25, 125, 250, 500, 750, 1,000, 1,500 or 2,000 mg SiO2/l of SiO2 NPs or Si(OH)4 in 96-well plates (Corning Costar no. 3596). A 0.1% NaN3 solution served as the positive control. As a food source for the nematodes, the wells contained 10 µl of living Escherichia coli (strain OP50; final optical density at 600 nm, 1 a.u.; ~5 × 10^8 cells/ml). The total volume per well was 100 µl, and the final pH of the phosphate-buffered saline test solutions was 7.4. After incubating the nematodes at 20 °C for 48 h in the dark, the surviving larvae were counted under a stereo microscope at ×20 magnification. The resulting number of mobile nematode larvae was subtracted from the initially incubated number of larvae to calculate the percentage of immobile nematodes. The EC50 values were calculated using a numerically fitted standard log-logistic dose-response model (Levenberg-Marquardt iteration algorithm, Origin 2016, build 9.3.2.903, OriginLab; Supplementary Fig. 3). The experiment comprised 12 biological replicates for each treatment and was repeated twice with comparable results.

Data availability

The datasets that support the findings of the current study are available in the Zenodo repository with the identifier . Additional data related to this study are available from the corresponding authors upon reasonable request.

Citation: Mohamed El-Shetehy et al. Silica nanoparticles enhance disease resistance in Arabidopsis plants, Nature Nanotechnology (2020). DOI: 10.1038/s41565-020-00812-0, http://dx.doi.org/10.1038/s41565-020-00812-0
News article: https://phys.org/news/2020-12-pesticide-nanoparticles.html
Researchers at the Adolphe Merkle Institute and the Department of Biology at the University of Fribourg have discovered how certain silica nanoparticles could act as a traceless, degradable, and highly efficient treatment against some plant pathogens. One of the biggest challenges facing agriculture today is the extensive use of fertilizers and pesticides. With an increasing number of products banned or considered dangerous for human and animal health, the need for substitutes is acute. One approach is to stimulate plants' own immune response to pathogen attacks. Silicic acid, which naturally occurs in soil, is known to provoke such responses in plants, and amorphous silica nanoparticles can release this substance in small amounts. These nanoparticles, which are also naturally present in many food crops such as cereals, are more common than most people think. They are part of food grade silica (SiO2), otherwise known as E551 on labels and packaging, which has been used for decades in a variety of products such as table salt, pills, or protein powders to avoid clumping.

Increased resistance

With this in mind, the Fribourg-based researchers aimed to create an environmentally safe nano-agrochemical for the targeted delivery of silicic acid and the stimulation of plant defense. They synthesized silica nanoparticles with similar properties to those found in plants. To test their efficiency, they applied the nanoparticles to Arabidopsis thaliana (thale cress), a widely used plant model, infected with the bacterial pest Pseudomonas syringae, another model organism. The results showed that their nanoparticles can boost resistance against the bacteria in a dose-dependent manner by stimulating the plant's defense hormone, salicylic acid (which is also the active ingredient in aspirin). The researchers also investigated the interactions of the nanoparticles with plant leaves. They were able to show that nanoparticle uptake and action occurred exclusively through the leaf pores (stomata) that allow the plants to breathe. The nanoparticles did not distribute further in the plants, and the particles degrade without leaving a trace in the presence of water, an important consideration for environmental and food safety. Compared to free silicic acid, which is already used in crop protection, the silica nanoparticles caused less stress to the plants and to other soil microorganisms due to the slow release of the silicic acid. The study, published in the top-ranking journal Nature Nanotechnology, shows that silica nanoparticles could serve as an inexpensive, highly efficient, safe, and sustainable alternative for plant disease protection. According to the researchers, future work could extend the investigations to a broader spectrum of plant pests and pathogens, such as other bacteria, insects, or viruses. They emphasize, though, that before any broad application of nanoparticles as nano-biostimulants and -fertilizers, a thorough analysis is needed to assess the potential long-term fate of silica nanoparticles in the environment.
Human health may be at risk from long-term exposure to air pollution below current air quality standards and guidelines (DOI: 10.1136/bmj.n1904)

Long-term exposure to air pollution appears to still be linked to higher mortality despite the existence of air quality standards that restrict levels of pollution, suggests a study published online in The BMJ today. Researchers found evidence of higher death rates among people who had been exposed to more air pollution, even though the levels were permitted under current official standards. Previous studies have found an association between long-term exposure to outdoor air pollution, such as fine particles in the air (known as particulate matter or PM2.5) and nitrogen dioxide (NO2), and poor health or death. Air pollution concentrations have fallen substantially in Europe since the 1990s, but it is unclear whether there is still a link between pollution and ill health or death at concentrations below current permitted limits. An international team of researchers, led by the Institute for Risk Assessment Sciences at Utrecht University in the Netherlands, therefore set out to investigate whether there was an association between low-level air pollution concentrations and natural and cause-specific deaths. Low-level air pollution was defined as concentrations below the current limit values set by the European Union, the US Environmental Protection Agency and the World Health Organization (WHO) air quality guidelines. The researchers analysed data on eight groups of people within six European countries (Sweden, Denmark, France, the Netherlands, Germany and Austria), totalling 325,367 adults. Their study, known as the Effects of Low-Level Air Pollution: A Study in Europe (ELAPSE), recruited participants in the 1990s or 2000s. Of the 325,367 participants who were followed up over an almost 20-year period, around 14.5% (47,131 people) died during the study period. Analysis of the results showed that people who had higher exposure to particulate matter (PM2.5), nitrogen dioxide and black carbon were more likely to die. An increase of 5 µg/m3 in PM2.5 (a concentration measure of particulate matter) was associated with a 13% increase in natural deaths, while the corresponding figure for a 10 µg/m3 increase in nitrogen dioxide was 8.6%. The associations with PM2.5 and nitrogen dioxide were largely independent of each other. Moreover, associations with PM2.5, nitrogen dioxide and black carbon remained significant at low to very low concentrations. For people exposed to pollution levels below the US standard of 12 µg/m3, an increase of 5 µg/m3 in PM2.5 was associated with a 29.6% increase in natural deaths. For people exposed to nitrogen dioxide at less than half the current EU standard of 40 µg/m3, a 10 µg/m3 increase in nitrogen dioxide was associated with a 9.9% increase in natural deaths. This is an observational study and, as such, cannot establish cause. The study also has some limitations, say the authors, such as its focus on exposure in 2010, which was towards the end of the follow-up period for most participants; given the downward trend in air pollution, this measure might not exactly reflect the concentrations experienced during follow-up. However, this was a large study of multiple European cohorts with detailed participant information.
As such, the authors conclude: "Our study contributes to the evidence that outdoor air pollution is associated with mortality even at levels below the current European and North American standards and WHO guideline values. These findings are therefore an important contribution to the debate about revision of air quality limits, guidelines and standards, and future assessments by the Global Burden of Disease [study]."

A recent study published in The BMJ suggests that long-term exposure to air pollution is still linked to higher mortality rates, even at levels below current official standards. The study, which analyzed data from over 325,000 adults in six European countries, found that people who were exposed to higher levels of particulate matter (PM2.5), nitrogen dioxide, and black carbon were more likely to die. The researchers found that even at low concentrations, an increase of 5 µg/m3 in PM2.5 was associated with a 13% increase in natural deaths, while a 10 µg/m3 increase in nitrogen dioxide was associated with an 8.6% increase. The study's findings suggest that outdoor air pollution is associated with mortality even at levels below current European and North American standards and WHO guideline values, and may contribute to the debate about revising air quality limits, guidelines, and standards.

Abstract

Objective: To investigate the associations between air pollution and mortality, focusing on associations below current European Union, United States, and World Health Organization standards and guidelines.

Design: Pooled analysis of eight cohorts.

Setting: Multicentre project Effects of Low-Level Air Pollution: A Study in Europe (ELAPSE) in six European countries.

Participants: 325,367 adults from the general population, recruited mostly in the 1990s or 2000s, with detailed lifestyle data. Stratified Cox proportional hazard models were used to analyse the associations between air pollution and mortality. Western Europe-wide land use regression models were used to characterise residential air pollution concentrations of ambient fine particulate matter (PM2.5), nitrogen dioxide, ozone, and black carbon.

Main outcome measures: Deaths due to natural causes and cause-specific mortality.

Results: Of 325,367 adults followed up for an average of 19.5 years, 47,131 deaths were observed. Higher exposure to PM2.5, nitrogen dioxide, and black carbon was associated with significantly increased risk of almost all outcomes. An increase of 5 µg/m3 in PM2.5 was associated with a 13% (95% confidence interval 10.6% to 15.5%) increase in natural deaths; the corresponding figure for a 10 µg/m3 increase in nitrogen dioxide was 8.6% (7% to 10.2%). Associations with PM2.5 and nitrogen dioxide were largely independent of each other. Associations with PM2.5, nitrogen dioxide, and black carbon remained significant at low to very low concentrations. For participants with exposures below the US standard of 12 µg/m3, an increase of 5 µg/m3 in PM2.5 was associated with a 29.6% (14% to 47.4%) increase in natural deaths.

Conclusions: Our study contributes to the evidence that outdoor air pollution is associated with mortality even at low pollution levels below the current European and North American standards and WHO guideline values. These findings are therefore an important contribution to the debate about revision of air quality limits, guidelines, and standards, and future assessments by the Global Burden of Disease.

Introduction

Epidemiological cohort studies have consistently found associations between long-term exposure to outdoor air pollution and a range of morbidity and mortality endpoints.
Concentrations of health-relevant regulated pollutants, including fine particles and nitrogen dioxide, have decreased over the past decades in developed countries. Recent evaluations by the World Health Organization and the Global Burden of Disease study have suggested that health effects might persist at these lower concentrations [1-3]. However, there is uncertainty about the shape of the concentration-response function at the low end of the air pollution concentration distribution, related to the scarcity of observations at the lowest concentrations. Associations with mortality at low pollution levels in large populations were primarily investigated in a few North American studies, specifically the Canadian census cohort, the Canadian Community Health survey, the US Medicare cohort, and the US National Health Interview Survey study [4-10]. All the studies found associations below the current annual average US standard of 12 µg/m3 and the WHO guideline value of 10 µg/m3 for fine particles with an aerodynamic diameter of <2.5 µm (PM2.5), but only two studies were able to adjust for detailed individual lifestyle factors [7, 9]. Most of the studies suggested a steeper concentration-response function at the lowest levels, but the National Health Interview Survey study [9] suggested little association below about 5 µg/m3. Most studies focused primarily on PM2.5, whereas increasing evidence shows that pollutants related to local combustion sources, including nitrogen dioxide and black carbon, might be relevant to health. Few studies have assessed the mortality effects of long-term exposure to ozone. Within the project Effects of Low-Level Air Pollution: A Study in Europe (ELAPSE), we assessed associations of low-level air pollution concentrations with natural and cause-specific mortality. Low-level air pollution was defined as concentrations below current European Union limit values, US Environmental Protection Agency national ambient air quality standards, or the 2005 WHO air quality guidelines. We investigated PM2.5, nitrogen dioxide, ozone, and black carbon at a fine spatial resolution. To have sufficient statistical power to detect associations at low exposure levels, we pooled data from eight European cohorts with information on important individual risk factors, including smoking and body mass index.

Methods

Study population

The eight cohorts were selected from six European countries (see supplementary figure): Sweden (Stockholm county), Denmark (Copenhagen and Aarhus, and nationwide), France (nationwide), the Netherlands (four cities), Germany (Ruhr and Augsburg areas), and Austria (Vorarlberg region). All the cohorts, except the Danish cohort [11], were previously part of the European Study of Cohorts for Air Pollution Effects (ESCAPE) [12]. Not all ESCAPE cohorts were included, either because of relatively high annual air pollution concentrations or because the data could not be pooled. Several of the included cohorts (that is, from Sweden, Denmark, the Netherlands, and Augsburg, Germany) combined multiple original cohorts, termed subcohorts. All cohorts and subcohorts included general population samples and specific subgroups, such as Danish nurses (DNC cohort). Most cohorts were from large cities and surrounding regions. Supplementary appendix section 1 describes the cohorts in more detail. Recruitment for most of the cohorts took place in the 1990s or 2000s (supplementary table S1).
To pool data, we used a common codebook to harmonise individual and area level covariates and outcome variables between cohorts. Information on covariates was only available at baseline.

Assessment of exposure to air pollution
We assessed air pollution concentrations at the baseline residential address of the study participants using land use regression models, described in detail elsewhere.13 Briefly, we estimated 2010 annual mean PM2.5, nitrogen dioxide, black carbon, and (warm season) ozone concentrations using the European Environmental Agency AirBase routine monitoring data (PM2.5, nitrogen dioxide, and ozone) and ESCAPE monitoring data (black carbon). Predictors were satellite derived and chemical transport model air pollutant estimates at 10×10 km, and fine scale land use and road traffic data. Western Europe-wide models were developed on a 100×100 m grid and were assigned to the participants using their geocoded residential address. The PM2.5, nitrogen dioxide, black carbon, and ozone models generally explained a large fraction of measured spatial variation of the annual average concentration: 72%, 59%, 54%, and 69%, respectively. In the ELAPSE paper on exposures13 we reported performance of the models by comparing with the external ESCAPE measurements by cohort (typically 20 sites for PM2.5 and 40 sites for nitrogen dioxide). The root mean square error of this comparison was between 1.0 µg/m3 and 1.7 µg/m3 for PM2.5, except for the French cohort, where the value was 3.3 µg/m3. For nitrogen dioxide, the root mean square error was between 5 µg/m3 and 7 µg/m3, except for the nationwide French cohort (12 µg/m3). Differences were therefore modest, and some of the variability was probably related to the small number of sites in each ESCAPE area. To enable time varying exposure analysis, we extrapolated concentrations to every year of follow-up using the estimated annual concentrations from the Danish Eulerian hemispheric model, which models monthly average concentrations across Europe at 26×26 km spatial resolution back to 1990.14

Mortality data
Mortality was defined based on the underlying cause of death recorded on death certificates in mortality registries as ICD-9 and ICD-10 (international classification of diseases, ninth and 10th revisions, respectively) codes. We analysed mortality from natural causes (ICD-9: 001-779; ICD-10: A00-R99) and cause specific mortality for cardiovascular disease (ICD-9: 400-440; ICD-10: I10-I70), ischaemic heart disease (ICD-9: 410-414; ICD-10: I20-I25), cerebrovascular disease (ICD-9: 430-438; ICD-10: I60-I69), respiratory disease (ICD-9: 460-519; ICD-10: J00-J99), chronic obstructive pulmonary disease (ICD-9: 490-492, 494, 496; ICD-10: J40-J44, J47), diabetes (ICD-9: 249-250; ICD-10: E10-E14), and cardiometabolic diseases (cardiovascular disease or diabetes). The end of follow-up for mortality was until 2011-15, depending on the cohort (supplementary table S1).

Statistical analysis
We analysed the associations between air pollution and mortality using Cox proportional hazards models stratified by sex and cohort or subcohort, with age as the underlying timescale. Censoring occurred at the time of the event of interest, death from other causes, emigration, loss to follow-up for other reasons, or end of follow-up, whichever came first. Strata for cohorts or subcohorts were applied because of concerns about differences not fully accounted for by the available covariates and departures from the proportional hazard assumption.15
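As an illustration of this model specification, a minimal sketch in R (the paper reports that analyses were performed in R; the data frame and variable names below are hypothetical, not taken from the ELAPSE scripts):

```r
# Minimal sketch of the stratified Cox model described above, assuming a
# pooled data frame 'elapse' with entry/exit ages and a natural-death
# indicator (all names hypothetical). Age is the underlying timescale via
# the counting-process form of Surv(); strata() gives each subcohort-sex
# combination its own baseline hazard.
library(survival)

fit <- coxph(
  Surv(age_entry, age_exit, death_natural) ~ pm25 + year_enrolment +
    strata(subcohort, sex),
  data = elapse
)

# Rescale the per-unit coefficient to the reported 5 ug/m3 increment
exp(5 * coef(fit)["pm25"])
```

Fitting a separate baseline hazard per stratum, rather than entering cohort as a covariate, is one way to accommodate the between-cohort differences and proportional-hazards departures noted above.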
We specified three confounder models a priori, with an increasing level of adjustment for individual and area level variables. Model 1 included age (as the timescale), sex (strata), and year of enrolment. Model 2 further included smoking status, duration and intensity of smoking (linear and squared for intensity), body mass index, marital status, and employment status. Model 3 further expanded model 2 with neighbourhood or municipal level mean income in 2001. We determined models 2 and 3 based on the ESCAPE confounder models12 and detailed sensitivity analyses, in which we balanced the need to adjust for a comprehensive set of covariates and the availability of these covariates for most participants. Model 3 was considered the main model. In addition to the main linear model, we assessed the shape of the concentration-response association between air pollution and mortality using both natural cubic splines with three degrees of freedom and the shape constrained health impact function (SCHIF). The SCHIF method assesses several different shapes of the association as variations of sigmoidal functions to produce biologically plausible concentration-response functions, resulting in an "optimal" and "ensemble" of all fitted shapes.16 The SCHIF shapes are smoother and less affected by sparse data than natural cubic splines. We also performed analyses with exposure grouped into quarters for natural, cardiovascular, and respiratory mortality. Furthermore, we performed several sensitivity analyses. The associations with linear models were analysed in subsets of concentrations by excluding observations above specific values. We evaluated cut-offs including current EU limit values (25 μg/m3 PM2.5, 40 μg/m3 nitrogen dioxide), US Environmental Protection Agency national ambient air quality standards (12 μg/m3 PM2.5), and WHO air quality guidelines (10 μg/m3 PM2.5, 40 μg/m3 nitrogen dioxide). To disentangle the effect of individual pollutants, we specified linear models of two pollutants for all combinations of the four pollutants. We did not specify three pollutant models because of the high correlations between pollutants within cohorts. As some potential confounders were not available in all cohorts, we tested the sensitivity of our findings by adjusting for additional variables such as education and performing a leave one out cohort analysis. We assessed effect modification by covariates available in all cohorts and subcohorts. Because we stratified for sex in our main model, we changed the formulation to include sex as a covariate in the model. We then added pollution as an interaction variable, as with the other effect modifiers (restoring strata for sex as in the main model). We assessed the sensitivity of our findings to using the exposure in 2010 by analysing the back extrapolated concentrations at each cohort's baseline year and by time varying exposures from enrolment to end of follow-up. Residential history was incorporated in the time varying exposure analyses. In the time varying analysis, one and five year period strata were used in the Cox models to account for time trends in mortality and air pollution. Because air pollution and noise might be correlated, we conducted additional adjustment for road traffic noise. Details on assessment of noise exposure are provided elsewhere.17
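For concreteness, the spline and subset analyses described above might look as follows in the same hypothetical R setting (model 3 style covariates, with the squared smoking-intensity term written using I(); again, all names are assumptions, not the authors' code):

```r
library(survival)
library(splines)

# Concentration-response shape: natural cubic spline with three degrees of
# freedom in place of the linear PM2.5 term
fit_ns <- coxph(
  Surv(age_entry, age_exit, death_natural) ~ ns(pm25, df = 3) +
    year_enrolment + smoking_status + smoking_duration + smoking_intensity +
    I(smoking_intensity^2) + bmi + marital_status + employment_status +
    area_mean_income + strata(subcohort, sex),
  data = elapse
)

# Subset analysis: refit the linear model after excluding observations
# above a cut-off, eg the US EPA standard of 12 ug/m3 for PM2.5
fit_below_12 <- coxph(
  Surv(age_entry, age_exit, death_natural) ~ pm25 + year_enrolment +
    smoking_status + smoking_duration + smoking_intensity +
    I(smoking_intensity^2) + bmi + marital_status + employment_status +
    area_mean_income + strata(subcohort, sex),
  data = subset(elapse, pm25 < 12)
)
```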
We used multiple imputation by chained equations18 to fill in missing values for confounders, provided that a cohort had information for a variable for part of the cohort (supplementary appendix, section 2). Analyses were performed in R (version 3.4.0).19 Supplementary appendix, section 3, lists the packages used in the analyses.

Patient and public involvement
As we used existing cohorts recruited more than a decade ago, we could not involve patients in the design of the study and the paper. We will prepare press releases and share the findings through publications, talks, and social media, addressing larger audiences that include members of the public, patients, health professionals, and stakeholders.

Results
Population and exposure characteristics
Cohorts differed in all characteristics, supporting the analysis using strata for cohorts and subcohorts (table 1 and supplementary tables S1 and S2). Observations were pooled from 381 036 participants. Owing to missing covariate data, 325 367 participants were included in the main analysis. The Austrian VHM&PP cohort contributed 45% of participants. Nearly all participants were exposed to PM2.5 levels below the EU limit value (25 µg/m3), more than 50 000 were exposed to levels below the US Environmental Protection Agency national ambient air quality standards (12 µg/m3), and more than 25 000 were exposed to levels below the WHO air quality guidelines (10 µg/m3). More than 310 000 participants were exposed to nitrogen dioxide levels below the EU limit values and WHO air quality guidelines (40 µg/m3).

Table 1: Characteristics of study populations from eight European cohorts. Values are numbers (percentages) unless stated otherwise.

Large north to south upward gradients in exposure to air pollution were observed between cohorts (figure 1 and table S3). Variations in black carbon and nitrogen dioxide within cohorts were especially substantial. Contrast for ozone was low within cohorts.

Fig 1: Annual average exposure at participant (n=325 367) addresses. In boxes, the boundary closest to zero indicates 25th centile and furthest from zero indicates 75th centile. Lines in boxes represent the median and whiskers are 5th and 95th centiles. Dashed lines for fine particulate matter (PM2.5) indicate World Health Organization air quality guidelines (10 µg/m3), US Environmental Protection Agency national ambient air quality standards (12 µg/m3), and EU limit value (25 µg/m3). Dashed lines for nitrogen dioxide indicate WHO air quality guidelines (40 µg/m3) and WHO health risks of air pollution in Europe (HRAPIE) health impact quantification threshold (20 µg/m3). Cohorts were ordered from north (top) to south (bottom). CEANS cohorts are from Stockholm county, Sweden, DCH from Copenhagen and Aarhus, Denmark, DNC from Denmark nationwide, EPIC-NL from four cities in the Netherlands, HNR from the Ruhr area, Germany, E3N from France nationwide, KORA from the Augsburg area, Germany, and VHM&PP from the Vorarlberg region, Austria.

PM2.5 was moderately to highly correlated with black carbon and nitrogen dioxide within most cohorts (supplementary table S4). Black carbon and nitrogen dioxide were highly correlated in most cohorts. Ozone was negatively correlated with PM2.5 and especially nitrogen dioxide and black carbon in all cohorts, with a particularly high negative correlation in the large Austrian cohort.
The within cohort correlation is important since strata were used for cohorts and subcohorts in the epidemiological analysis.

Associations with mortality
Main analysis
Associations between PM2.5, nitrogen dioxide, and black carbon and almost all outcomes were significantly positive in linear analysis (table 2). Effect estimates for PM2.5 were similar for deaths from natural causes and cardiovascular disease and lower for deaths from respiratory disease, but similar for nitrogen dioxide and black carbon. The highest hazard ratios were found for deaths due to diabetes, with wider confidence intervals owing to a small number of deaths. Associations were significantly negative for ozone and all outcomes, related to the negative correlation between ozone and the other pollutants (participants with high exposure to ozone had low exposures to PM2.5, black carbon, and nitrogen dioxide).

Table 2: Risk of death associated with exposure to air pollution in 325 367 participants from eight European cohorts. Values are hazard ratios (95% confidence intervals) unless stated otherwise.

Figure 2 and supplementary figure S1 show the concentration-response functions for PM2.5, nitrogen dioxide, and deaths from natural causes using natural splines. Associations for natural deaths were observed over the full range of exposures. Associations tended to be steeper at low concentrations, levelling off at high concentrations. At the extremes of the distribution, patterns occurred that were difficult to interpret, related to large uncertainty about the shape of the curve as indicated by wide confidence intervals. For the association between PM2.5 and deaths due to respiratory disease and chronic obstructive pulmonary disease, the pattern was difficult to interpret as a decreasing trend occurred associated with relatively frequently occurring exposures. Concentration-response functions for cause specific mortality were in general similar to those for natural deaths, indicating mostly supralinear curves, with associations remaining at low levels (figure 2 and supplementary figures S2-S6). The analyses of exposure grouped into quarters confirmed the linear to supralinear curves in the splines, except for PM2.5 and deaths due to respiratory disease (supplementary table S5).

Fig 2: Natural cubic splines (three degrees of freedom) for associations between exposure to air pollution and deaths due to natural causes, cardiovascular disease, and respiratory disease. Purple shaded areas represent 95% confidence intervals. Histogram of exposure added to illustrate sparse data regions. Dashed lines for fine particulate matter (PM2.5) indicate World Health Organization air quality guidelines (10 µg/m3), the primary US Environmental Protection Agency standard (12 µg/m3), and the secondary US EPA standard (15 µg/m3), all as annual averages. For nitrogen dioxide the red lines indicate the WHO suggested limit for burden of disease quantification (20 µg/m3) and EU limit value and WHO air quality guidelines (40 µg/m3).
Patterns at the extremes are difficult to interpret owing to wide confidence intervals.

The SCHIF shapes were generally in agreement with the shapes of the natural splines, indicating that evidence still exists for an association at low levels and that the positive associations between pollution and natural and cause specific mortality are generally steeper at the low end of the distribution for PM2.5, nitrogen dioxide, and black carbon (supplementary figures S7-S12). The SCHIF shapes for nitrogen dioxide and deaths due to respiratory disease and chronic obstructive pulmonary disease suggest a flatter slope at low than at high concentrations.

Sensitivity analyses
Table 3 shows the hazard ratios for natural deaths observed in subsets of successively lower air pollution concentrations. Associations remained positive and statistically significant for PM2.5 even when all observations higher than 12 µg/m3 were removed from the analysis. The hazard ratios for participants with exposures below 10 µg/m3 were similar to those for all observations but with wider confidence intervals. For nitrogen dioxide, associations remained significantly positive below 20 µg/m3, well below current standards. Similar patterns were found for cause specific mortality (table 3), with wider confidence intervals associated with a smaller number of deaths.

Table 3: Subset analysis of risk of death associated with exposure to air pollution.

Associations for PM2.5 and nitrogen dioxide were attenuated but remained significant after adjustment for each other and for ozone as well as for black carbon in models of two pollutants for deaths due to natural causes and cardiovascular disease (supplementary tables S6 and S7). For deaths due to respiratory disease, only associations for nitrogen dioxide were robust to adjustment for other pollutants (supplementary table S8). The negative association with ozone attenuated towards unity but remained statistically significant. Air pollution concentrations have decreased substantially in Europe since the 1990s (supplementary figure S13). Exposures to PM2.5 especially were substantially higher at baseline than in 2010 (supplementary figure S14); exposures to nitrogen dioxide and black carbon were moderately higher at baseline (supplementary appendix, section 7). When exposure to baseline air pollution was used instead of the 2010 exposure, hazard ratios especially for PM2.5 were found to be smaller than in the main analysis, although still statistically significant (supplementary table S9). Hazard ratios in the time varying exposure analyses were similar to those of the analysis using the 2010 exposure (supplementary table S10). As time trends were different across Europe, different trends were specified for each cohort. Natural spline analysis conducted in time varying exposure analyses supported the findings that mortality associations remained at low levels and were not associated with the use of the 2010 exposure as an exposure estimate (supplementary figure S15). Further adjustment for education, diet, and occupational status did not affect effect estimates obtained with the main model (supplementary table S11). Effect estimates were not (PM2.5) or were only mildly (nitrogen dioxide, black carbon) affected by exclusion of specific cohorts, such as the large Austrian VHM&PP cohort (supplementary figure S16).
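A note on reading these subset estimates: under the linear Cox model, a hazard ratio reported per one increment converts to any other increment through the underlying coefficient, which helps when comparing the per-5 µg/m3 PM2.5 and per-10 µg/m3 nitrogen dioxide figures used throughout. A worked example using the reported 13% increase in natural deaths per 5 µg/m3 PM2.5 (this is pure rescaling of the published figure, not an additional result):

```latex
\mathrm{HR}_{\Delta} = e^{\beta\Delta}, \qquad
\beta = \tfrac{1}{5}\ln(1.13) \approx 0.0245 \ \text{per}\ \mu\text{g/m}^3, \qquad
\mathrm{HR}_{10} = e^{10\beta} = 1.13^{2} \approx 1.28 .
```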
The hazard ratio for ozone was substantially closer to unity when excluding the Austrian cohort. Effect estimates for deaths due to natural causes and cardiovascular disease were only mildly attenuated by additional adjustment for road traffic noise (supplementary table S12). The negative ozone associations were attenuated to unity in models of two pollutants (adjusting the ozone association for one of the other pollutants, one at a time) without the Austrian cohort and with additional adjustment for road traffic noise (supplementary tables S13 and S14). Hazard ratios were unchanged after using multiple imputation to estimate missing covariate data in the full study population (supplementary table S15). Indications of effect modification were found for sex (higher hazard ratio in men; supplementary figure S17), smoking status for PM2.5 (higher hazard ratio in current smokers, but also significant associations in never smokers), and age for nitrogen dioxide (higher hazard ratio in those aged <65 years). Effect estimates remained significant in all strata of participants except those with a very low body mass index (<18.5).

Discussion
By performing targeted analyses within a large European pooled cohort with detailed data on individual lifestyle covariates, we found significant positive associations between residential exposure to PM2.5, nitrogen dioxide, and black carbon and deaths due to natural causes, cardiovascular disease, and respiratory disease. For these pollutants, we generally observed associations that were stronger at low exposure levels. Subset analyses documented that these associations remained even at levels for PM2.5 and nitrogen dioxide below current EU limit values, US Environmental Protection Agency national ambient air quality standards, and WHO air quality guidelines.

Comparison with other studies
The estimated hazard ratio for mortality associated with PM2.5 in our study is larger than the estimate from the ESCAPE study,12 estimates from recent North American administrative cohorts5 6 8 20 and a recent Danish study,21 and estimates from meta-analyses22 23 24 (supplementary table S16), but almost identical to the results of the Canadian community health survey study.7 The recent WHO systematic review documented heterogeneity in PM2.5 effect estimates between studies, attributed to study location, level, and composition of particulate matter and to methodological differences.24 In our cohort, similarly to the Canadian community health survey, individual lifestyle data were available, which are missing in large administrative cohorts. The sensitivity analysis using PM2.5 estimates at baseline year of exposure showed clearly smaller effect estimates. These estimates are more in line with effect estimates reported in a recent systematic review, suggesting that the effect estimate using the 2010 concentrations as exposure variable might be overestimating the true effect estimate. Effect estimates from time varying exposure analyses were similar to those of our main analyses. Our effect estimates for nitrogen dioxide were also higher than those in recent meta-analyses (supplementary table S12). Our study contributes to the evidence that outdoor air pollution is associated with mortality even at levels below the current European and North American standards and WHO guideline values.
When we applied two methods allowing non-linear concentration-response functions and linear analyses in subsets of exposure, we found no indication of a level below which no association was found. The finding of associations at low levels is consistent with that of several other recent cohort studies.4 5 6 7 8 25 26 27 28 29 The steeper slope of the PM2.5 mortality association at low levels is consistent with previous North American studies.4 5 6 7 8 In some other cohort studies, the shape of the PM2.5 function was sublinear30 31 or linear.25 26 27 In a comprehensive meta-analysis that combined evidence from a large number of cohorts, a supralinear association was observed for PM2.5.32 Considerably less evidence is available about associations between low level nitrogen dioxide and mortality. In two large administrative cohorts in Canada and the Netherlands, associations with nitrogen dioxide were also found well below the WHO guideline value.5 33 Our study found associations at levels twofold lower than the current WHO guideline values for long term exposure to nitrogen dioxide. Models of two pollutants showed that both PM2.5 and nitrogen dioxide were associated with mortality. Whereas PM2.5 primarily reflects pollution transported over large distances, nitrogen dioxide reflects local fossil fuel combustion sources, especially motorised traffic. Our results for ozone did not confirm previously reported positive associations with mortality.34 This might be related to the very small range of ozone exposure within our study, rendering our study less informative for assessing health effects of ambient ozone. The negative associations we found in single pollutant models might reflect the high negative correlation with especially nitrogen dioxide and black carbon. Ozone and nitrogen dioxide are negatively correlated because when ozone is close to combustion sources (eg, major roads), it reacts with nitric oxide emitted from the combustion source to form oxygen and nitrogen dioxide. Ozone therefore tends to be low near roadways, whereas black carbon emitted by traffic is high. Nitrogen dioxide is in part directly emitted from traffic and in part formed by the atmospheric reactions, so it is also high near roadways. In models of two pollutants and models adjusting for noise, the negative associations with ozone were attenuated to unity, especially when excluding the large Austrian cohort and adjusting for nitrogen dioxide and noise together. The very high negative correlation between ozone and nitrogen dioxide in the Austrian cohort renders models of two pollutants difficult to interpret.

Strengths and limitations of this study
An important strength of ELAPSE is the pooling of data from multiple European cohorts with detailed information on individual covariates (eg, smoking, body mass index), which allowed for more statistical power and analysis of the shapes of concentration-response functions. A part of the pooling process was an extensive, highly standardised procedure of harmonisation of individual and small area level variables between all cohorts. Another strength of the study is that we used state-of-the-art models to enable a uniform assessment of exposure to air pollution at a fine, 100×100 m scale for all four pollutants. Compared with the ESCAPE study,12 we had longer follow-up time. A limitation of our study is the use of the 2010 exposure in our main analyses.
The rationale for using the 2010 exposure was that in earlier years we did not have enough monitoring stations in Europe to develop the fine spatial scale models for PM2.5. The 2010 exposure represents exposure towards the end of follow-up for most cohorts. Given the downward trends in air pollution, a concern is that 2010 exposure might not correctly reflect the long term exposure leading to increased mortality. Previous studies, however, have documented that spatial contrasts in nitrogen dioxide and black carbon remain constant for at least a decade,35 36 37 38 supporting the use of 2010 exposures in the analysis. Our sensitivity analyses with time varying exposure resulted in similar findings to the main model. For PM2.5, mainly northern European cohorts contributed to the effect estimates in the lowest exposure range, and we therefore could not distinguish between the characteristics of the particulate matter mixture or the population characteristics affecting the steeper slope at low levels. More overlap was found in exposure to nitrogen dioxide and black carbon between cohorts, and we observed steeper slopes at low levels as well. The difficulty in interpreting the non-linear function observed for deaths due to respiratory disease could be related to competing risks of death that we did not account for in our study. Bias from exposure misclassification and residual confounding cannot be excluded. As exposure status was determined fully independently from the outcome, misclassification is likely non-differential and thus biased towards the null. We adjusted for several commonly used potential confounders, and adjustment for socioeconomic status might reduce confounding by other risk factors. The coding for the causes of death in the current study was brought in line with the previous ESCAPE analyses. These differ only slightly from the ones used by the Global Burden of Disease, so we do not expect major differences.

Conclusions
Our study contributes to the evidence that outdoor air pollution is associated with mortality even at levels below the current European and North American standards and WHO guideline values. These findings are therefore an important contribution to the debate about revision of air quality limits, guidelines and standards, and future assessments by the Global Burden of Disease.

What is already known on this topic
- In the framework of the update of the World Health Organization air quality guidelines, systematic reviews of studies of the effect of long term exposure to major outdoor air pollutants (fine particles, nitrogen dioxide, and ozone) have been done
- Findings showed that long term exposure to ambient air pollution was significantly associated with natural and cause specific mortality, but associations at concentrations below current limit values were not well understood

What this study adds
- Long term exposure to outdoor air pollution was positively associated with mortality even at levels well below the EU limit values, US Environmental Protection Agency national ambient air quality standards, and WHO air quality guidelines for fine particles and nitrogen dioxide
- This new evidence supports reconsideration of existing guideline values and standards
- The finding of associations at low levels of air pollution and mortality also supports policies to reduce air pollution below current legal limit values

Ethics statements
Ethical approval
All included cohort studies were approved by the medical ethics committees in their respective countries.
Data availability statement
No additional data available.

Acknowledgments
We thank Marjan Tewis for compiling the pooled cohort and Richard Burnett for supplying the code for the shape constrained health impact function and commenting on its application.

Footnotes
Contributors: MS performed the statistical analysis and wrote the original draft of the manuscript. GH wrote and reviewed the manuscript and edited the original version. SR, KK, and ES created the statistical analyses strategy and scripts for the statistical analyses. JC performed the statistical analysis and exposure assessments. KdH performed the exposure assessments. MS, GW, SR, KK, BB, GH, and ES conceived and designed the study. BB and GH are principal investigators of the Effects of Low-Level Air Pollution: A Study in Europe (ELAPSE) project. All authors have read and revised the manuscript for important intellectual content and contributed to the interpretation of the results. All authors have approved the final draft of the manuscript. GH is the study guarantor. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted. GH and ES contributed equally to the manuscript.

Funding: This work was supported by Health Effects Institute (HEI) research agreement (grant No 4954-RFA14-3/16-5-3). Research described in this article was conducted under contract to the HEI, an organisation jointly funded by the US Environmental Protection Agency (EPA) (assistance award No R-82811201) and certain motor vehicle and engine manufacturers. The contents of this article do not necessarily reflect the views of HEI, or its sponsors, nor do they necessarily reflect the views and policies of the EPA or motor vehicle and engine manufacturers.

Competing interests: All authors have completed the ICMJE uniform disclosure form and declare: support from the Health Effects Institute for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work. The corresponding author affirms that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained.

Dissemination to participants and related patient and public communities: We plan to prepare press releases and share the findings through publications, talks, and social media. We will address larger audiences that include members of the public, patients, health professionals, and stakeholders.

Provenance and peer review: Not commissioned; externally peer reviewed.

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial.

Long term exposure to low level air pollution and mortality in eight European cohorts within the ELAPSE project: pooled analysis, BMJ (2021).
DOI: 10.1136/bmj.n1904
Journal information: British Medical Journal (BMJ)
http://dx.doi.org/10.1136/bmj.n1904
https://medicalxpress.com/news/2021-09-human-health-long-term-exposure-air.html
10.1038/s41467-023-36896-0

Study reveals mechanism by which a circadian clock molecule leads to lung fibrosis

Abnormal sleep patterns, like those of night-shift workers, disrupt the body's natural biological clock and have been linked to lung health issues. A new study by University of Rochester Medical Center (URMC) researchers shows how a biological clock molecule, called REV-ERBα, contributes to lung scarring, uncovering new potential drugs and drug targets along the way. Pulmonary fibrosis, or lung scarring, is a serious condition in which connective tissue builds up in the lungs, making them thick and rigid, and causing difficulty breathing. While medications can ease the symptoms of pulmonary fibrosis, none can repair the lung damage caused by this sometimes-fatal disease. The URMC study, published in Nature Communications, confirms a previously-discovered link between the body's biological clock (or circadian rhythm) and lung diseases and uncovers a new mechanism underlying this link. Study authors show that a lack of the circadian rhythm protein, REV-ERBα, contributes to lung scarring in mice by increasing production of collagen, a major component of connective tissue, and lysyl oxidase, which stabilizes connective tissue and makes it more rigid. The team, which was led by Irfan Rahman, Ph.D., Dean's Professor of Environmental Medicine at URMC, found low levels of REV-ERBα and large amounts of collagen and lysyl oxidase in lung samples from patients with pulmonary fibrosis. Inducing lung injury in mice had a similar outcome: reduced REV-ERBα levels and increased levels of collagen, lysyl oxidase, and other markers of fibrosis. As a circadian rhythm protein, REV-ERBα expression normally fluctuates throughout the day, peaking at noon and dipping to its lowest levels at midnight. When the team induced lung injury at night, mice had larger increases in lysyl oxidase and collagen proteins, more extensive lung damage, and lower survival rates compared to mice injured in the morning. Rahman said this could be relevant to night-shift workers who are exposed to lung irritants at work. "Night-shift work usually occurs during the midnight timeframe when the expression of REV-ERBα is lowest," he said. "Our study suggests there is less protection against lung fibrosis generated from REV-ERBα activation at night." When the team induced lung injury in genetically modified mice that express low levels of REV-ERBα, the mice had worse outcomes that appeared to be mediated by increased collagen and lysyl oxidase. After 15 days of infection with influenza A, these mice had greater upregulation of collagen and lysyl oxidase gene expression, worse flu infections, and worse lung injury compared with mice who expressed normal levels of REV-ERBα. Activating REV-ERBα with a drug 14 days after lung injury in mice that express normal levels of REV-ERBα slightly reduced collagen and lysyl oxidase gene expression and improved lung health in the mice, though not significantly. When tested in cell cultures, the REV-ERBα-activating drugs had an anti-fibrotic effect. "Currently, there are only two drugs approved by the FDA to treat fibrosis, and they only delay the process, they don't cure the disease," said study author Qixin Wang, Ph.D., a postdoctoral fellow working in Rahman's lab. "REV-ERBα-activating drugs could serve as potential therapeutics to help prevent fibrosis and stop the disease process." But, he adds, a better REV-ERBα drug or a more direct way to deliver the drug is needed.
In their studies, mice treated with the REV-ERBα-activating drug SR9009 lost more weight and had lower survival than untreated mice. While further research is needed, Rahman and Wang believe their findings open new possibilities for developing treatments for all sorts of fibrotic diseases—especially those with a circadian component, like nighttime alcohol consumption causing liver fibrosis.

A new study by University of Rochester Medical Center researchers has uncovered a link between the body's biological clock and lung scarring, a serious condition that causes difficulty breathing. The study found that a lack of the circadian rhythm protein REV-ERBα contributes to lung scarring in mice by increasing production of collagen and lysyl oxidase, which stabilize connective tissue and make it more rigid. The researchers also found that inducing lung injury at night, when REV-ERBα levels are lowest, led to more extensive lung damage and lower survival rates in mice. The study suggests that REV-ERBα-activating drugs could serve as potential therapeutics to help prevent fibrosis and stop the disease process, and could have implications for developing treatments for all sorts of fibrotic diseases, including those with a circadian component.

Abstract
Molecular clock REV-ERBα is central to regulating lung injuries, and decreased REV-ERBα abundance mediates sensitivity to pro-fibrotic insults and exacerbates fibrotic progression. In this study, we determine the role of REV-ERBα in fibrogenesis induced by bleomycin and influenza A virus (IAV). Bleomycin exposure decreases the abundance of REV-ERBα, and mice dosed with bleomycin at night display exacerbated lung fibrogenesis. Rev-erbα agonist (SR9009) treatment prevents bleomycin induced collagen overexpression in mice. Rev-erbα global heterozygous (Rev-erbα Het) mice infected with IAV showed augmented levels of collagens and lysyl oxidases compared with WT-infected mice. Furthermore, Rev-erbα agonist (GSK4112) prevents collagen and lysyl oxidase overexpression induced by TGFβ in human lung fibroblasts, whereas the Rev-erbα antagonist exacerbates it. Overall, these results indicate that loss of REV-ERBα exacerbates the fibrotic responses by promoting collagen and lysyl oxidase expression, whereas Rev-erbα agonist treatment prevents it. This study demonstrates the potential of Rev-erbα agonists in the treatment of pulmonary fibrosis.

Introduction
Idiopathic pulmonary fibrosis (IPF) is a chronic interstitial lung disease characterized by progressive lung scar tissue formation that is typically accompanied by impaired lung function and difficulty breathing.1 The onset of pulmonary fibrosis is usually initiated by the dysregulation of tissue repair mechanisms, which can be induced by various causes, such as air pollution (asbestos), antineoplastic drugs, and respiratory viral infections such as influenza A virus (IAV) and even coronavirus (SARS-CoV-2) infection.2 3 In previous decades, rigorous basic studies have improved our understanding of pro-fibrotic pathogenesis and developed many candidates for anti-fibrotic therapy. However, there are no effective therapeutics for IPF, and the detailed molecular mechanism of fibrogenesis is still poorly understood.4 5 6 Currently, nintedanib and pirfenidone are the only Food and Drug Administration (FDA)-approved drugs for the treatment of pulmonary fibrosis, and they only serve to slow the progression of pulmonary fibrosis.7
Investigating new molecular pathways involved in fibrogenic responses is urgently needed, and Rev-erbα has become a promising candidate.8 9 REV-ERBα is a transcriptional repressor that regulates mRNA transcription involved in circadian rhythms, metabolism, and inflammatory responses.10 11 12 13 Oscillations in circadian rhythm are controlled by the competition of two nuclear receptors, REV-ERBα and retinoic acid-like orphan receptor alpha (RORα).14 REV-ERBα inhibits the transcription and translation of circadian locomotor output cycles kaput (CLOCK)/brain and muscle ARNT-like 1 (BMAL1, also known as ARNTL), which form a heterodimer that binds to the E-box and promotes the transcription/translation of either core clock molecules or downstream targets.15 In regulating BMAL1 and CLOCK expression, RORα competes with REV-ERBα to bind ROR response elements (ROREs) and activate the transcription of BMAL1 and CLOCK,15 forming an auto-feedback system with REV-ERBα and providing stability and precision to molecular clock regulation. Interestingly, the downstream gene targets of the E-box include various fibrotic markers such as α-smooth muscle actin (αSMA) and vimentin (VIM).16 Moreover, the removal of REV-ERBα has been associated with increased risks of lung inflammation and premature senescence, which has been confirmed by our and others' previous studies.17 18 19 Circadian clock molecules are identified as essential mediators of pulmonary injuries with various causes, such as cigarette smoke (CS) and IAV.20 21 22 23 Previous studies have described the importance of circadian molecules in key cell subtypes, including club cells, alveolar macrophages, and fibroblasts, in the lung microenvironment in response to injury and inflammatory mediators.8 19 24 25 Previous findings showed that CS exposure and IAV infection-induced lung injuries are associated with disruption of the circadian clock and impaired lung function, survival rate, and daily ambulatory activity.26 27 Various studies to date demonstrate the fundamental interactions of core clock molecules, such as REV-ERBα or BMAL1, with lung inflammatory responses and the development of chronic obstructive pulmonary disease (COPD) caused by CS exposure.23 Currently, only one study has shown that REV-ERBα deficiency in lung fibroblasts exaggerates bleomycin-induced lung fibrogenesis.8 However, the mechanism and role of REV-ERBα in lung fibrogenesis via collagen synthesis and its regulation during IAV infection are not known. Stabilization of collagen fibers is regulated by lysyl oxidase, a copper-dependent amine oxidase, via crosslinking of the extracellular matrix proteins (collagen and elastin), thereby preventing collagen degradation.28 Our previous study identified the potential of REV-ERBα in regulating epithelial-mesenchymal transition (EMT) and fibroblast differentiation induced by CS and TGFβ.27 We therefore hypothesize that REV-ERBα is important in regulating fibrotic progression in the lungs by targeting collagen synthesis and its stabilization pathways. Here we show that the abundance of REV-ERBα is decreased during fibrogenesis, and that loss of REV-ERBα augments the fibrotic responses caused by IAV infection. Furthermore, enhanced REV-ERBα activity/abundance reduces abnormal collagen accumulation by inhibiting the expression of lysyl oxidases during myofibroblast differentiation.
Results
Dysregulated protein abundance of REV-ERBα, COL1A1, and LOX was observed in IPF patients compared with healthy controls
It is well known that excessive extracellular matrix (ECM) protein production occurs during fibrosis and is deposited within the lesion areas. Human lung sections with verified pathology were purchased from Origene Inc. All the healthy controls were within normal limits, with 100% normal area and at least 80% alveoli area, while the IPF samples were composed of at least 40% lesion area (Supplementary Table 1). As shown in Fig. 1a, we observed high expression of type 1 collagen (COL1A1) over the injured tissue area and elevated lysyl oxidase (LOX) protein in IPF patients compared with healthy controls. Both COL1A1 and LOX were highly expressed in the ECM of the lesion tissues in IPF samples, whereas limited COL1A1 and LOX were expressed in healthy controls. Consistent with previous data, we observed diminished protein abundance and distribution of REV-ERBα in the fibrotic lesions from IPF samples, whereas REV-ERBα was highly expressed in the nuclei of healthy controls, with limited protein abundance observed in the cytoplasmic area. Similar results were observed between the healthy and lesion areas within IPF samples as well (Fig. 1b). A decreasing trend of REV-ERBα was found in the lesion area compared to the healthy sections, and upregulation of COL1A1 and LOX was found in the lesion area compared to the healthy area. Compared to the control groups, the protein abundance of REV-ERBα was decreased in the healthy area (Control: 21.294% vs. healthy area from IPF: 11.296%) from IPF samples, and slightly increased protein levels of COL1A1 (Control: 37.074% vs. healthy area from IPF: 52.604%) and LOX (Control: 30.439% vs. healthy area from IPF: 40.836%) were observed in the healthy areas from IPF samples compared to the control group (Fig. 1). A previous study identified that REV-ERBα is fundamental in IPF progression,8 and we therefore determined how Rev-erbα affects the development of pulmonary fibrosis.

Fig. 1: Decreased REV-ERBα protein abundance and increased protein levels of COL1A1 and LOX in IPF lungs compared to healthy control. Healthy control and IPF formalin fixed-paraffin embedded (FFPE) lung samples were purchased from Origene Inc. Healthy controls contained 100% normal lung architecture with 85% alveoli surface area. IPF patient samples contained at least 50% lesion surface area. The protein abundance of REV-ERBα, COL1A1, and LOX was visualized and determined by IHC. a The comparisons of protein distribution and abundance were performed between healthy control and IPF patient (n = 10 per group), b or between the healthy area and lesion area from the same IPF patient. The images were taken, and the positive stained area was calculated by ImageJ (n = 5 per group). Data are shown as mean ± SEM; unpaired t-test was used for a and b. (* p < 0.05, *** p < 0.001; scale bar: 50 μm).

Circadian clock genes, including REV-ERBα, were dysregulated in bleomycin-induced fibrosis
To understand the expression of Rev-erbα and related circadian genes in the in vivo model of fibrosis, we treated C57BL/6J wild-type (WT) mice with bleomycin (1.5 units/kg) to induce fibrosis and determined the gene expression of circadian and fibrosis-related genes. According to previous studies, most of the fibrotic markers were dysregulated significantly at day 14, and there was no significant difference among days 14, 21, and 28 post-injury.29 30 31
Another report also described that variable outcomes appeared after day 21 post-injury, with some even recovering to baseline.32 Hence, we selected day 14 post-injury as our end point for bleomycin-induced lung injury. After 14 days of bleomycin-induced lung injury, we found decreased gene expression of REV-ERBα (gene symbol: NR1D1), REV-ERBβ (gene symbol: NR1D2), RORα (gene symbol: RORA), CLOCK, CRY1/2, PER1/2/3, and DBP (Fig. 2a, b and Supplementary Fig. 1). There was no change in gene expression of BMAL1 (gene symbol: ARNTL) or in the NFIL3 transcript level (Supplementary Fig. 1). As expected, gene expression of fibrotic markers, such as COL1A1, COL1A2, COL3A1, COL5A2, TGFB1, TGFB2, VIM1, FN1, and MMP2, was increased at 14 days post bleomycin injury (Fig. 2a, b and Supplementary Fig. 1). Decreased gene expression levels of OCLN, TJP1, TJP3, and CDH1 were also observed, while SMAD2 and TJP2 showed no significant changes in the bleomycin group compared with PBS control (Fig. 2a, b and Supplementary Fig. 1). Similarly, we also observed decreased protein expression of REV-ERBα in the bleomycin-treated group, as well as increased protein levels of LOX (total and activated) and COL1A1 (Fig. 2c).

Fig. 2: Altered circadian and profibrotic mRNA and protein expression was observed in bleomycin-induced fibrotic responses. Lungs from C57BL/6J WT mice (combined male and female (n = 2–3 each) for analysis) dosed with bleomycin at day 14 were snap-frozen and used for RNA isolation. RNA isolated from lung homogenates was used to identify the circadian and profibrotic related gene expression with our customized NanoString panel through the nCounter SPRINT Profiler. The transcript levels of RNA targets (normalized count) were normalized and visualized by nSolver software. a Dysregulated genes are shown as a heatmap with circadian genes on top and profibrotic genes on the bottom. b Selected gene expressions are shown as a bar graph (n = 6 mice per group). c Proteins isolated from lung homogenates were used to detect the abundance of REV-ERBα, LOX, activated LOX, and COL1A1. Representative blots are shown here, and protein expression fold change was calculated based on normalization to β-ACTIN (n = 4–6 mice per group). Data are shown as mean ± SEM; unpaired two-sided t-test was used for b and c. (* p < 0.05, ** p < 0.01, *** p < 0.001 vs. PBS).

Exacerbated fibrotic progression and lung injury induced by bleomycin dosed at night
Since REV-ERBα expression occurs in circadian oscillation, the expression of REV-ERBα starts to increase from 6 a.m. (ZT0) and starts to decrease at 6 p.m. (ZT12).19 Thus, we dosed the mice at the beginning of the day (lights on) and night (lights off) cycles to determine whether the oscillation of REV-ERBα expression affects the fibrotic progression induced by bleomycin injury (Fig. 3). Interestingly, we found that mice treated with bleomycin at 7 p.m. exhibited exacerbated body weight loss compared to those dosed at 7 a.m. from days 11–14 post-injury (Fig. 3a). In addition, mice dosed at 7 a.m. had a 100% survival rate, whereas only 75% of the mice dosed at 7 p.m. survived (Fig. 3a). We also noticed that mice dosed with bleomycin at both 7 a.m. and 7 p.m. showed dramatic lung injury, and mice dosed at 7 p.m. showed a larger injury area in lung sections compared to mice dosed at 7 a.m. (Fig. 3a and Supplementary Fig. 2a).
Fig. 3: The health status of mice, and circadian and fibrotic gene and protein expression, were affected by bleomycin injury at different time points (7 a.m. vs. 7 p.m.). C57BL/6J WT female mice were used for testing. a The body weights and survival rate were monitored until day 14 post-injury (n = 3–5 mice per group, * p < 0.05, ** p < 0.01 vs. bleomycin 7 a.m. group). Lungs were harvested, and H&E staining was performed to identify the injured area percentage. b RNA was isolated from lung homogenates, and gene expression analysis was conducted using a customized NanoString panel through the nCounter SPRINT Profiler; transcript levels were normalized and visualized by nSolver software. The dysregulated genes are shown as a heatmap with circadian and profibrotic genes. Selected gene expressions are shown as bar graphs (n = 3 mice per group). c Proteins isolated from lung homogenates were tested via western blot (REV-ERBα, LOX, activated LOX, LOXL2, and COL1A1); representative blots are shown and fold change was normalized to β-ACTIN (n = 4–5 mice per group). d Lung sections were used for IHC, and the abundance and localization of COL1A1 and LOX were detected (n = 3–4 mice per group). Data are shown as mean ± SEM; two-way ANOVA followed by Tukey's multiple comparisons test was performed in a (body weight change (%)) and one-way ANOVA followed by Šídák's multiple comparisons test was used in a (injured area (%)) and b–d. Bar size: 1000 µm in a, and 25 µm in d. (* p < 0.05, ** p < 0.01, *** p < 0.001 between groups; ## p < 0.01 vs. Bleo 7 a.m. group; &&& p < 0.001 vs. PBS 7 a.m. group).

We also investigated the gene expression from mouse lungs dosed with bleomycin at different times of day (Fig. 3b). Interestingly, the gene expression of REV-ERBα/β (NR1D1/NR1D2) was decreased after bleomycin injury during the night (dusk), whereas no significant difference was observed when dosed during the day (dawn) (Fig. 3b). The other circadian genes inhibited by REV-ERBα/β, such as BMAL1 (ARNTL) and CLOCK, were differentially decreased during the day and showed no change during the night (Supplementary Fig. 2b, c). The REV-ERBα/β competitor RORA showed decreased expression levels after bleomycin injury in either daytime or nighttime. We also observed decreased gene expression levels of PER1/2 and CRY1/2 whether bleomycin was dosed during the daytime or nighttime (Supplementary Fig. 2b, c). The gene expression of fibrotic markers, such as COL1A1, COL5A2, FN1, and SERPINE1, was significantly upregulated when dosed at night (Fig. 3b and Supplementary Fig. 2b, c). Bleomycin injury increased VIM regardless of the time of dosing (Fig. 3b). The gene expression of COL1A2 and COL3A1 was upregulated after bleomycin dosing, with no time of day difference between nighttime and daytime (Supplementary Fig. 2b, c). The tight junction genes responsible for cell-cell interaction, TJP1 and TJP3, showed significant downregulation when bleomycin was dosed at both day and nighttime, and TJP3 showed a further decrease during nighttime dosing (Supplementary Fig. 2b, c). To further identify the expression levels of target genes, we detected protein expression by western blot and IHC (Fig. 3c, d). Similarly, higher protein expression of REV-ERBα was observed in the PBS 7 p.m. group compared to the PBS group at 7 a.m., and bleomycin injury significantly downregulated the protein abundance of REV-ERBα, whereas no changes were observed in the 7 a.m. groups (PBS vs. Bleo) (Fig. 3c).
We also observed an increasing trend in the protein level of total LOX after dosing bleomycin at either 7 a.m. or 7 p.m., while activated LOX only showed an increasing trend when bleomycin was dosed at 7 p.m. (Fig. 3c). Significantly increased protein abundances of LOXL2 and COL1A1 were observed in mice dosed with bleomycin at 7 p.m., with only a non-significant increase when dosed at 7 a.m. (Fig. 3c). In addition to western blotting, we also detected the protein abundance and localization of LOX and COL1A1 via IHC (Fig. 3d). Bleomycin dosed at 7 p.m. resulted in higher protein levels of COL1A1 and LOX, especially in the injured area, compared to mice dosed with bleomycin at 7 a.m. (Fig. 3d).

Rev-erbα agonist attenuated the collagen overexpression during bleomycin-induced fibrogenesis
Since decreased abundance of REV-ERBα was noticed after bleomycin injury, we treated mice with a Rev-erbα agonist (SR9009, 100 mg/kg, intraperitoneally (i.p.)) for 14 days to determine its protective potential against fibrotic progression (Fig. 4 and Table 1). During the 14 days post bleomycin injury, there was a significant reduction in body weight starting from day 1, and there was no significant difference between the bleomycin and bleomycin + SR9009 groups (Fig. 4a). Surprisingly, only a 60% survival rate was observed in mice that received bleomycin + SR9009, while there were no deaths in the bleomycin-treated group (Fig. 4a). From the H&E-stained sections, we identified that bleomycin induced significant lung injury, and SR9009 treatment helped alleviate the injury but without significant difference (Fig. 4b). Since we were interested in how REV-ERBα is involved in pro-fibrotic progression, we measured the gene and protein expression of fibrotic markers in the lungs (Fig. 4c–e and Supplementary Fig. 3). Although the ACTA2 gene level was not significantly increased after bleomycin injury, the bleomycin + SR9009 group showed significantly reduced expression of the ACTA2 gene (Fig. 4c). We noticed significant upregulation of collagens (COL1A1, COL1A2, COL3A1, COL4A1, COL4A2, COL5A1, and COL5A3); SR9009 treatment helped to reduce the levels of COL1A1, COL1A2, and COL5A1 without significant difference, while the gene level of COL4A1 was significantly downregulated after SR9009 treatment (Fig. 4c). Gene expression of lysyl oxidases (LOX, LOXL1, LOXL2, and LOXL4) was significantly increased after bleomycin injury, but SR9009 treatment did not help reduce their abundance (Fig. 4c and Supplementary Fig. 3). Other ECM proteins, such as ELN and FN1, were upregulated after bleomycin injury, and SR9009 treatment helped to lower the transcript levels but without a significant difference (Fig. 4c and Supplementary Fig. 3). As potential regulators of ECM remodeling and dysregulated repair, TGFB1, TGFBR1, and TGFBR2 showed upregulated gene levels after bleomycin injury, but no difference was observed between the bleomycin and bleomycin + SR9009 treatment groups (Fig. 4c and Supplementary Fig. 3). Based on the gene expression results, we performed a pathway analysis. Most significantly, ECM degradation, collagen biosynthesis and modification, and ECM synthesis were activated after bleomycin injury and slightly inhibited by SR9009 treatment (Table 1). Hence, we focused on how protein levels of collagen were affected.

Fig. 4: Rev-erbα agonist (SR9009) treatment helped to reduce the collagen overexpression that occurred in bleomycin-induced lung fibrosis.
C57BL/6J WT mice (equal numbers of male and female mice) were dosed with bleomycin for 14 days, and SR9009 was given via i.p. injection at a dose of 100 mg/kg daily. a Body weights and survival rate were monitored until day 14 post-injury (n = 8–12 mice per group). b Lungs were harvested, and H&E staining was performed to identify the injured area percentage (n = 8 mice per group). c RNA was isolated, and gene expression analysis was conducted using the nCounter Fibrosis panel on the nCounter SPRINT Profiler; transcript levels were normalized and visualized with nSolver. The dysregulated genes involved in collagen dynamics and ECM remodeling are shown as a heatmap, and selected gene expressions are shown as bar graphs (n = 8 mice per group). d Proteins isolated from lung homogenates were detected via western blot (COL1A1, COL4A1, LOXL2, and activated LOX); representative blots are shown, and fold changes were normalized to β-ACTIN (n = 8 mice per group). e Lung sections were stained for COL1A1 and COL4A1 via IHC, and abundance and localization were determined with ImageJ (n = 8 mice per group). Data are shown as mean ± SEM; multiple unpaired t-tests were used for a, and one-way ANOVA followed by Šídák's multiple comparisons test was used in b–d. Bar size: 1000 µm in b and e, ×4 magnification, and 50 µm in e, ×20 magnification. (* p < 0.05, ** p < 0.01, *** p < 0.001 vs. PBS group; # p < 0.05, ### p < 0.001 vs. Bleo group).

Table 1 Dysregulated pathways after bleomycin injury with or without SR9009 in C57 mice

Since we observed decreased transcript levels of multiple collagens in the bleomycin + SR9009 group compared to the bleomycin group, we also tested the protein abundances of COL1A1, COL4A1, LOX, and LOXL2 (Fig. 4d, e). Consistently, we observed increased protein levels of COL1A1 and COL4A1 after bleomycin injury, with non-significant decreasing trends after SR9009 injection (Fig. 4d, e). Interestingly, we noticed a decreasing trend of activated LOX and a significantly decreased level of LOXL2 in the bleomycin + SR9009 group compared to the bleomycin group (Fig. 4d). From IHC staining, we observed overexpressed COL1A1 and COL4A1 protein in the injured sections from both the bleomycin and bleomycin + SR9009 groups. However, the positive distribution of COL1A1 and COL4A1 was reduced by SR9009 injection, and the abundance of both collagens was slightly decreased in the bleomycin + SR9009 group compared to the bleomycin group (Fig. 4e). There was no significant sex-dependent difference between the bleomycin and bleomycin + SR9009 groups; hence, we combined male and female mice for further analysis.

Rev-erbα deficiency exaggerated IAV-induced lung injury

To directly examine the role of Rev-erbα in pulmonary fibrogenesis and lung injury, we infected WT and Rev-erbα Het mice with IAV (10³ PFU) for 15 days to induce lung injury and fibrotic responses (Fig. 5). At 6–10 days post infection (p.i.), we found that IAV-induced weight loss was exacerbated in Rev-erbα Het mice compared with WT (Fig. 5a). We also monitored locomotor activity after IAV infection and observed reduced ambulatory counts during the nighttime at 5–9 days p.i. Locomotor activity showed no change during the daytime, and there was no significant difference between WT and Rev-erbα Het mice (Supplementary Fig. 4). After 15 days p.i., we collected serum to detect IAV-specific antibody (IgG2a and IgA) levels.
Both WT and Rev-erbα Het mice infected with IAV showed detectable levels of IgG2a and IgA in serum, and Rev-erbα Het mice showed higher levels of IgG2a and IgA compared to WT mice (Fig. 5a), most likely reflecting a higher level of infection. We also examined viral replication at 2 and 4 days p.i. There was a significant increase in viral titer at 4 days p.i. compared to 2 days p.i., but no significant difference in viral harboring and replication in the lungs between WT and Rev-erbα Het mice (Supplementary Fig. 5).

Fig. 5: IAV-induced lung injury and profibrotic responses were exaggerated in Rev-erbα Het mice compared to WT mice. WT and Rev-erbα Het mice were infected (10³ PFU/mouse) with IAV or given PBS control for 15 days. a Body weights were monitored during infection, and virus-specific antibodies in serum were detected by ELISA (n = 5–19 mice per group, * p < 0.05, ** p < 0.01, *** p < 0.001 vs. IAV-infected WT mice). b At sacrifice, lung mechanics (resistance, compliance, and elastance) were measured (n = 3–4 mice per group). c H&E-stained lung sections were used to analyze the injured area induced by IAV infection. Regions within the black squares are shown at ×20 magnification (n = 4–6 mice per group). Data are shown as mean ± SEM; two-way ANOVA followed by Tukey's multiple comparisons test was performed in a (body weight change (%)), multiple unpaired t-tests were used for a (virus-specific antibody titers), one-way ANOVA followed by Šídák's multiple comparisons test was used in b, c, and unpaired two-sided t-tests were used in b (resistance, IAV-WT vs. IAV Rev-erbα Het; elastance, PBS-WT vs. IAV-WT; compliance, PBS-WT vs. IAV-WT and IAV-WT vs. IAV Rev-erbα Het). Bar size: 1000 µm in c (×4 magnification), and 50 µm in c (×20 magnification) (* p < 0.05, ** p < 0.01, *** p < 0.001 between groups; # p < 0.05, ## p < 0.01 vs. IAV-infected WT mice).

Furthermore, we determined lung mechanical properties (airway resistance, elastance, and compliance). We observed increased resistance and elastance, as well as decreased compliance, after IAV infection compared with PBS control in both WT and Rev-erbα Het mice. Intriguingly, Rev-erbα Het mice exhibited increased resistance and elastance and decreased compliance compared with WT mice in response to IAV infection (Fig. 5b). The H&E-stained lung sections showed that IAV infection induced dramatic lung injury with scarring progression in the alveoli of both WT and Rev-erbα Het mice. More importantly, larger injured areas were observed in IAV-infected Rev-erbα Het mice compared with IAV-infected WT mice (Fig. 5c).

Rev-erbα deficiency aggravated dysregulated gene expression during IAV-induced fibrogenesis

After sacrificing the mice at 15 days p.i., we collected lung tissues for RNA expression analysis (Figs. 6 and 7). At the gene transcript level, most of the genes significantly dysregulated by Rev-erbα knockdown alone were downregulated. A substantial number of genes were dysregulated by IAV infection in both WT and Rev-erbα Het mice. Intriguingly, IAV infection in Rev-erbα Het mice led to upregulation of most of the significantly dysregulated gene transcripts compared to IAV-infected WT mice, which suggests that these alterations were not due to genotype differences alone (for which most dysregulated genes were decreased), but that Rev-erbα shapes specific gene expression during IAV-induced lung injury (Fig. 6a).
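The genotype-by-infection comparisons above (and the Venn diagram analysis in the next paragraph) reduce to set operations over filtered gene lists. As a minimal sketch of that bookkeeping, assuming hypothetical per-comparison tables of fold changes and p values (the column names, example genes, and the `deg_set` helper are invented for illustration and are not the actual ROSALIND output format):

```python
# Sketch of the dysregulated-gene bookkeeping described in the text: a gene
# counts as dysregulated at a >=10% change (up or down) with p < 0.05.
import pandas as pd

def deg_set(df: pd.DataFrame, min_change: float = 0.10, alpha: float = 0.05) -> set:
    """Return the genes passing the fold-change and p-value cutoffs."""
    hit = (df["fold_change"].sub(1).abs() >= min_change) & (df["p_value"] < alpha)
    return set(df.loc[hit, "gene"])

# Hypothetical IAV-vs-PBS results for each genotype (values are placeholders)
wt = pd.DataFrame({"gene": ["Col1a1", "Lox", "Tjp1"],
                   "fold_change": [1.35, 1.05, 0.80],
                   "p_value": [0.010, 0.400, 0.030]})
het = pd.DataFrame({"gene": ["Col1a1", "Lox", "Tjp1"],
                    "fold_change": [1.60, 1.30, 0.75],
                    "p_value": [0.004, 0.020, 0.010]})

wt_deg, het_deg = deg_set(wt), deg_set(het)

# Venn-style partition: shared vs. genotype-specific responses to IAV
print("dysregulated in both genotypes:", sorted(wt_deg & het_deg))
print("Rev-erba Het only:             ", sorted(het_deg - wt_deg))
print("WT only:                       ", sorted(wt_deg - het_deg))
```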
Following the cutoff filters used for the volcano plots (at least 10% fold change with p < 0.05), we also analyzed the gene clusters via Venn diagram analysis. Compared to WT mice treated with PBS, a total of 67 genes were dysregulated because of the genotype difference alone (vs. the Rev-erbα Het PBS group) (Fig. 6b). In WT mice infected with IAV, 486 genes were significantly altered, and 430 genes were similarly altered in both WT and Rev-erbα Het mice. Intriguingly, 71 genes showed a significant difference in IAV-infected Rev-erbα Het mice with no change in WT mice, and 56 genes showed a significant difference in WT mice with no change in IAV-infected Rev-erbα Het mice (Fig. 6b). Comparing IAV vs. PBS within the same genotype (WT IAV vs. WT PBS, and Rev-erbα Het IAV vs. Rev-erbα Het PBS), a total of 414 genes were commonly dysregulated upon IAV infection (Fig. 6c). Specifically, 121 genes were significantly dysregulated only in IAV-infected Rev-erbα Het mice, and 72 genes showed a significant difference only in the WT groups (PBS vs. IAV). The detailed gene lists corresponding to each comparison are provided in Supplementary Data 1. Based on the dysregulated gene lists, we identified pathways modified by IAV and Rev-erbα, including collagen dynamics, EMT, TGFβ signaling, myofibroblast regulation, and M1/M2 macrophage activation. These pathways were upregulated after IAV infection and further exacerbated when Rev-erbα was diminished (Table 2). Additionally, collagen biosynthesis and modification, ECM degradation, and ECM synthesis were among the most upregulated pathways in IAV-infected Rev-erbα Het mice compared to IAV-infected WT mice (Table 2). Hence, we focused our study on the alteration of specific genes/proteins related to collagen biosynthesis, modification, and degradation.

Fig. 6: IAV infection-induced dysregulation of profibrotic gene expression was exacerbated in Rev-erbα Het mice. WT and Rev-erbα Het mice (equal numbers of male and female mice) were dosed with IAV (10³ PFU) for 15 days, and lungs were homogenized for RNA isolation. Gene expression analysis was conducted using the nCounter Fibrosis Panel on the nCounter SPRINT Profiler. RNA expression was normalized and analyzed via nSolver software and the ROSALIND service. a Dysregulated gene expression between groups is shown as volcano plots; the cutoff filter is at least a 10% change (up- or downregulation) and p < 0.05. b, c Overlapping gene expression changes among groups are shown by Venn diagrams with the same cutoff used for the volcano plots. d An overview of gene expression related to collagen dynamics is shown as a heatmap, and selected gene transcript levels (collagens and lysyl oxidases) are shown as separate bar graphs. Data are shown as mean ± SEM; one-way ANOVA followed by Šídák's multiple comparisons test was used in d, and unpaired two-sided t-tests were used in d (COL1A1 PBS-WT vs. IAV-WT and COL3A1 PBS-WT vs. IAV-WT). (n = 6 mice per group; * p < 0.05, ** p < 0.01, *** p < 0.001 between groups; ## p < 0.01 compared with the IAV-infected WT group).

Fig. 7: IAV infection-induced dysregulation of profibrotic progression was exacerbated in Rev-erbα Het mice. WT and Rev-erbα Het mice (equal numbers of male and female mice) were infected (10³ PFU/mouse) with IAV for 15 days, and lungs were divided for RNA/protein isolation or fixed with 10% formalin for FFPE sections. a The protein abundances of COL1A2, VIM, and activated LOX were measured by western blot. Representative blot images are shown.
Different targets were run on the same membrane: COL1A2, VIM, and activated LOX were probed on the same membrane, and β-ACTIN was used as an endogenous control (n = 5–6 mice per group). b The localization of COL1A1 and LOX was determined by immunohistochemical staining, and red arrows indicate the regions of interest. The positively stained area was calculated via ImageJ (n = 4–6 mice per group). c RNA isolated from lung homogenates was used to measure gene expression (COL1A1, FN1, TJP1, and TGFB1) via qRT-PCR, with GAPDH as the endogenous gene for normalization (n = 5–6 mice per group). Data are shown as mean ± SEM; one-way ANOVA followed by Šídák's multiple comparisons test was used in a–c. Bar size: 50 µm in b. (n = 4–6; * p < 0.05, ** p < 0.01, *** p < 0.001 between groups; # p < 0.05, ## p < 0.01 compared with the IAV-infected WT group).

Table 2 Dysregulated pathways after IAV infection in both WT and Rev-erbα Het mice

Absence of Rev-erbα exacerbated activated collagen stabilization and modification during IAV-induced fibrogenesis

To further determine the role of Rev-erbα in collagen dynamics, we measured the expression of genes related to collagen modification, ECM markers, matrix metalloproteinases (MMPs), and TGFβ pathways (Fig. 6d and Supplementary Fig. 6). We noticed that collagens were significantly upregulated at the gene expression level after IAV infection in both WT and Rev-erbα Het mice. In particular, COL1A1, COL1A2, COL3A1, and COL5A1 were significantly increased in IAV-infected Rev-erbα Het mice compared with the WT IAV group (Fig. 6d and Supplementary Fig. 6a). Interestingly, we observed decreased COL14A1 and COL16A1 in IAV-infected mice, with no difference between the WT IAV and Rev-erbα Het IAV groups (Fig. 6d). We also noticed that lysyl oxidases (LOX, LOXL1, and LOXL2) were upregulated only in IAV-infected Rev-erbα Het mice compared to PBS-treated Rev-erbα Het mice, with no significant difference between IAV-infected and PBS-treated WT mice (Fig. 6d). In addition, we noticed upregulation of other ECM-related genes, such as FN1, ELN, VIM, ITGA4, ITGA9, and HSPG2, whereas genes responsible for focal adhesion, such as LAMA3 and OCLN, were downregulated after IAV infection in either genotype, with no significant difference between IAV-infected WT and Rev-erbα Het mice (Fig. 6d and Supplementary Fig. 6a). One of the key genetic pathways activated during fibrotic progression is the TGFβ pathway, and we found increased activation of TGFβ signaling following IAV infection, reflected by increased TGFB1, TGFB1I1, TGFBR1, TGFBR2, SMAD2, SMAD3, and SMAD4. However, there was no significant difference between the IAV WT and IAV Rev-erbα Het groups (Supplementary Fig. 6b). Since we observed increased collagen abundance, we also determined the expression of the related collagenases, MMPs (Supplementary Fig. 6c). Increased gene expression of MMP2, MMP12, and MMP14 was observed only in IAV-infected Rev-erbα Het mice compared with PBS-treated mice of the same genotype. The gene transcript levels of MMP2 and MMP14 showed a significant increase in IAV-infected Rev-erbα Het mice compared to IAV-infected WT mice (Supplementary Fig. 6c). Other MMPs, such as MMP9, MMP8, and MMP3, showed decreased gene transcript levels upon IAV infection, with no difference between the two genotypes.
Among the inhibitors of MMPs, the TIMPs, TIMP2 was increased in the Rev-erbα Het IAV group compared to the Rev-erbα Het PBS group (Supplementary Fig. 6c).

Lack of Rev-erbα augments collagen overexpression during IAV-induced fibrogenesis

Since we observed that type 1 collagen and lysyl oxidases were upregulated at the gene transcript level, we also tested protein abundance and localization (Fig. 7). Overall, in lung homogenates, we found an increasing trend of type 1 collagen (COL1A2) without statistical significance. Meanwhile, we noticed a significant increase in LOX and VIM in IAV-infected Rev-erbα Het mice compared to either the Rev-erbα Het PBS group or IAV-infected WT mice (Fig. 7a). Further, we examined the protein abundance and localization of COL1A1 and LOX by IHC staining (Fig. 7b). The distribution of COL1A1 in the PBS-treated groups, in either WT or Rev-erbα Het mice, was around the small airways and bronchi. Upon IAV infection, COL1A1 was augmented in the injured area, mainly around the alveoli; no COL1A1 was observed in the alveoli of the PBS-treated groups (Fig. 7b). For LOX, relatively low protein levels were observed in IAV-infected WT mice, whereas LOX abundance was increased in IAV-infected Rev-erbα Het mice, primarily localized to the injured area (Fig. 7b). Since lysyl oxidase is responsible for collagen stabilization via crosslinking of collagen fibers, the co-localization of LOX and collagen in areas of injury was observed as expected (Fig. 7b, red arrows). We also applied qRT-PCR to detect gene expression fold changes and observed trends similar to the NanoString analysis (Fig. 7c). The gene transcript levels of COL1A1, FN1, and TGFB1 showed an increasing trend after IAV infection, and Rev-erbα knockdown exacerbated the upregulation. The gene expression of TJP1 was decreased in the IAV-infected WT group (Fig. 7c).

Rev-erbα agonist attenuated TGFβ-induced abnormal collagen stabilization and fibrotic responses in lung fibroblasts

To determine the role of Rev-erbα in abnormal collagen modification via lysyl oxidase, we treated primary adult human lung fibroblasts (HLF) and human fetal lung fibroblasts (HFL1) with TGFβ (2 ng/ml) with or without a Rev-erbα agonist (GSK4112, 20 μM) or antagonist (SR8278, 20 μM) for 2 days (Fig. 8 and Supplementary Figs. 7 and 8).

Fig. 8: Rev-erbα agonist inhibits TGFβ-induced fibroblast differentiation, and antagonist exacerbates it. Human primary lung fibroblasts were treated with TGFβ (2 ng/ml) with or without Rev-erbα agonist (GSK4112, 20 µM) or antagonist (SR8278, 20 µM) for 2 days. a Protein was isolated for western blot analysis (αSMA, COL1A1, LOX, and fibronectin (FN)). Representative blots are shown with densitometry analysis (n = 3–4 cells per group). b Immunofluorescence staining showed the distribution and protein abundance of COL1A1 and αSMA; DAPI was used for nuclear staining (×20). Relative fluorescence intensity was calculated in ImageJ as fluorescence intensity per cell (n = 4 cells per group). c RNA was isolated for gene expression measurement via qPCR (ACTA2, COL1A1, COL4A1, FN1, LOX, LOXL1, LOXL2, and NR1D1). GAPDH was used as an endogenous control for RNA and protein fold-change normalization (n = 4 cells per group). Data are shown as mean ± SEM; one-way ANOVA followed by Šídák's multiple comparisons test was used in a–c, and unpaired two-sided t-tests were used in a (COL1A1 Ctrl vs.
TGFβ; LOX TGFβ vs. TGFβ + GSK4112; FN TGFβ vs. TGFβ + SR8278). d Schematic demonstrating how Rev-erbα agonist and antagonist regulate TGFβ-induced ECM deposition in lung fibroblasts; the schematic was created with BioRender.com. Bar size: 50 µm in b. (* p < 0.05, ** p < 0.01, *** p < 0.001 vs. Ctrl group; # p < 0.05, ### p < 0.001 vs. TGFβ group).

We previously found that the Rev-erbα agonist GSK4112 inhibited myofibroblast differentiation induced by TGFβ 27. Here we observed that TGFβ-induced myofibroblast differentiation was inhibited by GSK4112 and exacerbated by SR8278 (Fig. 8). Notably, GSK4112 inhibited the TGFβ-induced overexpression of αSMA and COL1A1 at the protein level (Fig. 8a, b), and the TGFβ-upregulated gene levels of ACTA2, COL1A1, FN1, and LOX were reduced by GSK4112 treatment (Fig. 8c). In addition, SR8278 treatment exacerbated the TGFβ-induced upregulation of COL1A1 and FN at the protein level (Fig. 8a) and augmented the TGFβ-increased transcript levels of COL1A1, COL4A1, FN1, LOX, and LOXL2 (Fig. 8c). Interestingly, we found that both GSK4112 and SR8278 increased the gene level of NR1D1 (Fig. 8c). Based on these results, we concluded that Rev-erbα agonist attenuates TGFβ-induced fibroblast differentiation and collagen overexpression, while Rev-erbα antagonist exacerbates them (Fig. 8d). We also tested our hypothesis in HFL1 (human fetal lung fibroblasts) (Supplementary Figs. 7 and 8). In HFL1, GSK4112 treatment significantly increased the gene transcript level of NR1D1, while SR8278 produced no change (Supplementary Fig. 7). In addition, GSK4112 inhibited TGFβ-induced ACTA2 and slightly decreased the gene expression of COL1A1 and FN1, without a significant difference between the TGFβ + GSK4112 group and TGFβ treatment alone (Supplementary Fig. 7). Intriguingly, we also found that GSK4112 alleviated the upregulated gene expression of lysyl oxidases (LOX, LOXL1, and LOXL2) (Supplementary Fig. 7). We also measured the protein abundance of COL1A1 and LOX. GSK4112 suppressed the TGFβ-induced upregulation of LOX protein. In contrast to the gene expression results, TGFβ-induced COL1A1 protein was significantly inhibited by GSK4112 treatment, and the protein fibers overexpressed upon TGFβ treatment were also significantly repressed by GSK4112 (Supplementary Fig. 8). Treatment with the Rev-erbα antagonist SR8278 showed no significant effects on TGFβ-induced fibroblast differentiation or collagen stabilization in HFL1 (Supplementary Fig. 8).

Rev-erbα agonist and antagonist exacerbated TGFβ-induced epithelial-mesenchymal transition (EMT) in lung epithelium

We also treated primary human small airway epithelial cells (SAEC) and a human bronchial epithelial cell line (BEAS-2B) with TGFβ (2 ng/ml) with or without Rev-erbα agonist (GSK4112, 20 μM) or antagonist (SR8278, 20 μM) for 2 days (Supplementary Figs. 9 and 10). SAEC treated with TGFβ showed an activated EMT tendency, with increased VIM, LOXL2, and COL1A1, as well as decreased CDH1, TJP1, and OCLN. Treatment with GSK4112 or SR8278 exacerbated the dysregulation of both epithelial and mesenchymal markers (Supplementary Fig. 9a). Protein levels of COL1A1 and VIM were upregulated by TGFβ, and this upregulation was prevented by GSK4112 (Supplementary Fig. 9b). Similar to HLF, both agonist and antagonist treatment increased the gene expression of NR1D1.
In BEAS-2B cells, increased gene levels of COL1A1 and FN1 were noticed after TGFβ treatment; GSK4112 attenuated the upregulation, whereas SR8278 exacerbated the gene level of FN1 significantly and that of COL1A1 non-significantly (Supplementary Fig. 10). A significantly increased LOX gene level was observed after GSK4112 or SR8278 treatment compared to the TGFβ group. Both GSK4112 and SR8278 inhibited the TGFβ-induced increase in LOXL1. TGFβ inhibited the gene expression of LOXL2 and ACTA2; SR8278 treatment eliminated this downregulation, whereas GSK4112 had no effect (Supplementary Fig. 10).

Discussion

Pulmonary fibrosis is a lethal chronic lung disease without effective therapeutic options, and the pathogenesis of fibrogenesis remains unclear 6, 33, 34. Recent studies have demonstrated a novel role for the circadian molecular clock in the pathobiology of chronic lung diseases and highlighted the potential for circadian clock-based therapeutics 23, 35. Targeting specific circadian clock genes has shown anti-fibrotic potential in vitro in cells and in vivo in mouse models of lung injury 8, 36, 37, 38. In previous studies, Rev-erbα deficiency exacerbated EMT induced by cigarette smoke (CS) and fibrogenesis induced by bleomycin, and Rev-erbα agonist inhibited fibroblast differentiation induced by TGFβ 8, 27. In this study, we characterized REV-ERBα abundance histologically in human IPF patients as well as in a bleomycin mouse model, and we found decreased REV-ERBα protein abundance, especially in IPF lesion areas and in bleomycin-induced fibrogenesis. Based on our results, the lower protein abundance of REV-ERBα in the relatively healthy portions of IPF lungs could promote the progression of fibrogenesis toward a lesion phenotype. Since the abundance of REV-ERBα oscillates in a circadian manner, mice dosed with bleomycin when REV-ERBα expression naturally starts to decrease (dark phase/nighttime) exhibited higher mortality and exacerbated fibrotic progression compared with those dosed in the daytime. We administered the Rev-erbα agonist SR9009 to mice treated with bleomycin and noticed that SR9009 injection helped ease the collagen overexpression during bleomycin-induced fibrogenesis. We also analyzed whether diminished REV-ERBα exacerbated the fibrotic progression induced by IAV infection. Our results show that Rev-erbα regulated collagen stabilization via lysyl oxidase, and its agonist prevented TGFβ-induced overexpression of collagen.

The circadian clock molecules RORα (a nuclear receptor), REV-ERBα, BMAL1, and CLOCK have been implicated in the crosstalk between inflammation and lung tissue injuries 19, 22, 39, 40. The critical circadian molecules BMAL1 and CLOCK form a heterodimer that binds to the E-box and subsequently promotes the expression of Rev-erbα. Rev-erbα binds to the RORE to repress the expression of BMAL1 and CLOCK, while RORα activates their expression. Both the RORE and the E-box are associated with EMT 41, 42, which is initiated at the early stage of fibrosis. Hence, the regulators acting on the RORE and E-box (RORα, REV-ERBα, BMAL1, and CLOCK) are equally critical in fibrogenesis. In our results, the gene level of BMAL1 (ARNTL) in the bleomycin model depended on the time of dosing: decreased BMAL1 was observed when dosed during the day, while an increasing trend of BMAL1 transcript level was observed when treatment occurred during the nighttime. Similar time-dependent changes also occurred with CLOCK expression.
Upregulated BMAL1 has been identified in fibrotic mouse lungs induced by TGFβ transfection, and BMAL1 silencing helped to inhibit the fibrotic progression induced by TGFβ in the lung epithelium 36. In the same report, TGFβ transfection into mouse lungs decreased the gene level of REV-ERBα, and REV-ERBα was likewise shown to be inhibited during fibrotic progression 36. These published results agree with our data showing decreased REV-ERBα and increased BMAL1 expression in bleomycin-induced fibrosis. Bleomycin-induced downregulation of REV-ERBα together with increased BMAL1 during the night might be one of the reasons for fibrotic progression and exacerbation. Interestingly, the gene expression of CLOCK showed alterations very similar to those of BMAL1. It is known that CLOCK disruption exacerbates fibrotic progression 37, which partially agrees with our night-dosing data showing lower CLOCK expression during the night. In the same study, bleomycin dosing at night produced more collagen deposition in the injured area, which supports our results 37. Another study demonstrated that IAV infection occurring at night caused greater body weight loss, higher mortality, and more severe tissue injury 22. Our data also suggest that strategies targeting BMAL1 or CLOCK may need to consider dosing time, as inhibition or activation of BMAL1 or CLOCK might be time-dependent.

Our results showed decreased REV-ERBα after bleomycin injury during the nighttime. The expression of REV-ERBα starts to decrease naturally during the night (dusk), and bleomycin dosing at this time significantly decreased REV-ERBα levels, which could dampen the basal expression of REV-ERBα and result in worse fibrotic phenotypes and health status. Since Rev-erbα is a key component of the circadian molecular clock with rhythmic expression 21, the naturally declining level of REV-ERBα from around 6 p.m. could provide less protection against bleomycin-induced lung inflammation in mice dosed at ZT13, while the rising oscillation of REV-ERBα during the day could attenuate the inflammatory response to bleomycin dosed at ZT1. As mentioned before, dosing with bleomycin at night exacerbated collagen deposition in the lungs, which agrees with our gene and protein expression results 37. Notably, we observed augmented protein expression of LOX in the injured lung areas when dosed at 7 p.m. compared to the 7 a.m. group, and LOX is responsible for crosslinking collagen fibers to prevent the degradation of collagens 43, 44. To measure REV-ERBα expression in IPF patients, we stained for REV-ERBα in pulmonary fibrotic lesion areas. We observed decreased abundance of REV-ERBα, especially in the lesion areas, while REV-ERBα was fully expressed in the healthy samples. To date, few studies directly report the expression levels of REV-ERBα (NR1D1) in IPF patients or bleomycin-induced fibrosis 45, 46. A single-cell RNA sequencing comparison between IPF patients and healthy controls identified significant downregulation of NR1D1 in ATII cells of IPF patients 45. Our results showed significantly decreased REV-ERBα protein abundance, especially within the injured areas, which partially agrees with that study. Another study reported that REV-ERBα mRNA was decreased in bleomycin-induced lung fibrosis in young mice, as well as in naturally aged mouse lungs 46.
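The dosing-time argument above can be made concrete with a toy cosinor model of a 24-h transcript rhythm. The parameters below (mesor, amplitude, peak at ZT6) are hypothetical placeholders rather than values fitted to REV-ERBα data; the sketch only illustrates why a ZT13 dose lands on the falling limb of the oscillation while a ZT1 dose lands on the rising limb:

```python
# Toy 24-h cosine rhythm; mesor, amplitude, and peak phase are hypothetical
# placeholders, not fitted REV-ERBa parameters. ZT0 = lights on.
import math

def rhythmic_level(zt: float, mesor: float = 1.0,
                   amplitude: float = 0.5, peak_zt: float = 6.0) -> float:
    """Relative expression at Zeitgeber time zt for a cosine rhythm."""
    return mesor + amplitude * math.cos(2 * math.pi * (zt - peak_zt) / 24)

for zt in (1, 13):  # approximate morning vs. evening dosing times
    now, next_hour = rhythmic_level(zt), rhythmic_level(zt + 1)
    trend = "rising" if next_hour > now else "falling"
    print(f"ZT{zt:2d}: relative level {now:.2f} ({trend})")
```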
Our results show decreased REV-ERBα after bleomycin injury. Moreover, our study shows that bleomycin-induced downregulation of REV-ERBα occurred only during the nighttime; the level of REV-ERBα was unchanged when dosing occurred in the daytime. Decreased REV-ERBα has been reported as a cause of exacerbated fibrotic progression in both mouse and human lung fibroblasts 8. Our results and previous publications suggest that REV-ERBα is inhibited during fibrogenesis and that decreased REV-ERBα, whether by transgenic methods or natural circadian oscillation, exacerbates fibrotic progression and worsens lung injury.

We administered the Rev-erbα agonist SR9009 to mice dosed with bleomycin and tested its protective effect against fibrosis. We did not observe any difference in body weight decline between bleomycin alone and bleomycin with SR9009; however, we observed a lower survival rate in mice that received SR9009 after bleomycin injury. Rev-erbα agonists have been shown to increase body weight loss and fat mass loss 12. Moreover, SR9009 has been shown to decrease cell viability and dysregulate cellular metabolism 47. Clinical reports describe that body weight loss and lower body mass index can worsen IPF progression and even lower survival probability 48, 49. SR9009-accelerated body weight loss could therefore be one reason for the higher death rate in the bleomycin + SR9009 group compared to the bleomycin group. However, other side effects of SR9009 might also contribute to the cause of death. More detailed studies should be conducted to understand the molecular mechanisms of the off-target effects of SR9009 during fibrogenesis. Despite these side effects, injection of the agonist helped inhibit collagen content at the gene and protein levels, which agrees with our results from the cell model. Our results and those of others show that SR9009 had specificity in regulating collagen overexpression and helped to prevent fibrogenesis, while its side effects need further investigation before pre-clinical trials.

Previously, we showed that Rev-erbα was associated with fibrotic responses during IAV infection in Rev-erbα Het mice, which led to fibrogenesis. After 15 days p.i., we noticed that IAV-infected Rev-erbα Het mice showed worse health status. Exaggerated upregulation of lung elastance was observed in Rev-erbα Het mice, demonstrating that Rev-erbα deficiency exacerbated fibrotic progression functionally. To support our hypothesis, we measured multiple fibrotic markers, such as type 1/3/5 collagens and lysyl oxidases (LOX, LOXL1, and LOXL2), which were significantly upregulated by IAV only in Rev-erbα Het mice. A previous study showed that Rev-erbα knockdown could exacerbate the fibrotic response by increasing αSMA protein expression 8. Collagens are as important in pulmonary fibrosis as αSMA; both are overexpressed during fibrogenesis and drive irreversible scarring. Our results elaborate on the previous reports and demonstrate that Rev-erbα is essential in regulating αSMA and is correlated with collagen expression. As mentioned before, Rev-erbα starts to decrease naturally during the nighttime, and IAV infection during the night has been associated with worse health outcomes in mice, as well as higher mortality and more severe lung injury compared to daytime infection 22.
Another study reported that dosing bleomycin at night increased collagen deposition compared to dosing during the day 37, which also concurs with our findings here. Our data support these previously published results and provide a possible explanation for why IAV infection at nighttime, when Rev-erbα starts to decrease, induces worse lung injury and higher mortality than daytime infection. Our conclusion raises the possibility that night-shift workers could be more vulnerable to environmental hazards, which could contribute to the development of fibrosis.

To understand the signaling pathways involved with Rev-erbα in IAV infection-induced pulmonary fibrotic responses, we analyzed the directed enrichment scores to determine the related pathways. We noticed exacerbated upregulation of multiple biological processes, such as collagen biosynthesis and modification, ECM degradation and synthesis, M2 macrophage activation, myofibroblast regulation, the TGFβ pathway, and EMT. The most abnormal activation was in collagen synthesis and modification pathways, and we found exaggerated upregulation of lysyl oxidases in Rev-erbα Het mice compared with WT mice infected with IAV. Both gene and protein expression of lysyl oxidases was upregulated in IAV-infected Rev-erbα Het mice, but not in IAV-infected WT mice. Lysyl oxidases are known to stabilize collagen fibers via crosslinking, preventing collagen degradation and promoting tissue scarring 43, 44. Beyond collagen, lysyl oxidases are also responsible for crosslinking elastin, which was further upregulated in IAV-infected Rev-erbα Het mice. Besides collagen stabilization and synthesis, we also determined the expression of the related collagenases (i.e., MMPs). We found exacerbated increases of MMP2, MMP12, and MMP14 in IAV-infected Rev-erbα Het mice. The substrates of MMP2, MMP12, and MMP14 include gelatin, type 1 and 4 collagens, and elastin 50. Upregulated MMPs could be a self-regulating mechanism for digesting the overexpressed ECM. Other MMPs, such as MMP9 and MMP8, which are responsible for digesting gelatin, collagen, and elastin, were downregulated after IAV infection. The balance of MMPs as ECM regulators during fibrotic progression needs more detailed study to understand how MMPs are involved in collagen dynamics, particularly during episodes of fibrogenesis.

Our previous study showed the therapeutic potential of Rev-erbα agonist in preventing EMT induced by CS and fibroblast differentiation induced by TGFβ 27. In this study, we found that Rev-erbα agonist treatment can prevent the abnormal collagen modification induced by TGFβ and inhibits the overexpression of collagen. A previous study showed that Rev-erbα agonist could attenuate fibrotic responses in vivo, ex vivo, and in vitro, as measured by the traditional fibrotic markers ACTA2 and COL1A1 8. Our results further support the role of Rev-erbα in the fibrotic response: Rev-erbα agonist prevents the overexpression of collagens 1 and 4, lysyl oxidase, fibronectin, and αSMA, whereas Rev-erbα antagonist augments it. We found that Rev-erbα agonist treatment significantly suppressed the TGFβ-induced mRNA and protein expression of collagen. Our results further indicate that the involvement of Rev-erbα in fibrotic progression might act through lysyl oxidase, which is known for stabilizing collagen content. It has been shown that SR8278 can promote myogenesis in myoblasts, but it has a very poor half-life of 0.17 h 51, 52.
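To put that half-life in perspective: under first-order elimination, the fraction of drug remaining after time t is (1/2)^(t/t½). A quick check using only the reported 0.17-h half-life (the sampled time points below are arbitrary examples):

```python
# Fraction remaining under first-order elimination with the reported
# SR8278 half-life of 0.17 h; the sampled times are arbitrary examples.
half_life_h = 0.17

def fraction_remaining(t_h: float) -> float:
    return 0.5 ** (t_h / half_life_h)

for t in (0.17, 0.5, 1.0, 2.0):
    print(f"after {t:4.2f} h: {fraction_remaining(t):.4%} remaining")
```

By one hour post-dose, well under 2% of the compound remains, which illustrates why ligands with better pharmacokinetics are needed for in vivo work.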
Similarly, we noticed that the Rev-erbα antagonist SR8278 exacerbates myofibroblast differentiation by augmenting the expression of collagen, lysyl oxidase, and fibronectin. Surprisingly, either Rev-erbα agonist or antagonist exacerbated the EMT induced by TGFβ in SAEC; the cell type-specific role of Rev-erbα in lung cells in vitro needs further investigation.

Our results show decreased Rev-erbα abundance during lung fibrogenesis; loss of Rev-erbα could exacerbate the fibrotic process induced by IAV infection via the collagen-lysyl oxidase interaction, and pharmacological activation of Rev-erbα prevented the overexpression of collagens. Our studies and those of others demonstrate that mice dosed with bleomycin at night show worse fibrotic progression than those dosed during the day, which might be the result of declining Rev-erbα levels 37. Based on these findings, the circadian clock is critically involved in disease development, and night-shift workers could face a higher chance of developing fibrotic disease 22; targeting the clock molecule Rev-erbα might be one potential therapeutic strategy to overcome this risk. The current FDA-approved anti-fibrosis drugs, nintedanib and pirfenidone, do not target the lysyl oxidase-mediated collagen stabilization 53, 54. Our findings suggest that Rev-erbα agonists possess great potential for protecting against fibrogenesis by disrupting collagen fiber stabilization. Currently, one drug, PXS-5505, a pan-lysyl oxidase inhibitor, is in phase 1 clinical trials for myelofibrosis 55. Rev-erbα agonists also hold promise for treating pulmonary fibrosis, but the first-generation agonist GSK4112 has poor pharmaceutical properties, and new Rev-erbα ligands are needed 12, 51. Based on the chemical structure of GSK4112, new agonists have been designed and are available, such as SR9009, SR9011, GSK2945, SR12418, GSK2667, and GSK5072 12. We have shown that daily SR9009 injection can prevent EMT in lungs induced by 10 days of CS exposure 27. Moreover, SR9009 attenuated liver fibrosis in mice and inhibited collagen expression 56. Other agonists, such as SR12418, GSK5072, and GSK2667, have been shown to inhibit inflammatory responses in THP1 cells 57, 58. Although Rev-erbα agonists have disadvantages in in vivo models, such as short half-life and off-target effects 47, 51, numerous reports, including this study, demonstrate the fundamental role of Rev-erbα in lung injury, and Rev-erbα agonists can prevent lung inflammation and injury induced by CS or IAV infection. A detailed study of the anti-fibrotic properties of different Rev-erbα agonists is needed to identify an agonist with pharmaceutical characteristics suitable for in vivo study and, eventually, clinical trials.

Overall, Rev-erbα abundance was decreased during fibrotic progression, and naturally reduced Rev-erbα exacerbated fibrogenesis. Rev-erbα deficiency exaggerated the fibrotic responses and lung injury induced by IAV infection, and Rev-erbα was involved in the activation of collagen stabilization via lysyl oxidase during IAV-induced fibrotic progression. Treatment with Rev-erbα agonist can prevent the induction of collagen-lysyl oxidase interactions and stabilization. Our results support the fundamental role of Rev-erbα in the development of fibrogenesis. Rev-erbα agonists offer promising potential for preventing collagen overexpression and may help break down collagen fibers by inhibiting lysyl oxidase overexpression.
Investigating other circadian clock molecules in fibrogenic progression might help us understand the molecular mechanisms as well as discover novel therapeutic targets for treating pulmonary fibrosis.

Methods

Ethical approval

The experiments performed in this study were approved by the Animal Research Committee of the University of Rochester and the University of Rochester Institutional Biosafety Committee, and followed the ethical standards of the United States Animal Welfare Act and the NIH.

Human lung tissue slides declaration

Human lung samples (formalin-fixed, paraffin-embedded (FFPE) blocks from both normal and IPF patients) were purchased from OriGene (OriGene Technologies Inc). The detailed patient information and Sample/Label IDs are listed in Supplementary Table 1. Lung sections (5 μm thick) were prepared from the FFPE blocks using a microtome and used for immunohistochemistry (IHC) staining.

Animals and treatments

Rev-erbα global heterozygous (Rev-erbα Het) mice (male and female, 2–4 months old) were purchased from the Jackson Laboratory (Strain #: 018447), and adult C57BL/6 wild-type mice (WT, male and female, 2–4 months old) were bred in the vivarium at the University of Rochester Medical Center. Before treatment, mice were transferred to the inhalation core facility and allowed a 1-week acclimatization period. The mice were housed on a 12/12 h light-dark cycle with ad libitum access to water and food. WT C57BL/6 mice were used for bleomycin dosing. Mice received bleomycin (1.5 units/kg; Cat#1076308, Sigma) delivered by oropharyngeal inhalation after anesthetization with isoflurane and were monitored for 14 days. During the 14 days post-dosing, the Rev-erbα agonist SR9009 (Cat#554276, Sigma) was injected intraperitoneally (i.p.) between 11 a.m. and 12 p.m. every day at a dosage of 100 mg/kg body weight. SR9009 was prepared in 15% Kolliphor EL (Cat#: C5135, Sigma) as described previously 27. The mice were sacrificed 14 days post-dosing, and the lungs were snap-frozen for further analysis. For IAV dosing, mice were anesthetized with isoflurane, and a total of 10³ plaque-forming units (PFU)/mouse of influenza A/Puerto Rico/8/1934 H1N1 virus (PR8) was given intranasally 59. A total of 3 female mice were housed individually, with ad libitum food and water, in special cages with a running wheel connected to an automatic counter. Mice were acclimated to the wheel-running cages for 1 week, during which the counters were adjusted. Locomotor activity was monitored from day 0 to day 14, and the mice were sacrificed on day 15 post-infection (p.i.). During the 15 days of infection, body weights were monitored daily. A separate group of mice was placed in cages fitted with a running wheel connected to an automatic counter; each cage housed one mouse with access to regular water and food, and locomotor activity was recorded during the 14 days of infection. At sacrifice, mice were anesthetized with pentobarbital (100 mg/kg) via i.p. injection. Lung function parameters (resistance, compliance, and elastance) were measured during sacrifice via the Flexivent FX1 Legacy system (Scireq) following the manufacturer's instructions; each measurement was performed 3 times per animal. Mouse lungs were also inflated with 1% low-melting agarose and fixed with 10% formalin overnight for histological staining. Bleomycin and IAV dosing was regularly conducted between 11 a.m. and 1 p.m., and mice were sacrificed at a similar time of day. Time-of-day bleomycin dosing (7 a.m.
and 7 p.m.) was performed in C57BL/6 female mice, and mice were sacrificed at the same time of day as their respective dosing. Body weight was monitored over the 14 days. Another group of mice was dosed with an equal volume of PBS as the control.

Viral titer in lungs and IgG2a and IgA in serum measurement

Mice were sacrificed at 2 and 4 days p.i., and lungs were collected and snap-frozen to prepare lung homogenates for viral titer measurement according to our previous publication 60. Mice were sacrificed at 15 days p.i., and whole blood was collected through the posterior vena cava. Serum was separated from whole blood by centrifugation (12,000 × g, 10 min at room temperature). The IAV-specific IgG2a and IgA antibodies in serum were determined by ELISA via serial dilution as described in our previous publication 60.

Cell culture and treatment

Primary human lung fibroblasts (Cat# CC-2512) and small airway epithelial cells (SAEC) (Cat# CC-2547) were purchased from Lonza. Lung fibroblasts were cultured in FGM-2 Fibroblast Growth Medium (Cat# CC-3132), and SAEC were cultured in SABM Small Airway Epithelial Cell Growth Basal Medium (Cat# CC-3119). Cells were seeded into 6-well plates and treated with 2 ng/ml TGF-β with or without 20 μM GSK4112 (Cat#: 3663; TOCRIS) or SR8278 (Cat#: S9576, Sigma) for 2 days. Human fetal lung fibroblast (HFL-1, Cat#: CCL-153) and human bronchial epithelial (BEAS-2B, Cat#: CRL-9609) cells were purchased from the American Type Culture Collection (ATCC) and stored in liquid nitrogen. The cells were thawed and cultured in DMEM/F12K medium (Cat#: 113-20033; Thermo Fisher Scientific) with 1% Penicillin-Streptomycin-Glutamine (Cat#: 103-78016; Thermo Fisher Scientific) and 10% FBS (Cat#: 10082147; Thermo Fisher Scientific) for HFL-1, and with 1% Penicillin-Streptomycin-Glutamine and 5% FBS for BEAS-2B. Cells were maintained under 5% CO₂ and 95% humidity. Before treatment, HFL-1 cells were starved in serum-free DMEM/F12K medium for 12 h, and BEAS-2B cells were serum-deprived in DMEM/F12K medium with 1% FBS. The cells were then treated with 2 ng/ml TGF-β with or without 20 μM GSK4112 (Cat#: 3663; TOCRIS) or SR8278 (Cat#: S9576, Sigma) for 2 days. After treatment, the cells were either lysed for protein/RNA quantification or fixed with 4% paraformaldehyde for immunofluorescence staining.

RNA isolation and qRT-PCR

Frozen lungs or cells were homogenized and lysed in QIAzol reagent (Cat#: 79306, Qiagen) and mixed with chloroform for 10 s. The mixtures were centrifuged at 12,000 × g for 30 min at 4 °C, and the aqueous phase was transferred into a new tube. An equal volume of isopropanol was added to each sample and mixed thoroughly, followed by incubation at −20 °C for 2 h. The mixtures were centrifuged at 15,000 × g for 15 min at 4 °C, and the supernatants were removed. A total of 1 ml of 75% EtOH was added to wash the RNA pellet, followed by centrifugation at 15,000 × g for 30 min at 4 °C. The EtOH was removed, and the RNA precipitates were resuspended in 50 μl of RNase-free water. The concentration and quality of all samples were quantified with a NanoDrop spectrophotometer (ND-1000, NanoDrop Technologies). Equal amounts of RNA were used for reverse transcription via the RT2 First Strand Kit (Cat# 330401, Qiagen) and real-time PCR quantification with SYBR green expression master mix (Cat# 330509, Qiagen).
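Fold changes from these reactions were computed with the 2^−ΔΔCt method described below, with GAPDH as the endogenous control. As a minimal sketch of that standard calculation (the Ct values shown are invented placeholders for illustration only):

```python
# Sketch of the 2^(-ddCt) relative-quantification calculation; all Ct
# values below are invented placeholders, not measured data.

def fold_change_ddct(ct_target_trt: float, ct_gapdh_trt: float,
                     ct_target_ctl: float, ct_gapdh_ctl: float) -> float:
    dct_trt = ct_target_trt - ct_gapdh_trt  # normalize treated sample to GAPDH
    dct_ctl = ct_target_ctl - ct_gapdh_ctl  # normalize control sample to GAPDH
    return 2 ** -(dct_trt - dct_ctl)

# A target amplifying ~2 cycles earlier (relative to GAPDH) after treatment
# corresponds to a ~4-fold upregulation:
print(fold_change_ddct(24.0, 18.0, 26.0, 18.0))  # 4.0
```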
The primers used in this study were purchased from BioRad: COL1A1 (Mouse, qMmuCED0044222), FN1 (Mouse, qMmuCEP0054113), TJP1 (Mouse, qMmuCID0005277), TGFB1 (Mouse, qMmuCED0044726), NR1D1 (Mouse, qMmuCID0014284), ARNTL (Mouse, qMmuCED0049609), CLOCK (Mouse, qMmuCED0046959), GAPDH (Mouse, qMmuCEP0039581), COL1A1 (Human, qHsaCEP0050510), ACTA2 (Human, qHsaCIP0028813), FN1 (Human, qHsaCEP0050873), LOX (Human, qHsaCED0043469), LOXL1 (Human, qHsaCED0044245), LOXL2 (Human, qHsaCED0044522), and GAPDH (Human, qHsaCEP0041396). The qRT-PCR thermal cycling conditions were 10 min at 95 °C, followed by 40 cycles of 95 °C for 15 s and 60 °C for 1 min; fluorescence intensity was read at the end of each 60 °C incubation. A melting curve was run as a quality check of cDNA amplification. A BioRad CFX96 qPCR machine was used, and fold changes were calculated by the 2^−ΔΔCt method with GAPDH as the endogenous control.

NanoString measurement

RNA samples isolated from lungs were used for NanoString measurement, with a total of 100 ng RNA for each group. Our customized codeset (circadian genes and fibrotic markers) was used for the bleomycin treatment groups, and the nCounter Fibrosis Panel was used for the bleomycin + SR9009-treated group as well as the IAV-infected groups. All RNA samples were mixed with the master mix and incubated at 65 °C for 16 h for RNA hybridization. All samples were loaded into a NanoString running cartridge, and profiling was performed by the nCounter SPRINT Profiler (NanoString Technologies, Inc.). All gene expression values were normalized with nSolver 4.0 software, and normalized counts were used for data representation. The RLF files generated by the profiler were uploaded to ROSALIND for advanced analysis to generate volcano plots and pathway directed enrichment scoring. The significantly dysregulated genes were filtered and uploaded to an online tool to generate the Venn diagrams and the lists of overlapping dysregulated genes.

Protein isolation and western blot

Snap-frozen lung lobes or cells were lysed in RIPA buffer with a protease inhibitor cocktail, and protein concentrations were measured by Pierce BCA Assay Kit (Cat#: 23227, Thermo Fisher Scientific). A total of 20 µg of protein per sample was used for analysis. The protein samples were separated by 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to a nitrocellulose membrane (Cat# 1620112, BioRad). The membranes were blocked with EveryBlot Blocking Buffer (Cat#: 12010020, BioRad) for 20 min and incubated with primary antibody diluted in blocking buffer overnight at 4 °C. Primary antibodies used included anti-REV-ERBα (1:1000, 13418, Cell Signaling), anti-COL4A1 (1:1000, ab227616, Abcam), anti-LOXL2 (1:1000, ab197779, Abcam), anti-E-Cadherin (1:1000, 3195, Cell Signaling), anti-Fibronectin (1:1000, Abcam), anti-Vimentin (1:1000, ab92547, Abcam), anti-COL1A2 (1:1000, NBP2-92790, Novus Biologicals), anti-COL1A1 (1:1000, NBP1-30054, Novus Biologicals), anti-activated LOX (1:1000, NB100-2527, Novus Biologicals; for Fig. 7 only), and anti-LOX (1:1000, ab174316, Abcam). The primary antibody was then removed, and the membranes were washed with Tris-buffered saline containing 0.1% Tween 20 (TBS-T) 3 times, 10 min each. The membranes were then incubated with secondary antibody (goat anti-rabbit, 1:5000, #1706515, BioRad) for 1 h at room temperature and washed with TBS-T 4 times, 15 min each.
The membranes were developed with Pierce ECL Western Blotting Substrate (Cat#: 32106, Thermo Scientific), and the signals were detected with a Bio-Rad ChemiDoc MP imaging system. Densitometry was calculated using ImageLab software (BioRad), and fold changes were calculated relative to the PBS groups, with normalization to β-actin (1:2500, ab20272, Abcam) for mouse samples and GAPDH (1:1000, ab9482, Abcam) for human samples.

H&E staining

Lung sections (5 µm) were prepared with a microtome, then deparaffinized in xylene and rehydrated through 100%, 95%, and 70% EtOH. The sections were stained with hematoxylin for 1 min, rinsed with water for 5 min, and blued with 0.1% ammonia water for 10 s. The slides were washed with running water for 10 min, incubated in 95% EtOH for 1 min, stained with eosin for 1 min, and quickly washed with 95% EtOH. The slides were then sequentially dehydrated in 95% EtOH, 100% EtOH, and xylene. All slides were mounted with Permount; ×4 and ×20 images were taken with a light microscope (Nikon ECLIPSE Ci), and the total injured area was measured via ImageJ.

Immunohistochemistry (IHC) staining

Lung sections (5 µm) were deparaffinized and rehydrated via xylene and 100%, 95%, and 70% EtOH, then washed with water for 5 min. Slides were incubated in antigen retrieval solution (Cat#: S1699, Dako, Denmark) at 95 °C for 30 min. The slides were then cooled to room temperature and washed with TBS + 0.25% Triton X-100 (wash buffer) 2 times, 5 min each. Sections were blocked with 10% normal goat serum and incubated with anti-COL1A1 (1:100, NBP1-30054, Novus Biologicals), anti-LOX (1:100, NB100-2527, Novus Biologicals), anti-COL4A1 (1:200, ab227616, Abcam), and anti-REV-ERBα (1:100, NBP1-84931, Novus Biologicals) at 4 °C overnight. Slides were washed with wash buffer 2 times, 10 min each, then incubated with 0.3% hydrogen peroxide for 15 min. Slides were washed with TBS 2 times, 10 min each, and with wash buffer 3 times, 5 min each. Slides were incubated with secondary antibody (1:1000, ab7090, Abcam) at room temperature for 1 h, washed with wash buffer 2 times, 10 min each, and developed with DAB Quanto Chromogen and Substrate (Cat#: TA-125-QHDX, Thermo Fisher Scientific) for 10 min. Excess DAB substrate was washed away with water, and sections were counterstained with hematoxylin. The sections were then dehydrated and mounted for light microscopy (×20 and ×40 with a Nikon ECLIPSE Ci, and ×4 with a BioTek Cytation 5). All antibodies were prepared in 10% normal goat serum. ImageJ was used to calculate the percentage of positively stained area via color deconvolution.

Immunofluorescence (IF) staining

Cells were seeded in chamber slides, treated with TGFβ and Rev-erbα agonist/antagonist for 2 days, and then fixed with 4% paraformaldehyde for 15 min. The slides were washed with TBS for 10 min, 2 times, stored at 4 °C, and then blocked with 10% normal goat serum. Cells were incubated with anti-COL1A1 (1:100, NBP1-30054, Novus Biologicals) and anti-αSMA (1:200, A2547-2ML, Sigma Life Sciences) at 4 °C overnight and washed with TBS 3 times, 10 min each. The chamber slides were then incubated with goat anti-rabbit IgG (H + L) secondary antibody Alexa Fluor 488 (1:1000, Catalog # A-11008, Thermo Fisher) and goat anti-mouse IgG (H + L) cross-adsorbed secondary antibody Alexa Fluor 488 (1:1000, Catalog # A-11001, Thermo Fisher) for 1 h at room temperature.
Cells were then washed with TBS 3 times, 15 min each, and the slides were mounted with Diamond Antifade Mountant with DAPI (Cat#: S36964, Fisher Scientific). Slides were imaged by fluorescence microscopy, and ImageJ was used to quantify fluorescence intensity with the following equation: corrected fluorescence = integrated density (IntDen) − (area of cells × mean fluorescence of background). The intensity was normalized to cell number, with cell number counted based on DAPI staining via the Cell Counter in ImageJ.

Statistical analysis

Significant differences were calculated by one-way ANOVA or Student's t test via GraphPad Prism software (v9.0), and p < 0.05 was considered significant. All data are presented as mean ± SEM.

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Data availability

All data supporting the results of this manuscript are available in the article or Supplementary Information, and raw data are available upon request. Source data are provided with this paper.
Abnormal sleep patterns, like those of night-shift workers, disrupt the body's natural biological clock and have been linked to lung health issues. A new study by University of Rochester Medical Center (URMC) researchers shows how a biological clock molecule, called REV-ERBα, contributes to lung scarring, uncovering new potential drugs and drug targets along the way.

Pulmonary fibrosis, or lung scarring, is a serious condition in which connective tissue builds up in the lungs, making them thick and rigid, and causing difficulty breathing. While medications can ease the symptoms of pulmonary fibrosis, none can repair the lung damage caused by this sometimes-fatal disease.

The URMC study, published in Nature Communications, confirms a previously discovered link between the body's biological clock (or circadian rhythm) and lung diseases and uncovers a new mechanism underlying this link. Study authors show that a lack of the circadian rhythm protein REV-ERBα contributes to lung scarring in mice by increasing production of collagen, a major component of connective tissue, and lysyl oxidase, which stabilizes connective tissue and makes it more rigid.

The team, which was led by Irfan Rahman, Ph.D., Dean's Professor of Environmental Medicine at URMC, found low levels of REV-ERBα and large amounts of collagen and lysyl oxidase in lung samples from patients with pulmonary fibrosis. Inducing lung injury in mice had a similar outcome: reduced REV-ERBα levels and increased levels of collagen, lysyl oxidase, and other markers of fibrosis.

As a circadian rhythm protein, REV-ERBα expression normally fluctuates throughout the day, peaking at noon and dipping to its lowest levels at midnight. When the team induced lung injury at night, mice had larger increases in lysyl oxidase and collagen proteins, more extensive lung damage, and lower survival rates compared to mice injured in the morning.

Rahman said this could be relevant to night-shift workers who are exposed to lung irritants at work. "Night-shift work usually occurs during the midnight timeframe when the expression of REV-ERBα is lowest," he said. "Our study suggests there is less protection against lung fibrosis generated from REV-ERBα activation at night."

When the team induced lung injury in genetically modified mice that express low levels of REV-ERBα, the mice had worse outcomes that appeared to be mediated by increased collagen and lysyl oxidase. After 15 days of infection with influenza A, these mice had greater upregulation of collagen and lysyl oxidase gene expression, worse flu infections, and worse lung injury compared with mice that expressed normal levels of REV-ERBα.

Activating REV-ERBα with a drug given daily for 14 days after lung injury in mice that express normal levels of REV-ERBα slightly reduced collagen and lysyl oxidase gene expression and improved lung health in the mice, though not significantly. When tested in cell cultures, the REV-ERBα-activating drugs had an anti-fibrotic effect.

"Currently, there are only two drugs approved by the FDA to treat fibrosis, and they only delay the process, they don't cure the disease," said study author Qixin Wang, Ph.D., a postdoctoral fellow working in Rahman's lab. "REV-ERBα-activating drugs could serve as potential therapeutics to help prevent fibrosis and stop the disease process." But, he adds, a better REV-ERBα drug or a more direct way to deliver the drug is needed. In their studies, mice treated with the REV-ERBα-activating drug SR9009 lost more weight and had lower survival than untreated mice.
While further research is needed, Rahman and Wang believe their findings open new possibilities for developing treatments for a wide range of fibrotic diseases, especially those with a circadian component, such as liver fibrosis driven by nighttime alcohol consumption.
Rising greenhouse gases pose continued threat to Arctic ozone layer

There is a race going on high in the atmosphere above the Arctic, and the ozone layer that protects Earth from damaging ultraviolet (UV) radiation will lose the race if greenhouse gas emissions aren't reduced quickly enough. A new study from an international team of scientists, including University of Maryland Professor Ross Salawitch, shows that extremely low winter temperatures high in the atmosphere over the Arctic are becoming more frequent and more extreme because of climate patterns associated with global warming. The study also shows that those extreme low temperatures are causing reactions among chemicals humans pumped into the air decades ago, leading to greater ozone losses. The new findings call into question the commonly held assumption that ozone loss would grind to a halt in just a few decades following the 2010 global ban on the production of ozone-depleting chemicals called chlorofluorocarbons (CFCs) and halons. The study, which was jointly conducted by UMD, the Alfred Wegener Institute's Helmholtz Centre for Polar and Marine Research, and the Finnish Meteorological Institute, was published in the journal Nature Communications on June 23, 2021. "We're in a kind of race between the slow and steady decline in CFCs, which take 50 to 100 years to go away, and climate change, which is causing polar vortex temperature extremes to become colder at a rapid pace," said Ross Salawitch, who is a professor in the UMD Department of Atmospheric and Oceanic Science, the Department of Chemistry and Biochemistry, and the Earth System Science Interdisciplinary Center. "The increasingly cold temperatures create conditions that promote ozone depletion by CFCs. So, even though these compounds are slowly going away, Arctic ozone depletion is on the rise as the climate changes." New data from the study showed the lowest Arctic polar vortex temperatures and the highest ozone losses on record in 2020, beating the previous records set in 2011. The polar vortex is a relatively self-contained, low-pressure system that forms in the stratosphere, at an altitude of about 12 to 50 kilometers (7.5 to 31 miles), over the Arctic every autumn and stays for varying durations throughout the winter to spring. The pattern of warm and cold winter temperatures in the polar vortex is very irregular, so not every winter is extremely cold. But the trend toward more frequent and more extreme low temperatures in the polar vortex concerns the researchers, because those conditions promote the formation of clouds, and clouds in turn promote ozone loss in the polar stratosphere. Most of the chlorine and a significant amount of the bromine in the stratosphere come from the breakdown of CFCs, halons and other ozone-depleting substances. Normally within the Arctic polar vortex the chlorine is non-reactive, but clouds provide the right conditions for the chlorine to change form and react with bromine and sunlight to destroy ozone. Despite the drastic reduction of industrial production of CFCs and halons since the Montreal Protocol in 1987 and the global ban that followed in 2010, these long-lasting compounds are still abundant in the atmosphere. According to the World Meteorological Organization, atmospheric chlorine and bromine produced by humans are not expected to fall below 50% of their highest levels until the end of this century.
To determine what this situation means for the future, the researchers projected ozone loss out to the year 2100 based on the long-term temperature trend in the polar vortex and the expected decline in chlorine and bromine compounds. They based their predictions on the output from 53 top climate models used by the Intergovernmental Panel on Climate Change. "All but one of the climate models we looked at show that exceptionally cold winters in the polar vortex will get colder over time," Salawitch said. "And the more greenhouse gas emissions there are, the steeper the trend, which means greater ozone depletion." Combining these projections with analyses of meteorological data from the past 56 years, the researchers confirmed that the Arctic is already experiencing a significant trend toward lower stratospheric temperatures and associated increases in ozone losses. What's more, their observations reveal that these trends are occurring at a rate consistent with the fastest climate models. "We have been saying that a train is coming for a number of years now," said Salawitch, pointing to research papers he published in 2004 and 2006 that showed extreme winters in the Arctic were becoming colder. "We've now seen the train whizzing by with record ozone loss in 2011 and now in 2020. So, this paper is really a wake-up call that something is happening in the atmosphere that's really important for ozone, and it looks like greenhouse gases are driving it." Salawitch and his colleagues do not yet fully understand how increasing greenhouse gas emissions and the associated changes to global climate are causing the extreme cold winters in the stratospheric layer of the polar vortex. But some of the underlying mechanisms are understood. Global warming occurs in part because greenhouse gases trap heat closer to Earth's surface, which allows cooling of the upper layers in the stratosphere, where the ozone layer is located. Warming at the surface causes changes to prevailing wind patterns, and the researchers suggest that these changes also produce lower temperatures in the polar vortex. The researchers also note that recent years have seen a rapid increase in methane, a more powerful greenhouse gas than carbon dioxide, in the lower atmosphere. As this gas travels to the stratosphere, it increases humidity, which also leads to conditions that promote ozone-destroying chemical reactions in the Arctic. Because ozone filters much of the sun's potentially harmful UV radiation, a depleted ozone layer over the Arctic can result in more UV radiation reaching the surface of the Earth over Europe, North America and Asia when the polar vortex dips south. But there is hope for avoiding future ozone depletion, according to the researchers. Their study shows that substantial reductions in greenhouse gas emissions over the coming decades could lead to a steady decline in conditions that favor large ozone loss in the Arctic stratosphere. The research paper, "Climate change favours large seasonal loss of Arctic ozone," by Peter von der Gathen, Rigel Kivi, Ingo Wohltmann, Ross J. Salawitch and Markus Rex, was published in the journal Nature Communications on June 23, 2021 (DOI: 10.1038/s41467-021-24089-6).
Abstract

Chemical loss of Arctic ozone due to anthropogenic halogens is driven by temperature, with more loss occurring during cold winters favourable for formation of polar stratospheric clouds (PSCs). We show that a positive, statistically significant rise in the local maxima of PSC formation potential (PFP LM ) for cold winters is apparent in meteorological data collected over the past half century. Output from numerous General Circulation Models (GCMs) also exhibits positive trends in PFP LM over 1950 to 2100, with highest values occurring at end of century, for simulations driven by a large rise in the radiative forcing of climate from greenhouse gases (GHGs). We combine projections of stratospheric halogen loading and humidity with GCM-based forecasts of temperature to suggest that conditions favourable for large, seasonal loss of Arctic column O 3 could persist or even worsen until the end of this century, if future abundances of GHGs continue to steeply rise.

Introduction

Variations in ozone within the Arctic polar vortex during winter and spring (hereafter: winter) are driven by anthropogenic chemical loss and dynamical resupply 1 , 2 . Chemical loss and dynamical resupply of stratospheric ozone show large inter-annual variability, driven by meteorology. Colder, more isolated vortices are associated with smaller values of total column ozone 3 , 4 , less resupply and larger chemical loss of ozone (due to low temperatures). Colder vortices are caused by a weaker Brewer-Dobson Circulation, reduced planetary-scale wave activity and lower eddy heat flux in the extratropical lower stratosphere 5 . The coldest Arctic winters experience the smallest values of total column ozone, due in part to a larger amount of chemical loss 3 , 4 . Chemical loss of O 3 in the Arctic stratosphere occurs following the activation of chlorine on or within cold sulphate aerosols 6 , 7 and supercooled ternary (H 2 SO 4 -HNO 3 -H 2 O) solution droplets 8 (STS), and on the surfaces of nitric acid trihydrate (NAT) particles 9 or water ice when air is exceptionally cold. When temperatures fall during Arctic winter, STS and NAT particles 10 , 11 , 12 are the first types of PSCs to form. The timescale for chemical processing of chlorine reservoir gases on STS droplets transitions from weeks to days near the temperature at which NAT becomes thermodynamically stable ( T NAT ) 7 , which is governed by the vapour pressure of nitric acid (HNO 3 ) and water (H 2 O) 9 .
The volume of air cold enough to allow for the existence of polar stratospheric clouds (PSCs) in the Arctic polar vortex, averaged over an ozone loss season ( V PSC ), exhibits a compact, near-linear relation with chemical loss of column ozone 13 , 14 , 15 , 16 , 17 during recent winters. Rex et al. 13 postulated that the maximum value of V PSC during Arctic winters had risen in a statistically significant manner between 1966 and 2003, and suggested this increase was caused by radiative and dynamical effects of rising levels of greenhouse gases (GHGs). New record values of V PSC were set in the winters of 2005 (ref. 14 ), 2011 (ref. 3 ), 2016 (refs. 18 , 19 ), and 2020 (ref. 20 ). An early evaluation using a general circulation model (GCM) with coupled active chemistry (a chemistry climate model, or CCM) suggested decreases in planetary wave activity reaching the mid-latitude stratosphere due to increased westerly winds in the subtropics, driven by rising levels of GHGs, would lead to stronger, colder Arctic vortices 21 . More recently, a simulation using another CCM suggested that future cooling of the Arctic lower stratosphere during early winter would result from direct radiative cooling driven by GHGs and indirect effects related to declining Arctic sea ice and rising sea surface temperatures 22 . Simulations conducted using a third CCM showed modest cooling (~0.15 K decade −1 ) of the future Arctic stratosphere at 50 hPa also driven by GHGs, with high interannual variability that complicates the assessment of statistical significance 23 . Here we examine trends in the PSC formation potential (PFP), which represents the number of days a volume of air equal to the volume of the polar vortex was exposed to PSC conditions for each Arctic ozone loss season based on T NAT (similar to ref. 24 ). We show that positive, statistically significant trends in the local maxima (LM) of the PFP time series (PFP LM , the upper quartile of PFP relative to a trend line) over the past four decades are apparent in data from four meteorological centres. A central component of our analysis is the examination of output from GCMs that provide estimates of stratospheric conditions until the end of this century, with a focus on models that submitted output for the Shared Socioeconomic Pathways SSP5-8.5, SSP3-7.0, SSP2-4.5, and SSP1-2.6 runs of the Coupled Model Intercomparison Project Phase 6 (CMIP6) 25 . We combine GCM forecasts of PFP with projections of stratospheric halogen loading and stratospheric humidity to evaluate how the chemical loss of Arctic ozone may evolve, as a function of future levels of atmospheric GHGs and stratospheric H 2 O. We find that if the future abundance of GHGs continues to rise steeply as in either the SSP3-7.0 or SSP5-8.5 scenario, then conditions favourable for large, seasonal loss of column ozone could persist or even worsen until the end of this century, despite the decline in the abundance of anthropogenic halogens that is expected to occur due to compliance with the Montreal Protocol.

Results

Chemical loss of ozone

Figure 1a shows values of column ozone loss between 380 and 550 K potential temperature (ΔO 3 ) at the end of winter, based on ozonesonde measurements in the Arctic vortex, plotted as a function of PFP (see “Methods” for the detailed definition of PFP). Data values are shown for all of the cold winters that have occurred since the inception of regular ozonesonde launches.
The estimates of ΔO 3 are based either on Match events (situations where individual air masses are usually probed twice above different measurement stations) 13 , 14 , 17 , 26 or on the difference between a passive ozone tracer and the vortex mean, observed profile of ozone 20 . Figure 1a also shows computations of ΔO 3 found using the ATLAS Chemistry and Transport Model 27 for meteorological conditions of Arctic winters 2005, 2010, 2011, and 2020. This model includes a comprehensive treatment of stratospheric chemistry, constrained by the abundance of stratospheric chlorine and bromine from long-lived source gases (Fig. 2a ) for these four winters 28 plus a constant 5 parts per trillion (pptv) from very short-lived (VSL) bromocarbons 29 (see “Methods”). Fig. 1: Chemical loss of Arctic Ozone. a Chemical loss of column ozone (ΔO 3 ) in Dobson Units (DU; 1 DU = 2.687 × 10 16 molecules cm −2 ) inside the Arctic polar vortex determined by ozonesonde campaigns for various winters since 1993 versus PSC formation potential (PFP) computed from ERA5/ERA5.1 (closed symbols), calculated as the vertical integral of loss profiles between 380 and 550 K potential temperature, which corresponds to ~14 and ~24 km altitude. The error bars representing 1σ uncertainty for ozone loss are based upon considerations such as uncertainties in the calculated cooling rates and the potential impact of mixing across the vortex edge, as described in Harris et al. 17 ; the 1σ uncertainty for PFP is derived by assuming an error of ±1 K in the ERA5/ERA5.1 temperature field (see “Methods”). Computations of ΔO 3 are found using the global ATLAS Chemistry and Transport Model that includes a comprehensive treatment of stratospheric chemistry, for the halogen loading and meteorological conditions of winter 2005, 2010, 2011, and 2020 as well as halogen loading for 2060 and 2100 with meteorological conditions for 2020 (symbols with crosses). The ATLAS values of ΔO 3 are also based on integrals between the 380 and 550 K potential temperature 20 . b Same as panel a except ozone loss potential (OLP) is used for the abscissa. The variance in observed (data) and modelled (ATLAS) ΔO 3 explained by PFP and by OLP is reported as the square of the correlation coefficient in both panels. The solid line on both panels shows a linear, least-squares fit to the 15 ozonesonde data points, forced through the origin. Fig. 2: Polar Stratospheric EESC and H 2 O. a EESC (equivalent effective stratospheric chlorine) for the polar stratosphere computed using fractional release factors from Newman et al. 30 and values of the abundances of long-lived halogen source gases from Table 6 - 4 of the most recent WMO Ozone Assessment Report 28 (black line). Throughout, we use a slightly modified version of polar EESC, found by accounting for a 5 ppt contribution from very short-lived (VSL) bromocarbons 29 (red line; circles denote years of the ATLAS simulations shown in Fig. 1 ). The contributions to this modified polar EESC from stratospheric chlorine and bromine are shown by the violet and blue lines, respectively. b – d Polar stratospheric H 2 O (in several SSP scenarios) found accounting for: variations in atmospheric CH 4 ( b ); the temperature rise of the tropical tropopause layer (TTL) ( c ); both CH 4 and warming of the TTL ( d ) (see “Methods”). The circle denotes H 2 O = 4.6 ppm, used to compute PFP whenever time-invariant H 2 O is specified.
(Historical part: black lines; SSP1-2.6: green lines; SSP2-4.5: blue lines; SSP3-7.0: brown lines; SSP5-8.5: red lines.) Measured and modelled values of ΔO 3 display a compact, near-linear relation with PFP for 1993–2020 (data) and 2005–2020 (ATLAS) (Fig. 1a ). This behaviour occurs because over this time period, the abundance of stratospheric halogens, commonly represented by equivalent effective stratospheric chlorine (EESC) 30 (Fig. 2a ), varies by only ~11% between the value in early 1993 and the maximum in mid-2001. Modelled values of ΔO 3 lie either close to measured ΔO 3 (2011 and 2020) or just below the 1σ uncertainty (2005 and 2010), demonstrating that the primary control on interannual variations in ΔO 3 over the past 15 years has been the exposure of air to PSC temperatures. The near-linear relation between ΔO 3 and V PSC is robust for the contemporary Arctic stratosphere 16 , 17 , despite the fact that in early winter, a small volume of the Arctic vortex can exist below the temperature threshold for chlorine activation and affect a large portion of the vortex 31 . Figure 1a also contains values of ΔO 3 for years 2060 and 2100 computed using the ATLAS model, for projected stratospheric chlorine and bromine for both years, and meteorological conditions for 2020. Modelled ΔO 3 for 2060 and 2100 falls below the compact relation observed and simulated for the contemporary atmosphere due to the projected future decline in EESC (Fig. 2a ). Figure 1b shows measured and modelled values of ΔO 3 as a function of a term we shall refer to as ozone loss potential (OLP), defined as: $${\rm{OLP}}({\rm{yr}})=\frac{{{\rm{EESC}}({\rm{yr}})}^{1.2}}{{{\rm{EESC}}}_{{\rm{MAX}}}^{1.2}}\times {\rm{PFP}}({\rm{yr}})$$ (1) where EESC MAX (4.45 ppbv) is the maximum yearly value of EESC in the polar stratosphere. The variance, r 2 , in ΔO 3 explained by OLP is quite large, exhibiting values of r 2 of 0.89 and 0.96 for measured and modelled ΔO 3 , respectively (Fig. 1b ). Our OLP is defined in a manner nearly identical to the potential for activation of chlorine term of Tilmes et al. 32 , except for the use of 1.2 rather than 1 as the exponent of EESC in Eq. ( 1 ). Hassler et al. 33 conducted an analysis of ozone depletion and recovery at the South Pole assuming a linear relation between ozone loss rate and EESC, even though they state the actual relation may be more complicated. Harris et al. 17 examined model estimates of accumulated ozone losses at the 500 K potential temperature level in the Arctic stratosphere as a function of the abundance of activated chlorine, and reported a small positive non-linearity in this relationship. Here we use an exponent of 1.2 for EESC because this choice leads to the largest value of r 2 for the six ATLAS runs shown in Fig. 1b (see “Methods”). The linear, least-squares regression of the ozonesonde-based estimates of ΔO 3 versus OLP in Fig. 1b will be used below to relate estimates of the future evolution of OLP inferred from GCMs to the seasonal loss of Arctic ozone, which we denote ΔO 3 REG . We assess the uncertainty in ΔO 3 REG using lower and upper limits of 1 and 1.4 for the exponent in the expression for OLP (see “Methods”).

Observed PSC formation potential

Figure 3 shows time series of PFP found using data from four meteorological centres (see “Methods”). Our primary source of meteorological data is ERA5/ERA5.1/ERA5 BE (preliminary version) provided by the European Centre for Medium-Range Weather Forecasts (ECMWF) 34 .
We also use meteorological fields from the Climate Forecast System Reanalysis (CFSR/CFSv2) provided by the National Centers for Environmental Prediction of the U.S. National Oceanic and Atmospheric Administration 35 , 36 , the Modern-Era Retrospective analysis for Research and Applications (MERRA-2) product provided by the U.S. National Aeronautics and Space Administration Goddard Earth Observing System Model 37 , 38 , as well as the Japanese 55-year Reanalysis (JRA-55) provided by the Japanese Meteorological Agency (JMA) 39 . We calculate V PSC based on temperature and wind fields from these meteorological reanalyses to evaluate the consistency of our estimates of V PSC and to assess the robustness of inferred trends in PFP. Diagnostics for the existence of PSCs can vary substantially between reanalyses, such that conclusions based on the often marginal conditions for PSC condensation in the Northern Hemisphere could be affected by small differences among the reanalyses 40 . Fig. 3: PFP as a function of time. a – f Time series of PSC formation potential (PFP) for reanalysis data from: ERA5/ERA5.1 from 1980 to 2020 ( a ) and ERA5/ERA5.1 combined with the ERA5 back extension (BE) (preliminary version) from 1965 to 2020 ( b ); JRA-55 from 1980 to 2020 ( c ) and from 1965 to 2020 ( d ); MERRA-2 from 1981 to 2020 ( e ); CFSR/CFSv2 from 1980 to 2020 ( f ). The solid red circles indicate the coldest winters in the record selected using the ISA trend detection procedure (see “Methods”). A linear, least-squares fit (solid line) and 1σ uncertainty of the fit (dashed lines) to the solid red circles are shown in each panel, along with numerical values of the slopes ( S PFP−LM ), the 1σ uncertainties of these fits (Δ S PFP−LM ), as well as p -values for the quantity S PFP−LM /Δ S PFP−LM (last column, Table 1 ). Meteorological fields from ERA5 have recently been extended back to 1950 and data from JRA-55 are available from 1958 to 2020, whereas the other data sets are available from 1979 (or 1980) to 2020. Stratospheric data in the Arctic mainly rely on radiosonde soundings before 1979 and on satellite data thereafter, which could introduce potential bias (see “Methods”). We use ERA5 and JRA-55 only back to 1965 since this year marks the start of regular radiosonde coverage of the Arctic stratosphere. Finally, reanalyses transitioned from the use of space-borne data from SSU and TOVS to AMSU and ATOVS systems in the 1998 to 1999 timeframe 40 . We obtain similar results for trends in PFP LM (differences within respective uncertainties) when considering data obtained prior to and after this transition (see “Methods”). As noted in the Introduction, we had previously suggested a tendency for the highest values of V PSC to have risen over time. These analyses 13 , 14 were based upon the selection of maximum values of V PSC over successive 5-year time intervals, a trend detection procedure we term here the Maximum in the Interval Method (MIM). Since the publication of these papers, we have developed a more accurate and robust trend detection procedure, termed the Iterative Selection Approach (ISA), whose performance is documented by a series of Monte-Carlo (MC) simulations (see “Methods”). The slope of the LM of PFP ( S PFP−LM ) selected by ISA is strongly positive over 1980 to 2020 based upon analysis of data from all four meteorological centres, ranging from a high of 4.77 ± 0.48 d decade −1 (CFSR) to a low of 3.85 ± 0.40 d decade −1 (MERRA-2) (Fig. 3 ).
The mean and 1σ standard deviation of S PFP−LM over 1980 to 2020 from these four centres is 4.26 ± 0.45 d decade −1 . The values of S PFP−LM over the longer time period of 1965 to 2020 are 3.84 ± 0.34 d decade −1 and 3.50 ± 0.29 d decade −1 based on ERA5 and JRA-55, respectively, the only data sets that extend further back than 1979, the start of the modern satellite era. In other words, during particularly cold winters over the past half century, the Arctic polar vortex has tended to experience between 3.5 and 4.8 more days per decade of exposure to conditions cold enough to sustain PSCs and activate chlorine, an increase of about 40% compared to the values that occurred a half century ago. We have conducted MC simulations to assess the statistical significance of S PFP−LM and the 1σ uncertainty in S PFP−LM (Δ S PFP−LM ) found using the ISA selection procedure (see “Methods”). These simulations indicate statistical significance at better than the 2σ confidence level for this important metric of the trend in PFP LM , based upon p -values for S PFP−LM /Δ S PFP−LM from all four meteorological data centres that are <0.001 (see “Methods”, Table 1 ). Table 1 PFP LM trend results for the reanalyses and CMIP6 GCM output. Full size table PSC formation potential from GCMs In this section, we calculate PFP from the output of all 26 GCMs in CMIP6 that archived results for the SSP5-8.5 scenario 25 . The numerical value after the dash in the SSP designation represents the rise in radiative forcing of climate (RF; units W m −2 ) at end of the century relative to pre-industrial, due to GHGs including ozone-depleting substances as well as tropospheric aerosols 41 . Temperature fields within these GCMs often exhibit biases with respect to observed temperature that can approach 5 K, with most models being biased warm 42 . Stratospheric H 2 O tends to be biased low in many models 43 , which together with a high-temperature bias will lead to an underestimation of the accumulated exposure to PSCs in the Arctic. To compensate for the temperature biases, the temperature threshold for the existence of PSCs has been offset by a constant value specific to each model such that the overall magnitude of PFP LM in the GCM matches the observed magnitude of PFP LM over the modern satellite era. Furthermore, the computation of PFP uses profiles for H 2 O and HNO 3 for the contemporary stratosphere (see “Methods”). Values of PFP for the SSP5-8.5 run of 16 of the 20 GCMs that submitted results for all four SSPs highlighted in our study (SSP5-8.5, SSP3-7.0, SSP2-4.5, and SSP1-2.6) are shown in Fig. 4 . PFP for the remaining SSP5-8.5 GCM runs are shown either in Fig. 5 or in the Supplementary Information (SI). The suggestion that the coldest Arctic winters are getting colder is also apparent in GCM simulations without adjusting the PSC temperature threshold (see SI). We highlight results with adjusted thresholds to place all of the GCMs on a common scale for assessing PFP in the Arctic stratosphere. Fig. 4: PFP, 1950–2100, from CMIP6 GCMs for SSP5-8.5 scenario and time-invariant H 2 O. a – p Time series of PSC formation potential (PFP) from 16 CMIP6 GCMs (as indicated on top of each panel), based on archived output from the SSP5-8.5 scenario (2015–2100) combined with output from the historical scenario (1950–2014). The solid circles indicate the coldest winters in the record (local maxima) selected using the ISA trend detection procedure (see “Methods”). 
A linear, least-squares fit (solid line) and 1σ uncertainty (dashed lines) to the solid red circles are shown in each panel, along with numerical values of the slopes ( S PFP−LM ) and 1σ uncertainties of these fits. The blue line shows the best fit to PFP of the radiative forcing time series for each model run, and the grey line is a 21-year running mean (±10 years) to PFP from each GCM. The temperature threshold for the formation of PSCs has been offset by a constant number, specific to each model, so that the overall magnitude of PFP LM in the GCM matches the observed magnitude of PFP LM , over the modern satellite era (see “Methods” and Table 1 ). Full size image Fig. 5: PFP, 1950–2100, from CMIP6 GCMs for various SSP scenarios and time-invariant H 2 O. a – p Time series of PSC formation potential (PFP) from 4 CMIP6 GCMs (as indicated on top of each panel), based on archived output from various historical (1950–2014) and SSP scenarios (2015–2100) for radiative forcing of climate. See Fig. 4 for more details. Full size image Values of S PFP−LM found for each of the 26 GCM simulations with archived results for SSP5-8.5 are all positive, ranging from a high of 3.66 ± 0.16 d decade −1 (IITM-ESM) to a low of 0.62 ± 0.09 d decade −1 (BCC-CSM2-MR) (Table 1 ). The majority of these slopes lie between about 1.0 and 2.5 d decade −1 ; statistical significance at better than the 2σ level is exhibited for S PFP−LM in 16 and for S PFP−LM /Δ S PFP−LM in 24 of these 26 runs. The similarity of the long-term running mean of PFP and regression of PFP versus RF in each of the panels (Fig. 5 ) suggests the Arctic stratosphere is cooling in a manner that follows the rise in RF of climate. This provides further support that rising GHGs are the primary factor driving increasing PFP. Nearly all of the GCMs exhibit maximum values of PFP towards the end of the century. The progressive tendency towards colder Arctic winters is also exhibited in GCMs that participated in the earlier CMIP5 project 44 . For CMIP5, archived output from 27 GCM simulations that ran the Representative Concentration Pathway (RCP) 8.5 (ref. 45 ) is considered. The frequency distribution function of the ISA-based value of S PFP−LM over 1950–2100, for 26 CMIP6 GCMs and 27 CMIP5 GCMs, is shown in Fig. 6 . The mean and standard deviation of S PFP−LM are 1.71 ± 0.7 d decade −1 and 1.48 ± 1.0 d decade −1 for the CMIP6 and CMIP5 GCMs, respectively (Fig. 6b ). The CMIP5 GCMs exhibit a greater tendency towards both low and high values of S PFP−LM compared to the CMIP6 GCMs. Most importantly, values of S PFP−LM over 1950–2100 are positive for 52 of the 53 CMIP5/6 GCM simulations forced by an 8.5 W m −2 rise in RF by end of the century. These GCM runs provide numerical support for the contention that rising levels of GHGs will lead to cooler conditions in the polar stratosphere that are conducive to the chemical loss of ozone by anthropogenic halogens. The GCM simulations in Figs. 4 and 5 also show a tendency for PFP associated with the warmer Arctic winters (open circles at bottom of the data envelope) to rise slightly over time, a projected trend not yet apparent in observations 46 due perhaps to the generally small values of PFP for the warmest winters over the observational period as well as the lower limit of zero for PFP. Fig. 6: Modelled and measured values of S PFP−LM . 
a Mean and 1σ standard deviation of the slope of local maxima of PFP ( S PFP−LM ) selected using the ISA trend detection procedure, for 1980–2020, based upon analysis of output from 26 CMIP6 GCM simulations (blue), 27 CMIP5 GCM runs (grey) (see “Methods”) as well as reanalysis data from four meteorological centres (red) b , Mean and 1σ standard deviation of S PFP−LM selected using the ISA trend detection procedure, for 1950–2100, based upon analysis of output from 26 CMIP6 GCM simulations (blue points with error bars) and 27 CMIP5 GCM runs (grey points with error bars) as well as the frequency distribution of S PFP−LM from the individual CMIP6 simulations (blue vertical bars) and CMIP5 runs (grey vertical bars). Full size image The mean and standard deviation of the empirical value of S PFP−LM over 1980 to 2020 from the four reanalysis datasets is compared to GCM-based values (for the same time period) in Fig. 6a . The rationale for this comparison is the models have undergone a similar rise in the RF of climate over these four decades as the atmosphere. The observationally based trend lies near the upper 1σ value of the GCMs. Over this short period internally generated climate variability may play a substantial role and the one realisation that developed in earth’s climate system may have coincidentally followed a path that led to S PFP−LM at the upper range of the GCM values. On the other hand, tropospheric climate exhibited a shift in the early 2000s that weakened the intensity of planetary wave activity propagating into the stratosphere 47 , which could be responsible for a portion of the larger observed value of S PFP−LM compared to results from GCMs. Shifts in patterns of sea surface temperature in the North Pacific have also been implicated as a causal factor in decreased planetary wave activity and the strengthening of the Arctic vortex 48 . The potential association of these drivers of Arctic, stratospheric temperature with climate change is an area of active research 47 . We interpret the results in Fig. 6a as follows: there is a strong similarity in the four observationally based estimates of S PFP−LM , and this value is consistent with a subset of the GCMs (i.e., those with the largest values of S PFP−LM ). It is difficult to attach further meaning to this comparison; because of the potential role of internal variability in planetary wave activity, we caution against asserting that GCMs with the best match to the empirically based value S PFP−LM will provide a more realistic forecast of the future. As further support for the notion that larger values of PFP towards the end of the century are driven by rising levels of GHGs, we analyse results for the 20 GCM simulations that have provided an output for SSP5-8.5, SSP3-7.0, SSP2-4.5, and SSP1-2.6 (ref. 41 ). A comparison of PFP for four of these GCMs is shown in Fig. 5 . Results for the other 16 GCMs exhibit similar behaviour, as shown below using the multi-model ensemble mean projections. Nearly without exception, the ISA-based value of S PFP−LM over 1950–2100 for a particular GCM is largest for the SSP5-8.5 simulation and lowest (in many cases, near zero) for the SSP1-2.6 run. This finding provides further evidence that stratospheric cooling caused by the human release of GHGs is the primary driver of rising LM values of PFP within these GCMs. The projections of PFP shown in Fig. 5 have been found assuming profiles for H 2 O and HNO 3 appropriate for the contemporary atmosphere. 
However, future levels of stratospheric H 2 O will likely rise due to increasing tropospheric CH 4 as well as the warming of the tropical tropopause 49 , 50 . Figure 2 shows estimates of polar, stratospheric H 2 O for changes driven by the oxidation of CH 4 (Fig. 2b ), warming of the tropical tropopause (Fig. 2c ), and the combination of both effects (Fig. 2d ). Our CH 4 -based estimate is derived from the relation between CH 4 and H 2 O in the contemporary Arctic stratosphere 51 combined with historical and future projections of CH 4 from the SSP-database, and the thermodynamic-based estimate results from an analysis of CMIP6 GCM output 43 (see “Methods”). Accounting for the future rise in stratospheric water for the computation of T NAT has a profound effect on PFP as well as S PFP−LM . Figure 7 shows results from one of the four GCMs highlighted in Fig. 5 . The first column of Fig. 7 shows the effect on PFP and S PFP−LM of projected future increases in stratospheric H 2 O due to CH 4 , the second shows the effect due to thermodynamics, and the third column shows the full effect of rising stratospheric H 2 O. The sensitivity of future PFP to the projected change in H 2 O is large within the EC-Earth3 GCM, as shown by comparing the first three columns of Fig. 7 (variable H 2 O) to the first column of Fig. 5 (time-invariant H 2 O), particularly for SSP5-8.5 and SSP3-7.0. The trend in S PFP−LM found using archived output from the EC-Earth3 GCM for SSP5-8.5 increases from 2.27 ± 0.13 d decade −1 for time-invariant H 2 O (Fig. 5a) to 3.93 ± 0.13 d decade −1 when both of the factors driving the potential future rise in stratospheric H 2 O are considered (Fig. 7i ), because a more humid future stratosphere is more conducive to the chlorine activation and the formation of PSCs. Conversely, as expected, the impact of future stratospheric H 2 O on PFP and S PFP−LM is small for SSP2-4.5 and SSP1-2.6. The other GCMs that have archived results for all four SSPs exhibit similar behaviour (see SI). Fig. 7: PSC formation potential (PFP) and Ozone Loss Potential (OLP), 1950–2100, from EC-Earth3 model for variable H 2 O, various SSP scenarios. a – l Same as Fig. 5 for the EC-Earth3 GCM, for variable H 2 O accounting for: tropical tropopause warming ( a - d ), changes in atmospheric CH 4 ( e – h ), and both effects ( i – l ). m – p OLP from the EC-Earth3 GCM, for variable H 2 O due to both tropopause warming and CH 4 oxidation. The grey line shows a 21-year running mean (±10 years) to OLP from each simulation, conducted for various SSPs. Figures showing results for the other GCMs that appear in Fig. 5 are included in the SI. Full size image Projections of conditions conducive to Arctic ozone loss As shown in Fig. 1b , measured and modelled values of the chemical loss of column ozone in the Arctic stratosphere are well described by OLP. For the EC-Earth3 GCM constrained by GHGs abundances for SSP5-8.5 and SSP3-7.0, the largest values of OLP occur towards the latter half of this century, particularly when the full effect of rising stratospheric H 2 O is considered (Fig. 7m, n ). This projection suggests stratospheric cooling combined with moister conditions, driven by future rises in the atmospheric abundance of anthropogenic GHGs, could prolong the conditions that lead to significant chemical loss of column O 3 within the Arctic vortex until late in this century. 
Conversely, if GHGs follow either the SSP2-4.5 or SSP1-2.6 scenario, the value of OLP is projected to decline from close to present time until the end of the century (Fig. 7o, p ). We now turn to the multi-model ensemble mean values of PFP, rather than the LM of PFP from a single GCM. Figure 8 shows the time series of ensemble-mean values of ΔO 3 REG and OLP from the 20 CMIP6 GCMs that have archived output for GHG abundances from SSP5-8.5, SSP3-7.0, SSP2-4.5, and SSP1-2.6, assuming constant stratospheric H 2 O. Commonly, year 1980 is used as a benchmark for studies of polar ozone recovery 23 . For fixed H 2 O, the multi-model mean value of OLP remains well above the 1980 level until the end of the century for SSP5-8.5 and SSP3-7.0, approaches the 1980 level for SSP2-4.5, and reaches the 1980 level at end of the century for SSP1-2.6. For SSP5-8.5 and SSP3-7.0, the seasonal loss of ozone (i.e., ΔO 3 REG ) in the range of 70–100 DU persists until the end of this century at an amount comparable to contemporary values. Fig. 8: Ensemble model mean regressed column ozone loss and Ozone Loss Potential (OLP), time-invariant H 2 O. The value of OLP (right ordinate) and ΔO 3 REG computed from OLP (left ordinate) from the 20 CMIP6 GCMs (CanESM5, CESM2-WACCM, CNRM-CM6-1, CNRM-CM6-1-HR, CNRM-ESM2-1, EC-Earth3, EC-Earth3-Veg, FGOALS-g3, IITM-ESM, INM-CM4-8, INM-CM5-0, IPSL-CM6A-LR, MIROC6, MIROC-ES2L, MPI-ESM1-2-HR, MPI-ESM1-2-LR, MRI-ESM2-0, NorESM2-LM, NorESM2-MM, UKESM1-0-LL) that archived results for the SSP5-8.5 ( a ), SSP3-7.0 ( b ), SSP2-4.5 ( c ), and SSP1-2.6 ( d ) scenarios, computed assuming a constant volume mixing ratio for stratospheric H 2 O of 4.6 ppmv. The same temperature threshold offsets specified in Table 1 and Figs. 4 and 5 have been used. The grey solid line shows a 21-year running mean (±10 years) to the ensemble mean of ΔO 3 REG for each SSP, the grey shaded area represents a 21-year running mean of the range in ΔO 3 REG for exponents of 1 (upper boundary) and 1.4 (lower boundary) of the expression for OLP, and the grey dashed horizontal lines denoted the 1980 value of ΔO 3 REG . The right-hand ordinate shows the scale of the multi-model mean values of OLP, which are the initial quantities computed from the GCM output. Note, this right-hand ordinate does not correspond to the grey shaded area, since an exponent different from 1.2 was used. Full size image Stratospheric humidity is expected to rise due to an increased source from the oxidation of CH 4 and a warmer tropical tropopause, particularly for climate scenarios with high RF of climate towards the end of the century, which will lead to further increases in ΔO 3 REG and OLP. Figure 9 shows ensemble mean values of ΔO 3 REG and OLP for the GCMs also represented in Fig. 8 , allowing for variations in stratospheric H 2 O in addition to temperature. When the effect of rising H 2 O on the future occurrence of PSCs is considered, ΔO 3 REG and OLP at end of the century are higher than contemporary values of these quantities for the SSP5-8.5 and SSP3-7.0 simulations. This analysis suggests that despite a projected decline in stratospheric halogen loading, the potential for significant chemical loss of Arctic column ozone could not only persist until the end of the century but might actually exceed contemporary loss if the atmospheric abundance of GHGs follows either SSP5-8.5 or SSP3-7.0 (Fig. 9a, b ). The multi-model mean values of ΔO 3 REG and OLP at end of the century for SSP2-4.5 (Fig. 9c ) also lie above the 1980 levels. 
Both quantities drop below the 1980 level for SSP1-2.6 (Fig. 9d ), because the suppressed abundance of CH 4 towards the end of the century within this scenario leads to a decline in stratospheric H 2 O relative to today (Fig. 2d ). Fig. 9: Ensemble mean regressed column ozone loss and Ozone Loss Potential (OLP), variable H 2 O. Same as Fig. 8 , except OLP from the archived GCM output of each GCM has been computed using the time series for polar stratospheric H 2 O shown in Fig. 2d , which accounts for increasing stratospheric humidity due to both variable CH 4 and warming of the tropical tropopause. a SSP5-8.5, b SSP3-7.0, c SSP2-4.5, and d SSP1-2.6 scenarios. The multi-model ensemble values of ΔO 3 REG and OLP shown in Figs. 8 and 9 capture the general tendency of projections of stratospheric temperature within 20 GCMs, the result of an enormous computational effort by the climate modelling community. On the other hand, this averaging procedure masks the strong year-to-year variability in Arctic conditions conducive for major ozone depletion, as represented in Fig. 7m–p (for EC-Earth3) and in the SI for other GCMs, and as noted by an analysis of a seven-member ensemble from the United Kingdom Chemistry and Aerosols (UM-UKCA) CCM 23 .

Discussion

There are a number of factors that affect the accuracy of lower stratospheric temperature within GCMs, such as the maximum altitude and vertical resolution 52 as well as model representation of planetary wave activity that transports energy from equatorial to poleward regions 53 . One important marker of the usefulness of a GCM to simulate stratospheric dynamics is whether the model generates an oscillation of the direction of the zonal wind in the tropical lower stratosphere with a period of about 28 months, known as the quasi-biennial oscillation (QBO) 53 . Our examination of the tropical zonal wind from the models suggests CMIP6 GCMs tend to provide a better representation of the QBO than was evident in CMIP5 GCMs (see “Methods”), consistent with the more formal analysis of Richter et al. 54 . We see little difference in our projections of column ozone loss for the Arctic stratosphere (ΔO 3 REG ) (Figs. 8 and 9 ) when the CMIP6 GCM output is examined in groups of models that provide a reasonable representation of the QBO versus other models (see “Methods”). Richter et al. 54 note that while the number of models with an internally generated QBO has increased substantially from CMIP5 to CMIP6, the multi-model mean amplitude for atmospheric levels below a pressure of 20 hPa is still much lower than observed. Given the importance of the QBO in stratospheric dynamics, substantial effort is being directed towards improving the representation of this process within GCMs 55 . Ideally, GCMs would include interactive chemistry, as there are numerous feedbacks and interactions between the photochemical processes that regulate stratospheric ozone and the dynamical and radiative drivers of PFP. Four of the 20 CMIP6 GCMs considered above have fully interactive chemistry; the other 16 models use prescribed fields of ozone. The temporal evolution of OLP found using results from the four GCMs with interactive chemistry is about 20–25% lower at the end of the century than that found for the other 16 GCMs; nonetheless, ΔO 3 REG remains close to the contemporary value until the end of the century for the SSP3-7.0 and SSP5-8.5 simulations conducted using these interactive GCMs (see “Methods”).
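To make the OLP bookkeeping used throughout this study easy to follow, the short sketch below implements Eq. (1) and the through-origin least-squares regression that converts OLP into ΔO 3 REG . It is an illustration only: the numerical values are placeholders rather than the study's data, and the helper names are ours, not the authors' code.

```python
import numpy as np

def olp(eesc, pfp, eesc_max=4.45, exponent=1.2):
    """Ozone loss potential, Eq. (1): OLP = (EESC / EESC_max)^1.2 * PFP."""
    return (eesc / eesc_max) ** exponent * pfp

def fit_through_origin(x, y):
    """Slope of a least-squares line forced through the origin."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.sum(x * y) / np.sum(x * x)

# Placeholder values for illustration only (not the paper's data):
olp_obs = np.array([5.0, 10.0, 15.0, 20.0, 25.0])    # OLP, days
dO3_obs = np.array([22.0, 45.0, 64.0, 90.0, 110.0])  # column ozone loss, DU

slope = fit_through_origin(olp_obs, dO3_obs)     # DU per day of OLP
dO3_reg = slope * olp(eesc=2.0, pfp=30.0)        # projected seasonal loss, DU
print(f"slope = {slope:.2f} DU/day, dO3_reg = {dO3_reg:.1f} DU")
```

The exponent argument makes it straightforward to reproduce the sensitivity test described in the Results, in which lower and upper limits of 1 and 1.4 bracket the uncertainty in ΔO 3 REG .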
Finally, CCMs that have been used to assess the evolution of Arctic ozone have interactive chemistry with vertically resolved stratospheres and better spatial resolution than most of the CMIP6 GCMs 56 . These CCMs tend to exhibit a more realistic representation of planetary wave activity and are capable of representing the impact of the intensification of the Brewer-Dobson Circulation (BDC) and upper stratospheric cooling on ozone, two factors that result in the projection of future increases in Arctic column ozone during winter and spring 56 . However, the multi-model mean of CCMs used to project the future evolution of Arctic column ozone significantly underestimates prior observed ozone depletion, particularly during cold winters with extensive PSC activity 56 . Values of ΔO 3 REG shown in Figs. 8 and 9 represent the seasonal loss of column ozone that may occur for various GHG scenarios, rather than resulting column ozone. Future levels of Arctic column ozone during late winter and early spring are expected to increase due to factors such as intensification of the BDC, upper stratospheric cooling, as well as possible changes in planetary and gravity wave activity that exert a strong influence on the abundance of column ozone within the Arctic vortex during its formation in early winter and dynamically induced increases during winter 22 , 23 , 56 . Langematz et al. 22 project maximum V PSC to occur around 2060 with a subsequent decline due to enhanced dynamical warming of the Arctic vortex in February and March, based on simulations conducted with their CCM. Finally, future levels of N 2 O are expected to rise 41 , leading to higher levels of HNO 3 , which will create more favourable conditions for the formation and existence of PSCs 9 . Future total column ozone during spring will reflect a balance between the initial abundance, dynamical transport, and chemical loss that is driven by a large number of factors. The strong dependence of the ensemble mean value of OLP towards the end of the century on radiative forcing of climate suggests that large, seasonal loss of column ozone in the Arctic could persist for much longer than is commonly appreciated 56 . If stratospheric H 2 O rises as projected in Fig. 2d and GHGs follow a trajectory similar to either SSP5-8.5 or SSP3-7.0, chemical loss of Arctic ozone could even be larger by the end of the century than has occurred in the past. Consequently, anthropogenic climate change has the potential to partially counteract the positive effects of the Montreal Protocol in protecting the Arctic ozone layer.

Methods

Computation of PFP

The temperature at which nitric acid trihydrate (NAT) becomes thermodynamically stable, T NAT , is governed by the vapour pressure of nitric acid (HNO 3 ) and water (H 2 O) 9 . Here, we use a constant volume mixing ratio of stratospheric H 2 O equal to 4.6 parts per million (ppmv) at all pressure levels, consistent with observations reported by the U.S. National Aeronautics and Space Administration Microwave Limb Sounder instrument for the lower stratosphere of the Arctic 57 , together with a profile of HNO 3 based on satellite observations, to find T NAT . We compute T NAT using the saturation vapour pressures of H 2 O and HNO 3 over NAT measured by Hanson and Mauersberger 9 .
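As one concrete illustration, a T NAT calculation of this kind can be sketched as below. The Hanson and Mauersberger vapour-pressure coefficients are quoted as they are commonly implemented (pressures in Torr) and should be verified against ref. 9 before quantitative use; the HNO 3 abundance and the bisection bracket are illustrative assumptions rather than the study's inputs.

```python
import math

def log10_phno3_nat(t_k, ph2o_torr):
    """log10 of the HNO3 partial pressure (Torr) in equilibrium with NAT,
    after Hanson & Mauersberger (1988), as commonly implemented."""
    m = -2.7836 - 0.00088 * t_k
    b = 38.9855 - 11397.0 / t_k + 0.009179 * t_k
    return m * math.log10(ph2o_torr) + b

def t_nat(p_hpa, h2o_ppmv=4.6, hno3_ppbv=9.0):
    """Solve for T_NAT (K) at pressure p_hpa by bisection: the temperature
    at which the ambient HNO3 partial pressure equals the NAT value."""
    hpa_to_torr = 0.750062
    ph2o = h2o_ppmv * 1e-6 * p_hpa * hpa_to_torr    # partial pressures, Torr
    phno3 = hno3_ppbv * 1e-9 * p_hpa * hpa_to_torr
    t_lo, t_hi = 180.0, 220.0                       # assumed bracket, K
    for _ in range(60):
        t_mid = 0.5 * (t_lo + t_hi)
        # NAT is stable where ambient HNO3 exceeds its equilibrium pressure,
        # i.e. at temperatures below T_NAT.
        if math.log10(phno3) > log10_phno3_nat(t_mid, ph2o):
            t_lo = t_mid   # still below T_NAT; move the bracket upward
        else:
            t_hi = t_mid
    return 0.5 * (t_lo + t_hi)

print(round(t_nat(50.0), 1))  # roughly 195 K for these assumed 50 hPa values
```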
The specified mixing ratio profile of HNO 3 , which varies as a function of pressure, is based on measurements acquired in the Arctic during January 1979 by the Limb Infrared Monitor of the Stratosphere (LIMS) on board Nimbus 7 (ref. 58 ). The quantity V PSC represents the volume of air for which temperature is less than T NAT , evaluated between potential temperatures of 400 and 700 K. The formation of PSCs in the Arctic stratosphere also depends on factors such as cooling rate, the degree of super-saturation, the chemical composition of pre-existing nuclei, as well as the surface coating of condensed particles 10 , 11 , 12 , 59 . During cold Arctic winters, the profile of HNO 3 will be altered by the sedimentation of nitrate-bearing PSCs, termed denitrification 11 , 12 , 59 , 60 . Nonetheless, our approach captures the primary factor that drives the chemical loss of Arctic O 3 : that is, temperatures low enough to allow for the existence of PSCs. As described in the main paper and detailed below, we arrive at remarkably similar conclusions based upon consideration of the temperature at which chlorine is activated on aerosols 6 , 32 , rather than T NAT , because these two temperature thresholds are similar. Our analysis requires definition of the area and volume of the Arctic polar vortex, denoted A VORTEX and V VORTEX . The horizontal boundary of the vortex is based on the value of 36 s −1 for normalized potential vorticity (nPV), which is found from the horizontal wind and temperature fields and then scaled to account for the steep altitude dependence of PV. The value of 36 s −1 for normalized PV (nPV) is used to define the edge of the polar vortex, as described in section 3.3 of Rex et al. 26 . Other studies utilize the maximum gradient in PV to define the boundary of the polar vortex 61 . We use nPV = 36 s −1 to define the vortex boundary because on some days the gradient method introduces a level of complexity, due to the existence of multiple maximum gradients of nearly equal magnitude separated by a considerable distance, which requires human judgement. We have examined maps of nPV and temperature plotted for 1 February of the years 1960–2100, in increments of every 10 years, for all 26 CMIP6 GCMs that archived results for SSP5-8.5. These maps show that the nPV = 36 s −1 boundary for the Arctic vortex is not greatly affected by climate change until the end of the century; maps for the four CMIP6 GCMs highlighted in Fig. 5 of the paper are shown in Supplementary Fig. 1 . Since PV from four reanalyses that span many decades and model output from 53 GCM simulations that span more than a century and a half are examined, it is preferable to implement a method that requires no human intervention. The next step for the computation of PFP involves calculation of the area over which temperature is below the threshold for the existence of PSCs, A PSC , as well as A VORTEX . The area for which T < T NAT and the area enclosed by the nPV = 36 s −1 contour are found on various potential temperature ( θ ) surfaces for each time step of the analysis, which are evaluated to yield A PSC ( θ , t ) and A VORTEX ( θ , t ). 
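In code form, the per-level area bookkeeping just described might look like the following sketch. The grid handling is simplified to a regular latitude-longitude grid, and the vortex edge is approximated as a simple nPV > 36 s −1 mask; real calculations use each product's native grid, and the function names are ours.

```python
import numpy as np

R_EARTH = 6.371e6  # Earth radius, m

def cell_areas(lat_deg, lon_deg):
    """Approximate area (m^2) of each cell of a regular lat-lon grid."""
    dlat = np.deg2rad(abs(lat_deg[1] - lat_deg[0]))
    dlon = np.deg2rad(abs(lon_deg[1] - lon_deg[0]))
    per_lat = R_EARTH**2 * dlat * dlon * np.cos(np.deg2rad(lat_deg))
    return np.repeat(per_lat[:, None], lon_deg.size, axis=1)

def areas_on_theta_surface(temp, npv, t_nat, lat_deg, lon_deg, npv_edge=36.0):
    """A_PSC and A_VORTEX on one potential-temperature surface: the area
    where T < T_NAT, and the area enclosed by the nPV = 36 s^-1 edge
    (represented here as the area where nPV exceeds that value)."""
    a = cell_areas(lat_deg, lon_deg)
    a_psc = a[temp < t_nat].sum()
    a_vortex = a[npv > npv_edge].sum()
    return a_psc, a_vortex

# Illustrative use with synthetic fields on a 2.5-degree grid:
lat = np.arange(30.0, 90.0, 2.5)
lon = np.arange(0.0, 360.0, 2.5)
temp = 200.0 + 10.0 * np.random.rand(lat.size, lon.size)  # K
npv = 50.0 * np.random.rand(lat.size, lon.size)           # s^-1
print(areas_on_theta_surface(temp, npv, 195.5, lat, lon))
```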
Next, V PSC ( t ) and V VORTEX ( t ) are computed for each time step by evaluating: $${V}_{{\rm{PSC}}}\left(t\right)=\int_{400\ {\rm{K}}}^{700\ {\rm{K}}}c\left(\theta \right){A}_{{\rm{PSC}}}\left(\theta ,t\right)\,d\theta$$ (2) $${V}_{{\rm{VORTEX}}}\left(t\right)=\int_{400\ {\rm{K}}}^{700\ {\rm{K}}}c\left(\theta \right){A}_{{\rm{VORTEX}}}\left(\theta ,t\right)\,d\theta$$ (3) where c ( θ ) is a factor that converts intervals of potential temperature to geometric altitude (numerical values provided in a data repository). The next step in the calculation of PFP involves evaluating the integral of the ratio of V PSC ( t ) and V VORTEX ( t ) over the Arctic ozone loss season of each winter: $${\rm{PFP}}\left({\rm{yr}}\right)=\int_{1\ {\rm{Nov}}}^{30\ {\rm{Apr}}}\frac{{V}_{{\rm{PSC}}}\left(t\right)}{{V}_{{\rm{VORTEX}}}\left(t\right)}{dt}$$ (4) 1 November (prior year) and 30 April (specified year) are used as limits of integration because these dates encompass the time period of possible PSC activity among reanalysis and GCM-based temperature fields. A grid for θ from 400 to 700 K, in 5 K increments, is used for the computation of V PSC from each reanalysis data set, all of which are provided at 6 h time steps. At each time step the value of the ratio V PSC / V VORTEX is capped at unity, because in rare instances the volume for PSC temperatures is larger than the volume of the vortex defined using the 36 s −1 boundary. The GCM output is generally available on a daily basis, although some modelling groups have archived output every 6 h; details are provided in Supplementary Table 1 . The models that archive output every 6 h provide high vertical resolution fields on the native model grid, whereas the daily output is generally provided for only a limited number of pressure levels (i.e., 100, 50, and 10 hPa). In cases where the output for the SSP1-2.6, SSP2-4.5, and SSP3-7.0 scenarios is available only in low resolution (daily), we use low resolution for the SSP5-8.5 scenario from the corresponding GCM run, even if a higher resolution is available for SSP5-8.5. Values of V PSC ( t ) and V VORTEX ( t ) found using Eqs. ( 2 ) and ( 3 ), as well as the ratio of these terms, are shown in Supplementary Fig. 2 . The unusual behaviour of Arctic winter 2020, such as record high values for V PSC in March and V VORTEX in March and April, is readily apparent. V PSC ( t ) and V VORTEX ( t ) are used in Eq. ( 4 ) to determine PFP. All reanalyses and GCM fields are analysed on the native horizontal resolution of the product. Finally, the 1σ uncertainty of PFP shown in Fig. 1 is based on perturbation of the reanalysis temperature field by ±1 K; this magnitude of the offset is based on our analysis of the approximate 1σ standard deviation about the mean of stratospheric temperature from the four data centres, over the modern satellite era.
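A discretised sketch of Eqs. (2)-(4) is given below, assuming arrays of A_PSC and A_VORTEX have already been evaluated on the 5 K θ grid at each time step; the `c_of_theta` argument stands in for the conversion factors provided in the data repository, and the structure is illustrative rather than the authors' implementation.

```python
import numpy as np

THETA = np.arange(400.0, 705.0, 5.0)  # potential temperature grid, K

def column_volume(area_by_theta, c_of_theta):
    """Eqs. (2)-(3): trapezoidal integral of c(theta) * A(theta) over theta."""
    return np.trapz(c_of_theta * area_by_theta, THETA)

def pfp(a_psc, a_vortex, c_of_theta, dt_days=0.25):
    """Eq. (4): time integral of V_PSC / V_VORTEX over 1 Nov - 30 Apr.
    a_psc, a_vortex: arrays of shape (ntime, ntheta) covering the season;
    dt_days: time step in days (0.25 for 6-hourly reanalysis fields)."""
    total = 0.0
    for t in range(a_psc.shape[0]):
        v_psc = column_volume(a_psc[t], c_of_theta)
        v_vortex = column_volume(a_vortex[t], c_of_theta)
        if v_vortex > 0.0:
            total += min(v_psc / v_vortex, 1.0) * dt_days  # ratio capped at 1
    return total  # days of vortex-wide exposure to PSC conditions
```

The unity cap on the ratio mirrors the rare situations, noted above, in which the volume below the PSC threshold exceeds the vortex volume defined by the 36 s −1 boundary.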
In the main article, we estimate PFP using the JRA-55 and ERA5/ERA5.1/ERA5 BE (preliminary version) reanalysis products over 1965–2020, as well as 1980–2020. Meteorological data in the Arctic stratosphere acquired prior to 1979 mainly rely on radiosonde measurements, and 1965 marked the beginning of regular radiosonde coverage of the Arctic stratosphere. Luers and Eskridge 62 quantified the bias in temperature reported by ten of the most common radiosondes used throughout the world since 1960, for use in climate studies. The JRA-55 reanalysis makes use of the Radiosonde Observation Correction using Reanalysis (RAOBCORE) version 1.4 (ref. 63 ) bias correction procedure for radiosonde temperature until the end of 2006, and RAOBCORE version 1.5 (ref. 64 ) thereafter. As an important check on the temporal integrity of the reanalyses prior to 1979, in Supplementary Fig. 3 we show an update to the radiosonde temperature time series acquired at Sodankylä, Finland, for each winter since 1965 (ref. 65 ). This figure shows the time evolution of the percentage of observations of temperature < −77.9 °C at 50 hPa over the months of December (prior year) and January, February, and March (indicated year) from regular radiosonde launches from Sodankylä. Supplementary Fig. 3 supports our conclusion, shown in Fig. 3d of the main article, that conditions conducive for the existence of PSCs tended to be less common between 1965 and 1979, compared to the past few decades. In the main article, we discuss an application of a threshold for the existence temperature of PSCs applied to output from the CMIP5 and CMIP6 GCMs, such that the magnitude of the LM in PFP matches the observed magnitude over the modern satellite record. Details of the specific GCMs (refs. 66 – 107 ) are given in the Supplement. We compute PFP from these GCMs in a similar way to that applied to the computation of PFP from meteorological data, except for the application of a temperature offset to account for either warm or cold bias. The offsets for T NAT used for CMIP6 GCMs are given in Table 1 . These offsets have been determined based on the criterion that a trend line fit to the LM of PFP (PFP LM ) from the GCM over 1980–2020 using the ISA selection procedure (described below) should have a value in year 2000 (mid-point of the data record) that lies closest to the value of the fit to PFP LM data from ERA5/ERA5.1 in year 2000, among all possible 1 K incremental offsets to T NAT (including no offset) ranging from −9 to +9 K. For CMIP6, 19 of the 26 GCMs required a positive temperature offset for the PSC threshold (Table 1 ), indicating temperature conditions computed within these GCMs tend to be warmer than climatology, particularly for winters with cold, isolated Arctic vortices. Supplementary Fig. 4 shows comparisons of PFP for each CMIP6 GCM, with and without application of this threshold. Supplementary Table 2 is similar to Table 1 of the main article, except values and statistical analysis of S PFP−LM and Δ S PFP−LM are shown without application of any adjustment for the PSC temperature threshold. It is evident from Supplementary Table 2 and Supplementary Fig. 4 that the main thesis of our study, that the coldest winters in the Arctic stratosphere are getting colder due to rising GHGs, is apparent in GCM simulations with and without this adjustment. We have chosen to show estimates of S PFP−LM upon application of a threshold correction in the main article because this is a more realistic metric to examine within the models, particularly those GCMs that have very warm biases and thus exhibit unrealistically small values of PFP.

Trend detection procedures

We utilise several procedures to assess the trend in LM of PFP. First, we describe the ISA, which we apply to the 41-year time series from ERA5/ERA5.1. Following the computation of PFP for all Arctic winters, all of the data are fit using a linear least-squares regression line (Supplementary Fig. 5a ).
Trend detection procedures

We utilise several procedures to assess the trend in the LM of PFP. First, we describe the ISA, which we apply to the 41-year time series from ERA5/ERA5.1. Following the computation of PFP for all Arctic winters, all of the data are fit using a linear least-squares regression line (Supplementary Fig. 5a). We then compute the vertical distance (i.e., the difference in PFP) between the fit line and each data point. The point (in blue) with the largest distance below the line, the warmest winter relative to the current trend line, is omitted from the subsequent analysis. The remaining data points are then fit with another linear least-squares regression line (Supplementary Fig. 5b). The same procedure of finding and removing the point (blue) with the greatest distance below the fit line is repeated, leading to Supplementary Fig. 5c. The procedure is repeated until one-quarter of the points (termed the upper quartile relative to the trend line) remain; Supplementary Fig. 5d–f shows results of iterations 28, 29, and 30. The slope (S PFP−LM) of 4.50 d decade−1 and 1σ uncertainty (ΔS PFP−LM) of 0.19 d decade−1 given for the least-squares fit of data shown in Supplementary Fig. 5f are the same as those shown in Fig. 3a.

Next, we describe the Maximum in Interval Method (MIM) for assessing trends in PFP. Rex et al. 13 applied this selection procedure to their analysis of V PSC. They quantified the slope in the maximum values of V PSC that had occurred over successive 5-year long, independent time intervals. Their analysis considered 37 years of data spanning the winters of 1966–2003, from which eight values of V PSC were selected. Supplementary Fig. 6b shows the resulting selections of LM (red solid points), which yield values of S PFP−LM and ΔS PFP−LM of 4.24 ± 0.34 d decade−1 for the trend in LM from the ERA5/ERA5.1 time series of PFP. Clearly, the results are quite similar to the value of S PFP−LM found using the ISA procedure, even though some of the data points selected as LM by these two techniques differ (Supplementary Fig. 6a,b). Our development of the ISA selection procedure, rather than use of MIM, was also driven by our analysis of GCM output, which shows steadily rising values of PFP until the end of this century when models are driven by either the RCP 8.5 or SSP5-8.5 GHG scenario; for some models, the LM of PFP are separated by gaps of more than 5 years. The time interval of the MIM procedure could have been altered, but we instead offer the ISA procedure as a more robust method for the selection of the LM of PFP.

Supplementary Fig. 6c illustrates the value above sigma (VAS) selection procedure used by Rieder and Polvani 108 to address trends in V PSC. For VAS, one first computes the mean and standard deviation about the mean (σ) using all values of the PFP time series. Next, the slope in PFP is found using only those data points that lie more than 1σ above the mean. The VAS selection yields seven selected points, resulting in a slope of 3.06 ± 1.51 d decade−1 for a fit to these selected points. The selection of PFP from Arctic winter 2018 and the lack of selection of any data points prior to 1995 by the VAS procedure illustrate the problem with this method: by design, only the highest values are selected. To test the hypothesis that the LM of a quantity has risen over time, one should apply a time-varying statistical method for the selection of points. A static selection such as VAS is not an appropriate means to assess whether the coldest winters are getting colder because VAS tends to select only the highest values, rather than the LM, from the time series of PFP.
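The ISA procedure described above is easy to state algorithmically: refit, discard the point farthest below the line, and stop once the upper quartile remains. A minimal sketch (function and variable names are ours, not from the published code):

```python
import numpy as np

def isa_trend(years, pfp, keep_fraction=0.25):
    """Iterative selection algorithm (ISA) sketch: returns the slope of
    the fit to the upper quartile relative to the trend line, its 1-sigma
    uncertainty, and the retained points."""
    y = np.asarray(years, dtype=float)
    p = np.asarray(pfp, dtype=float)
    n_keep = max(4, int(round(keep_fraction * y.size)))
    while y.size > n_keep:
        slope, intercept = np.polyfit(y, p, 1)
        residuals = p - (slope * y + intercept)
        drop = int(np.argmin(residuals))   # point farthest below the line
        y, p = np.delete(y, drop), np.delete(p, drop)
    (slope, intercept), cov = np.polyfit(y, p, 1, cov=True)
    return slope, float(np.sqrt(cov[0, 0])), y, p
```

For a 41-point record the loop removes 31 points, leaving 10 local maxima; the returned slope is in d yr−1 and can be multiplied by 10 to express it in d decade−1.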
In order to further assess the selection of PFP LM by the ISA, MIM, and VAS trend detection procedures, a set of MC simulations was conducted for a dataset with an imposed, positive trend in PFP. For this set of MC simulations, one million time series of PFP were generated for a 41-year long record (matching the time period 1980–2020), each with PFP distributed between a lower bound of 0 and an upper bound that starts at 13.6 d (first winter) and rises with a slope of 4.59 d decade−1. Each PFP data point is uniformly, randomly distributed between the time-varying upper bound and the lower bound; these bounds were chosen to match the lower and upper limits of PFP from ERA5 in a statistical fashion. Supplementary Table 3 summarizes the results of this first set of MC simulations. This table gives the mean value of the slope \((\overline{S}_{{\rm{PFP}}-{\rm{LM}}})\) and 1σ uncertainty \((\overline{\Delta S}_{{\rm{PFP}}-{\rm{LM}}})\) of the fits to the maxima in PFP of these one million randomly generated time series. The table also provides the mean number \((\overline{k})\) and minimum number (k MIN) of LM points from which the slopes and uncertainties are computed. Use of the ISA approach yields a value for \(\overline{S}_{{\rm{PFP}}-{\rm{LM}}}\) of 4.50 d decade−1 upon selection of the upper quartile of LM points relative to the trend line. The fact that this value of \(\overline{S}_{{\rm{PFP}}-{\rm{LM}}}\) lies within 2% of the slope of the design value of the upper bound attests to the robust accuracy of the ISA approach. The MIM selection procedure with 5-year intervals (the last interval covers 6 years) results in the selection of eight points from which S PFP−LM is computed, for each of the million cases. The MIM approach results in a value for \(\overline{S}_{{\rm{PFP}}-{\rm{LM}}}\) of 3.95 d decade−1, which is 14% lower than the upper bound of the experimental design. Numerous values of S PFP−LM from the MIM ensemble are greater than the upper bound design value of 4.59 d decade−1. Nonetheless, on average, the MIM approach tends to underestimate the true value of the prescribed upper bound of the experimental design, due to gaps in the true LM of PFP that sometimes exceed five years. Finally, for the VAS approach, the number of selected points can often be low, which is reflected in the value of \(\overline{k}\) given for the VAS entries in Supplementary Table 3. Therefore, we have imposed criteria that VAS must select either a minimum of three, five, or seven points from each of the million artificial time series. For a final test of VAS, we have imposed a requirement that ten points (that is, the ten largest values of PFP) must be used for the computation of S PFP−LM for each time series. Values of \(\overline{S}_{{\rm{PFP}}-{\rm{LM}}}\) returned by VAS range from 2.08 to 2.30 d decade−1, a factor of two less than the upper bound of the experimental design, because, as noted above, the VAS procedure selects the highest values rather than the LM. As such, the ISA selection procedure provides a more accurate representation of the design of the underlying model than the MIM approach, and a much more accurate representation than that provided by the VAS selection procedure.
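This first MC experiment can be reproduced in miniature with the isa_trend sketch above; a reduced ensemble is used here in place of the paper's one million series:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1980, 2021)                      # 41 winters
upper = 13.6 + 4.59 * (years - years[0]) / 10.0    # rising upper bound (d)

slopes = []
for _ in range(10_000):                            # 10**6 in the paper
    pfp = rng.uniform(0.0, upper)                  # uniform between bounds
    slope, *_ = isa_trend(years, pfp)
    slopes.append(slope * 10.0)                    # convert to d per decade

print(f"mean ISA slope: {np.mean(slopes):.2f} d per decade (design: 4.59)")
```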
Statistical significance

The fitting uncertainty (ΔS PFP−LM) in the regression lines for PFP LM is not a true measure of the significance of the trend (S PFP−LM), because ΔS PFP−LM does not consider the selection process for obtaining the LM of PFP. Therefore, we assess the statistical significance of S PFP−LM and ΔS PFP−LM using another set of MC simulations. In these MC simulations, we work with actual data for PFP from either a reanalysis or GCM to ensure that the basis set of our randomly generated time series is identical to the PFP time series. The time series for PFP shown in Fig. 3a consists of 41 data points, which could be arranged in more than 3 × 10^49 possible combinations. We use a random number generator to place these 41 PFP data points into 10 million combinations. The ISA selection algorithm is applied to each of the 10 million combinations of PFP, resulting in a selection of the upper quartile (that is, 10 and usually 38 points for the reanalyses and GCMs, respectively) relative to the trend line, following the same algorithm used to select the PFP LM shown in the main article. The corresponding slope (S PFP−LM) and uncertainty (ΔS PFP−LM) are found for each of these combinations. The p-values given in Table 1 for S PFP−LM are equal to the probability that the slope of these random fits exceeds the slope determined from the data. In other words, 18% of the randomly generated combinations of PFP for the ERA5/ERA5.1 basis set (over the 1980–2020 time period) yield a value for S PFP−LM larger than 4.50 d decade−1. However, for the vast majority of the time series that yield a value of S PFP−LM larger than 4.50 d decade−1, the value of ΔS PFP−LM associated with the fit is larger than the ±0.19 d decade−1 uncertainty found from the ERA5/ERA5.1 time series. High slopes with large uncertainty are usually dominated either by several low values of PFP LM at the start of the time series of the selected points or by a couple of high values of PFP LM towards the end of the time series. As explained below, very few of the randomly generated time series yield a high value of S PFP−LM in combination with a low value of ΔS PFP−LM. We therefore examine the quantity S PFP−LM / ΔS PFP−LM as a measure of the statistical significance of both the temporal rise in PFP LM as well as the uncertainty in this rise. Of the randomly generated time series, 99.992% yield a value of S PFP−LM / ΔS PFP−LM that is smaller than the actual value of 23.6 (4.50 d decade−1 divided by 0.19 d decade−1). Consequently, a p-value of 8 × 10^−5 is associated with the entry for S PFP−LM / ΔS PFP−LM based upon ERA5/ERA5.1 data in Table 1, and we state, in the main article, that the value of S PFP−LM and the associated uncertainty are statistically significant at better than the 2σ confidence level. While the probability density functions of S PFP−LM and S PFP−LM / ΔS PFP−LM are not strictly Gaussian, the fall-offs of the tails of both functions are Gaussian-like (i.e., kurtosis close to 3; more specifically, the kurtosis for S PFP−LM is 2.1 and for S PFP−LM / ΔS PFP−LM is 3.4). Therefore, we are comfortable assigning better than 2σ confidence to S PFP−LM / ΔS PFP−LM since 8 × 10^−5 is so much less than 0.05, the 2σ confidence marker for a strictly Gaussian distribution. We have similarly estimated the statistical likelihood of achieving the reported values of S PFP−LM and S PFP−LM / ΔS PFP−LM from the 150-year time series of PFP from each CMIP6 GCM simulation constrained by SSP5-8.5, again using 10 million of the possible combinations of PFP from each basis set. The vast majority of the resulting p-values indicate statistical significance at close to or better than the 2σ level of confidence for both GCM-based values of S PFP−LM as well as S PFP−LM / ΔS PFP−LM (Table 1).
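The significance assessment is, in effect, a permutation test on the ordering of the observed PFP values. A sketch built on the isa_trend function above, with far fewer shuffles than the 10 million used in the study:

```python
import numpy as np

def permutation_pvalues(years, pfp, n_perm=100_000, seed=0):
    """Fraction of random orderings of the observed PFP values whose ISA
    slope (and slope/uncertainty ratio) exceeds the actual values."""
    rng = np.random.default_rng(seed)
    s_obs, ds_obs, *_ = isa_trend(years, pfp)
    n_s = n_ratio = 0
    for _ in range(n_perm):
        s, ds, *_ = isa_trend(years, rng.permutation(pfp))
        n_s += s > s_obs
        n_ratio += (s / ds) > (s_obs / ds_obs)
    return n_s / n_perm, n_ratio / n_perm
```

For the ERA5/ERA5.1 basis set, the two returned fractions correspond to the p-values of roughly 0.18 and 8 × 10^−5 quoted above.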
Vortex boundary

The vortex boundary used throughout our study is based on the value of 36 s−1 for nPV. This definition of the vortex boundary is commonly used in other studies of Arctic ozone, because nPV = 36 s−1 tends to be closely associated with the maximum horizontal gradient of potential vorticity 20, 26, 109. To check whether other definitions of the vortex boundary would materially alter our results, Supplementary Fig. 7 shows values of S PFP−LM found by the ISA algorithm applied to data from ERA5/ERA5.1 combined with ERA5 BE (preliminary version) from 1965 to 2020 for four alternate definitions of the vortex boundary, along with the resulting trends and p-values for the quantity S PFP−LM / ΔS PFP−LM. For each alternate vortex boundary definition, the resulting trends in S PFP−LM are positive and highly statistically significant. The numerical values for PFP do vary based on how the boundary is specified and differ from those shown in Fig. 3b of the main paper, due largely to the use of the volume of the Arctic vortex in the denominator of the definition of PFP (Eq. 4).

SSU & TOVS versus AMSU & ATOVS

In the paper, we state that similar results are obtained for trends in PFP (differences within respective uncertainties) when considering temperature from the SSU and TOVS space-borne systems versus the AMSU and ATOVS systems. The transition occurred in the years 1998–1999 (ref. 40). Supplementary Fig. 8 shows that similar results for trends in PFP LM (differences within respective uncertainties) are found when considering data obtained only prior to and only after this transition.

Aerosol reactivity potential

The main article states: we arrive at remarkably similar conclusions based upon consideration of the temperature at which chlorine is activated on aerosols 6, 32, rather than T NAT, because these two temperature thresholds are so similar. The term aerosol reactivity potential (ARP) is similar to PFP, except in Eq. (1) the quantity T NAT is replaced by T ACL, which represents the temperature at which chlorine is activated. Values of T ACL are computed as a function of H2O and sulphate surface area density at 210 K using Eq. (1) and information in the caption of Fig. 5 of Drdla and Müller 6. We use potential temperature as the vertical coordinate and the values of coefficients given in Table 1 to find T ACL. The entire analysis is then repeated (i.e., analogues of A PSC and V PSC, termed A ACL and V ACL, are computed as in Eqs. (1) and (2)), resulting in ARP being computed via Eq. (4) with V ACL rather than V PSC. Supplementary Fig. 9 shows measured and modelled ΔO3 as a function of ARP (panel a) and OLP found using ARP rather than PFP (panel b). Supplementary Fig. 10 shows trends in ARP from the four reanalysis data centres used in Fig. 3. The numerical values for the slope of the LM of ARP (S ARP−LM) differ by only a small amount (typically 10%) compared to those given for S PFP−LM in the main article. Finally, Supplementary Fig. 11 shows the time series of ARP and the local maxima in ARP selected using ISA, for the four GCMs highlighted in Fig. 5. The results shown in Supplementary Figs. 9–11 are quite similar to those shown in Figs. 1, 3 and 5 of the main article because T NAT is so similar to T ACL. In the actual Arctic stratosphere, denitrification (the removal of HNO3 by the physical sedimentation of PSCs) will prolong ozone loss 60 and alter T NAT due to suppression of gas-phase HNO3 (ref. 57).
However, the volume of air for which chlorine is activated by heterogeneous chemistry is governed most strongly by temperature. The close visual relation between Figs. 1, 3 and 5 and Supplementary Figs. 9, 10, and 11 supports the validity of the definition of OLP used in the main paper, which does not explicitly represent denitrification for the computation of T NAT.

Stratospheric H2O

Figure 2 contains our projections of stratospheric H2O accounting for contributions from the oxidation of CH4 (Fig. 2b), warming of the tropical tropopause (Fig. 2c), and the sum of both forcings (Fig. 2d). The effect of oxidation of CH4 on stratospheric H2O is based upon analysis of satellite observations of CH4 obtained by the HALOE instrument in the Arctic polar vortex, as shown in Figure 12 of Müller et al. 51 for April 1993. In the Arctic stratosphere, between about 450 and 600 K potential temperature, the HALOE measurement of CH4 exhibits a near-constant (with respect to altitude) value of ∼0.5 ppmv. The age of air in the Arctic lower stratosphere (i.e., the mean transit time from the tropical tropopause to the polar lower stratosphere) tends to be about 6 years 30. Hence, the appropriate comparison for surface conditions is the global mean abundance of CH4 in January 1987, which was 1.639 ppmv. Consequently, we infer that about 70% of the available CH4 (at the time this air parcel entered the stratosphere) has been converted to H2O, based on the simple calculation fraction = (1.639 ppmv − 0.5 ppmv)/(1.639 ppmv) ≈ 0.70. The time series for H2O shown in Fig. 2b is found from:

$$\Delta {\rm{H}}_2{\rm{O}}({\rm{yr}})=2\times 0.7\times {\rm{CH}}_4^{\rm{SURFACE}}({\rm{yr}}-6)$$ (5)

$${\rm{H}}_2{\rm{O}}({\rm{yr}})=\Delta {\rm{H}}_2{\rm{O}}({\rm{yr}})+2.306\,{\rm{ppmv}}$$ (6)

where the leading 2 in Eq. (5) accounts for the production of two H2O molecules upon loss of every CH4 molecule, the factor of 0.7 and the 6-year lag have been explained just above, and the constant value of 2.306 ppmv is used to force polar stratospheric H2O to equal 4.6 ppmv in year 1990. The historical and future surface CH4 time series that underlie Fig. 2b have been obtained from the various SSP scenarios 41. Since the numerical value of the 0.7 term in Eq. (5) depends on stratospheric OH (mainly), stratospheric Cl (second order), and the strength of the BDC, this conversion factor could change over time. Our approach is simplistic, yet captures the primary first-order effect of changing CH4 on polar stratospheric H2O. The satellite-based data record for H2O that affords coverage of the polar regions starts in 1984, but trends are difficult to discern due to offsets between retrievals from various instruments that are commonly larger than the expected increase in polar H2O since 1984 (ref. 110).
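Eqs. (5) and (6) amount to a two-line function; the surface CH4 time series itself would be read from the SSP scenario files (not bundled with this sketch):

```python
def polar_h2o_ppmv(year, ch4_surface_ppmv):
    """Sketch of Eqs. (5)-(6): polar stratospheric H2O from surface CH4.

    ch4_surface_ppmv : mapping year -> global mean surface CH4 (ppmv).
    The 6-year lag is the Arctic age of air, 0.7 the fraction of CH4
    already oxidized, and each lost CH4 produces two H2O molecules.
    """
    delta_h2o = 2.0 * 0.7 * ch4_surface_ppmv[year - 6]   # Eq. (5)
    return delta_h2o + 2.306                             # Eq. (6)
```

As a consistency check, H2O(1990) = 1.4 × 1.639 + 2.306 ≈ 4.6 ppmv, matching the 1990 anchor value quoted above.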
The projections for the effect of warming of the tropical tropopause shown in Fig. 2c are based on the analysis of output from CMIP6 GCMs shown in Figure 15 of Keeble et al. 43. They document results from ten CMIP6 GCMs, for the four SSP scenarios shown in Fig. 2c, plus a few additional SSPs. We have computed a multi-model mean from the time series for nine of the ten GCMs, neglecting results from the UKESM1-0-LL GCM, because the results from this GCM seem to be an outlier (large future rise in stratospheric H2O) compared to results from the other nine GCMs. We then apply a time-invariant, constant offset to this time series such that stratospheric H2O equals 4.60 ppmv in 1990. A few of the GCMs did not archive output for all four of the SSP scenarios used in our paper; in this case, we simply averaged output from all available GCMs. These ten CMIP6 GCMs tend, on average, to underestimate observed H2O in the tropical lower stratosphere 110 by nearly 1 ppmv from 1984 to the present, as shown in the upper panel of Figure 12 of Keeble et al. 43. The abundance of H2O in the tropical lower stratosphere is governed by thermodynamics, whereas the abundance of H2O in the polar stratosphere is driven by this process as well as the oxidation of CH4. The forecast of rising polar stratospheric H2O shown in Fig. 2c is consistent with a recent theoretical analysis of the future evolution of the height and temperature of the tropical tropopause associated with global warming 50.

ATLAS chemical transport model

Simulations are performed with the ATLAS global Lagrangian Chemistry and Transport Model (CTM) 27, 109. Model runs are driven by meteorological data from the ERA5 reanalysis 34. Descent rates are calculated directly from the heating rates provided by the ERA5 reanalysis. Of the two different options provided by ECMWF, we use the total (all sky) heating rates and not the clear sky heating rates. The vertical range of the model domain is 350–1900 K and the horizontal resolution is 150 km. The run for winter 2020 starts on 1 September 2019 and ends on 1 May 2020, with the first 30 days consisting of model spin-up. Additional runs with a similar setup for the Arctic winters of 2005, 2010, and 2011 are performed. Model values of O3, H2O, HCl, N2O, HNO3 and CO are initialized from the measurements obtained by the MLS instrument for the particular year, and ClONO2 is initialized from a climatology provided by the ACE-FTS instrument. Initialization of CH4, NOx and Bry are as described in Wohltmann et al. 109. Reaction rates and absorption cross sections are from the 2015 NASA Chemical Kinetics and Photochemical Data for Use in Atmospheric Studies compendium. A common deficiency of CTMs is a pronounced discrepancy between measured and modelled HCl mixing ratios in the Antarctic polar vortex, as described in section 6.1 of Wohltmann et al. 109. Therefore, a temperature offset of −3 K was used for the calculation of the Henry constant of HCl, which reduces this discrepancy. Two additional ATLAS runs were started with the meteorological data of 2019/2020 and with scaling factors for chlorine and bromine relative to 2020, intended to simulate conditions for 2060 and 2100, respectively (Fig. 2a). The scaling factors for chlorine were 0.667 and 0.455 for 2060 and 2100, respectively, and the scaling factors for bromine were 0.778 and 0.694 for these two years. These scaling factors are based on the contributions of chlorine and bromine to polar EESC, found as described in the caption of Fig. 2.

The main article states: we use an exponent of 1.2 for EESC because this choice leads to the largest value of r² for the six ATLAS runs shown in Fig. 1b. Supplementary Fig. 12 illustrates the value of r² found as a function of the exponent η in the expression:

$$\frac{{\rm{EESC}}({\rm{yr}})^{\eta}}{{\rm{EESC}}_{\rm{MAX}}^{\eta}}\times {\rm{PFP}}({\rm{yr}})$$ (7)

The ATLAS runs for winters 2005, 2010, 2011, 2020, 2060, and 2100 exhibit a well-defined maximum in r² at η = 1.2, due to the large variation of EESC over these years. Conversely, the ozonesonde determinations of ΔO3 cannot be used to constrain η because EESC varies by only ~15% from 1993 to 2020. The ozonesonde data are nonetheless quite valuable for showing the near-linear dependence of ΔO3 on PFP (Fig. 1a). Values of r² as a function of η, for the expression EESC^η × ARP, are also shown in Supplementary Fig. 12. The simulation of ΔO3 by the ATLAS model also exhibits a maximum near η of 1.2 when T ACL is used rather than T NAT, reinforcing the statement in the main article: remarkably similar conclusions are found based upon consideration of the temperature at which chlorine is activated on aerosols 6, 32, rather than T NAT.
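The scan over η that underlies Supplementary Fig. 12 is a one-parameter optimization; a sketch with illustrative array names (per-winter EESC, PFP and modelled ΔO3 aligned on the same index):

```python
import numpy as np

def best_eta(eesc, pfp, delta_o3, etas=np.arange(0.5, 2.05, 0.1)):
    """Return the exponent eta that maximizes r^2 between
    (EESC/EESC_MAX)**eta * PFP (Eq. (7)) and modelled ozone loss."""
    eesc = np.asarray(eesc, dtype=float)
    pfp = np.asarray(pfp, dtype=float)
    delta_o3 = np.asarray(delta_o3, dtype=float)
    r2 = [np.corrcoef((eesc / eesc.max()) ** eta * pfp, delta_o3)[0, 1] ** 2
          for eta in etas]
    return float(etas[int(np.argmax(r2))]), max(r2)
```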
Exponent for EESC

In the main article, we assess the uncertainty in ΔO3 REG using lower and upper limits of 1 and 1.4 as the exponent for EESC in the expression for OLP. The lower limit of 1 corresponds to a linear dependence of the chemical loss of Arctic O3 on EESC, based upon the work of Douglass et al. 111, who showed that ΔO3 for the Arctic vortex varies linearly with EESC for fixed values of V PSC, for values of EESC spanning 1990–2016. The upper limit of 1.4 was chosen because r² has the same value for η = 1 and η = 1.4 in Supplementary Fig. 12, and also because Jiang et al. 112 showed that the chemical loss of Antarctic ozone varies as a function of chlorine loading to the power of 1.4 over 1980–1990, a period of rapid rise in the chlorine component of EESC.

General circulation models (GCMs) and the QBO of zonal wind

This paper relies extensively on archived GCM output. The computation of PFP is based upon analysis of horizontally and vertically resolved fields of temperature and pressure from 26 CMIP6 GCM simulations constrained by SSP5-8.5 projections of GHGs and 27 CMIP5 GCM runs constrained by RCP 8.5. Supplementary Fig. 13 shows the time series of PFP from CMIP5 GCMs in a manner analogous to Fig. 4 of the main article, which provides results for CMIP6 GCMs; Supplementary Table 4 provides tabular information regarding S PFP−LM, ΔS PFP−LM, the temperature threshold offset for the existence of PSCs, and p-values for CMIP5 GCMs in a manner analogous to Table 1. The modelling centre and literature reference for each of these GCM simulations are given in Supplementary Table 1. On the CMIP5 archive, model output is stored using the nomenclature rLiMpN, where r refers to realization, i refers to initialization method, p refers to physics version, and L, M, and N are integers used to distinguish results from different runs of a particular GCM. Based upon file availability, we have used r1i1p1 output for all GCM runs except for r6i1p1 from CCSM4 for both the historical and RCP 8.5 simulations, and r6i1p1 for the historical and r2i1p1 for the RCP 8.5 runs from GISS-E2-H as well as GISS-E2-R. For CMIP6 output, the nomenclature rLiMpNfO is used, where r, i, and p are the same as described above, f refers to the forcing index and O is a fourth integer.
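When cataloguing archive files, these rLiMpN / rLiMpNfO labels can be parsed mechanically; a small helper (ours, purely illustrative):

```python
import re

def parse_variant_label(label):
    """Split a CMIP variant label such as 'r1i1p1f2' (CMIP6) or 'r6i1p1'
    (CMIP5, which has no forcing index) into integer components."""
    m = re.fullmatch(r"r(\d+)i(\d+)p(\d+)(?:f(\d+))?", label)
    if m is None:
        raise ValueError(f"not a variant label: {label!r}")
    r, i, p, f = (int(g) if g is not None else None for g in m.groups())
    return {"realization": r, "initialization": i, "physics": p, "forcing": f}
```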
In this study, all output is from r1i1p1f1 files except for the use of r1i1p1f2 for historical and SSP runs from the CNRM-CM6-1, CNRM-CM6-1-HR, CNRM-ESM2-1, MIROC-ES2L, and UKESM1-0-LL GCMs, and the use of r1i1p1f3 for the historical and SSP runs from the HadGEM3-GC31-LL and HadGEM3-GC31-MM GCMs. Figure 7 of the paper shows the effect of time-dependent stratospheric H2O on the time series of PFP and OLP from the EC-Earth3 GCM. Supplementary Fig. 14 shows the effect of variable H2O on PFP and OLP from the other three GCMs that appear in Fig. 5. These other GCMs exhibit behaviour similar to the results from EC-Earth3 illustrated in Fig. 7, supporting the robustness of the time series for PFP and OLP across numerous GCMs.

In the main article, we state that examination of the tropical zonal wind from the GCMs indicates that the CMIP6 models tend to provide a better representation of the QBO than was evident in output from CMIP5 GCMs. This feature of the GCMs is illustrated in Supplementary Figs. 15 (reanalysis data) and 16 (GCMs). The model output shown in Supplementary Fig. 16 was mainly based upon archived monthly mean zonal wind fields from each GCM, complemented above 10 hPa by corresponding monthly means computed from daily/six-hourly data where needed; data for each panel are shown up to the highest altitude of each GCM. As can be seen from this figure, the representation of the QBO is considerably more realistic within the CMIP6 GCMs than the CMIP5 models. Supplementary Fig. 17 is similar to Fig. 9, except trends are shown for ΔO3 REG and OLP from the 20 CMIP6 GCMs that submitted results for all four SSPs to the CMIP6 archive, grouped by whether they (a) exhibit a realistic QBO based upon our cursory examination or (b) do not exhibit a rendering of the QBO. There is little difference in the behaviour of ΔO3 REG and OLP between these two groupings of the CMIP6 GCMs. As noted in the main article, a more quantitative analysis of the representation of the QBO in these models reveals deficiencies in the mean amplitude below 20 hPa 54, and substantial effort is currently being directed towards improving the representation of the QBO within GCMs 55.

Further considerations

In the main article we state: the temporal evolution of ΔO3 REG and OLP found using results from the four GCMs with interactive chemistry is about 20–25% lower at the end of the century than that found for the other 16 CMIP6 GCMs; nonetheless, ΔO3 REG remains close to the contemporary value until the end of the century for the SSP3-7.0 and SSP5-8.5 simulations conducted using these interactive GCMs. This finding is illustrated by Supplementary Fig. 18, which is similar to Fig. 9 except that results are shown only for the four CMIP6 GCMs with fully interactive stratospheric chemistry. Supplementary Fig. 19 is also similar to Fig. 9, except trends are shown for the quantity:

$$\frac{{\rm{EESC}}({\rm{yr}})^{\eta}}{{\rm{EESC}}_{\rm{MAX}}^{\eta}}\times {\rm{ARP}}({\rm{yr}})$$ (8)

computed from CMIP6 GCM output. This figure reinforces the notion that remarkably similar conclusions are found upon consideration of the temperature at which chlorine is activated, rather than the PSC existence temperature. Finally, Supplementary Fig. 20 shows results similar to Figs. 8a and 9a, in this case illustrating how ΔO3 REG and OLP vary as a function of time for a multi-model mean of the 27 CMIP5 GCMs that archived results for RCP 8.5, the 26 CMIP6 GCMs that recorded output for SSP5-8.5, and a grand multi-model ensemble of all 53 GCM runs conducted using an end-of-century RF of climate equal to 8.5 W m−2.
Supplementary Figures 17 to 20 provide further evidence that the future rise in GHGs has the potential to cause a significant cooling of the Arctic stratosphere, leading to conditions conducive to large, seasonal loss of Arctic O3, particularly with future levels of stratospheric H2O as shown in Fig. 2d.

Data availability

The data that support the findings of this study are available in Zenodo. ERA5/ERA5.1 and ERA5 BE (preliminary version) data are publicly available. CFSR and CFSv2 data are provided by NOAA's National Centers for Environmental Prediction. MERRA-2 data are provided by the Global Modeling and Assimilation Office at NASA Goddard Space Flight Center. The Japanese 55-year Reanalysis (JRA-55) project was carried out by the Japan Meteorological Agency; that dataset was collected and provided under the Data Integration and Analysis System (DIAS, Project No. JPMXD0716808999), which has been developed and operated by the Ministry of Education, Culture, Sports, Science and Technology. CMIP5 and CMIP6 GCM output are provided by the World Climate Research Programme's Working Group on Coupled Modelling.

Code availability

Code relating to this study is available from the corresponding author on request.
There is a race going on high in the atmosphere above the Arctic, and the ozone layer that protects Earth from damaging ultraviolet (UV) radiation will lose the race if greenhouse gas emissions aren't reduced quickly enough. A new study from an international team of scientists, including University of Maryland Professor Ross Salawitch, shows that extremely low winter temperatures high in the atmosphere over the Arctic are becoming more frequent and more extreme because of climate patterns associated with global warming. The study also shows that those extreme low temperatures are causing reactions among chemicals humans pumped into the air decades ago, leading to greater ozone losses. The new findings call into question the commonly held assumption that ozone loss would grind to a halt in just a few decades following the 2010 global ban on the production of ozone-depleting chemicals called chlorofluorocarbons (CFCs) and halons. The study—which was jointly conducted by UMD, the Alfred Wegener Institute's Helmholtz Centre for Polar and Marine Research, and the Finnish Meteorological Institute—was published in the journal Nature Communications on June 23, 2021. "We're in a kind of race between the slow and steady decline in CFCs, which take 50 to 100 years to go away, and climate change, which is causing polar vortex temperature extremes to become colder at a rapid pace," said Ross Salawitch, who is a professor in the UMD Department of Atmospheric and Oceanic Science, the Department of Chemistry and Biochemistry, and the Earth System Science Interdisciplinary Center. "The increasingly cold temperatures create conditions that promote ozone depletion by CFCs. So, even though these compounds are slowly going away, Arctic ozone depletion is on the rise as the climate changes." New data from the study showed the lowest Arctic polar vortex temperatures and the highest ozone losses on record in 2020, beating the previous records set nine years ago in 2011. The polar vortex is a relatively self-contained, low-pressure system that forms in the stratosphere—at an altitude of about 12 to 50 kilometers (7.5 to 31 miles)—over the Arctic every autumn and stays for varying durations through winter and into spring. The pattern of warm and cold winter temperatures in the polar vortex is very irregular, so not every winter is extremely cold. But the trend toward more frequent and more extreme low temperatures in the polar vortex concerns the researchers, because those conditions promote the formation of clouds, and that promotes ozone loss in the polar stratosphere. Most of the chlorine and a significant amount of the bromine in the stratosphere come from the breakdown of CFCs, halons and other ozone-depleting substances. Normally within the Arctic polar vortex the chlorine is non-reactive, but clouds provide the right conditions for the chlorine to change form and react with bromine and sunlight to destroy ozone. Despite drastic reduction of the industrial production of CFCs and halons since the Montreal Protocol in 1987 and the global ban that followed in 2010, these long-lasting compounds are still abundant in the atmosphere. According to the World Meteorological Organization, atmospheric chlorine and bromine produced by humans are not expected to fall below 50% of their highest levels until the end of this century.
To determine what this situation means for the future, the researchers projected ozone loss out to the year 2100 based on the long-term temperature trend in the polar vortex and the expected decline in chlorine and bromine compounds. They based their predictions on the output from 53 top climate models used by the Intergovernmental Panel on Climate Change. "All but one of the climate models we looked at show that exceptionally cold winters in the polar vortex will get colder over time," Salawitch said. "And the more greenhouse gas emissions there are, the steeper the trend, which means greater ozone depletion." Combining these projections with analyses of meteorological data from the past 56 years, the researchers confirmed that the Arctic is already experiencing a significant trend toward lower stratospheric temperatures and associated increases in ozone losses. What's more, their observations reveal that these trends are occurring at a rate consistent with the fastest climate models. "We have been saying that a train is coming for a number of years now," said Salawitch, pointing to research papers he published in 2004 and 2006 that showed extreme winters in the Arctic were becoming colder. "We've now seen the train whizzing by with record ozone loss in 2011 and now in 2020. So, this paper is really a wake-up call that something is happening in the atmosphere that's really important for ozone, and it looks like greenhouse gases are driving it." Salawitch and his colleagues do not yet fully understand how increasing greenhouse gas emissions and the associated changes to global climate are causing the extreme cold winters in the stratospheric layer of the polar vortex. But some of the underlying mechanisms are understood. Global warming occurs in part because greenhouse gases trap heat closer to Earth's surface, which allows cooling of the upper layers in the stratosphere, where the ozone layer is located. Warming at the surface causes changes to prevailing wind patterns, and the researchers suggest that these changes also produce lower temperatures in the polar vortex. The researchers also note that recent years have seen a rapid increase in methane, a more powerful greenhouse gas than carbon dioxide, in the lower atmosphere. As this gas travels to the stratosphere, it increases humidity, which also leads to conditions that promote ozone-destroying chemical reactions in the Arctic. Because ozone filters much of the sun's potentially harmful UV radiation, a depleted ozone layer over the Arctic can result in more UV radiation reaching the surface of the Earth over Europe, North America and Asia when the polar vortex dips south. But there is hope for avoiding future ozone depletion, according to the researchers. Their study shows that substantial reductions in greenhouse gas emissions over the coming decades could lead to a steady decline in conditions that favor large ozone loss in the Arctic stratosphere. The research paper, "Climate change favours large seasonal loss of Arctic ozone," by Peter von der Gathen, Rigel Kivi, Ingo Wohltmann, Ross J. Salawitch and Markus Rex, was published in Nature Communications on June 23, 2021.
New targets for CAR-T cell therapy against acute myeloid leukemia through AI-assisted analysis (DOI: 10.1038/s41587-023-01684-0)

Unlike other forms of blood cancer, acute myeloid leukemia (AML) cannot currently be treated with CAR-T cell immunotherapy. The reason is that AML lacks specific molecular targets through which engineered immune cells could selectively recognize AML cells and direct an immune attack against the cancer. Two research teams, led by Professor Dr. Sebastian Kobold with Dr. Adrian Gottschlich from the Division of Clinical Pharmacology at LMU University Hospital Munich and by Dr. Carsten Marr with Moritz Thomas from the Institute of AI for Health at Helmholtz Munich, have now succeeded in discovering such targets. The results have been published in the journal Nature Biotechnology. AML, one of several forms of leukemia ("blood cancer"), is a treacherous disease. Five years after the initial diagnosis, only one-third of patients are still alive. Up to 85 percent of patients appear to be cured after intensive chemotherapy. However, in more than half of them, the disease returns within one to two years because the chemotherapy has not destroyed all leukemia cells. In the event of a relapse, a stem cell transplant is the only hope of cure for a patient. But even then, the long-term probability of survival is less than 20 percent. New treatment options are therefore urgently needed. CAR-T cell therapy is an innovative therapy. CAR-T stands for "chimeric antigen receptor T cells." T cells are cells of the immune system. Cancer cells normally evade their attacks by using various molecular tricks, so that T cells no longer recognize their opponents, the cancer cells. During CAR-T cell therapy, T cells are first removed from the patient and then genetically engineered to produce a specific protein (the CAR) on their surface. When these CAR-T cells are injected back into the patient's body, they engage only their designated target; in the approved therapies this is CD19, which ensures that they recognize the patient's cancer cells and bind to them in a targeted manner. The cancer cells consequently die. New targets However, the approved CAR-T cells against CD19 are not suitable for AML, because CD19 is usually not present on the surface of AML cells. Clinical results with CAR-T cells directed against other surface molecules of AML cells have been sobering so far, because those CAR-T cells were unable to distinguish between healthy and malignant cells, which caused significant side effects. The physician Sebastian Kobold and the physicist Carsten Marr, together with colleagues from the LMU University Hospital Munich and the Institute of AI for Health at Helmholtz Munich, set out to find alternative molecules that would ideally be found exclusively on the surface of AML cells. With the help of extensive bioinformatic analyses and the integration of expression data from more than half a million individual cells, two candidates finally emerged from 25,000 potential cell surface molecules: CSF1R and CD86. "Such an analysis would not have been possible a few years ago, since the required single-cell data has been generated only very recently," says Marr, who led the AI-assisted analysis in the study at Helmholtz Munich. In the laboratory at the LMU University Hospital Munich, the researchers produced CAR-T cells that precisely target these molecules. The cells were then tested on different AML models, including AML cells from patients.
The results, according to Kobold, are promising: "On the one hand, these CAR-T cells are effective against AML, but on the other hand, they hardly destroy healthy cells." The study impressively demonstrates how the synergy of interdisciplinary research groups can lead to breakthroughs in health research that treat patients in the best possible way. The researchers' next goal is to develop GMP (good manufacturing practice)-capable processes to produce CAR-T cells that can then also be used in clinical trials with AML patients. This is to take place within the framework of the "Bavarian Cell Therapy Catalyst," which is supported by the Bavarian Research Foundation. Kobold expects the first tests with patients in two to three years.

Abstract

Chimeric antigen receptor T cells (CAR-T cells) have emerged as a powerful treatment option for individuals with B cell malignancies but have yet to achieve success in treating acute myeloid leukemia (AML) due to a lack of safe targets. Here we leveraged an atlas of publicly available RNA-sequencing data of over 500,000 single cells from 15 individuals with AML and tissue from 9 healthy individuals for prediction of target antigens that are expressed on malignant cells but lacking on healthy cells, including T cells. Aided by this high-resolution, single-cell expression approach, we computationally identify colony-stimulating factor 1 receptor and cluster of differentiation 86 as targets for CAR-T cell therapy in AML. Functional validation of these established CAR-T cells shows robust in vitro and in vivo efficacy in cell line- and human-derived AML models with minimal off-target toxicity toward relevant healthy human tissues. This provides a strong rationale for further clinical development.

Main

Chimeric antigen receptor T cells (CAR-T cells) are human-derived effector cells that are genetically engineered to therapeutically target a specific epitope on malignant cells 1. CAR-T cells targeting the B cell lineage antigens cluster of differentiation 19 (CD19) or B cell maturation antigen (BCMA) have shown clinical efficacy in heavily pretreated individuals suffering from different B cell malignancies, such as B cell lymphoma, B cell acute lymphoblastic leukemia and multiple myeloma 2, 3, 4. However, CAR-T cells targeting non-B cell-associated epitopes have yet to show similar response rates 5.
For instance, in myeloid malignancies, such as acute myeloid leukemia (AML), common target structures are often coexpressed on vital tissues, such as endothelial cells or hematopoietic stem and progenitor cells (HSPCs), increasing the risk for on-target off-tumor toxicity 6 , 7 . Identifying safe target structures is thus pivotal to translate the vast potential of CAR-T cell therapy to myeloid neoplasms. AML is the most common acute leukemia in adults, and its molecular heterogeneity has complicated the successful development of new therapeutic agents 8 . Despite upfront curative intent in most individuals with combinatorial chemotherapy, disease relapse is frequent, occurring in over 50% of treated individuals 9 . After relapse, allogeneic hematopoietic stem cell transplantation (allo-HSCT) remains the only curative approach; but even then, long-term survival probabilities are below 20%. Therefore, innovative treatment options represent a high unmet medical need. Currently, CAR-T cells targeting AML-associated target antigens CD33 and interleukin-3 receptor-α (IL3RA, CD123) are undergoing clinical investigation. Due to preclinical evidence of off-tumor toxicity toward HSPCs, most clinical trials are evaluating the potential of anti-CD123 or anti-CD33 CAR-T cells as a bridge-to-transplant regimen before allo-HSCT. Early reports of these trials have shown only limited therapeutic efficacy 10 , 11 , 12 . Yet, more complete results of these clinical studies in AML are eagerly awaited. Meanwhile, other targets, such as CD70, C-type lectin-like molecule-1, FMS-like tyrosine kinase-3 (FLT3), CD44 variant 6 (CD44v6), sialic acid-binding Ig-like lectin-6 (Siglec-6) or CD117, have been tested in preclinical studies as alternative CAR targets 13 , 14 , 15 , 16 , 17 . However, clinical validation is pending, and expression profiles of most of the targets raise at least some uncertainties regarding their clinical safety and efficacy. Newly developed CAR-T cells are often directed to target structures that have already been used for antibody therapy. By contrast, unbiased de novo target screenings for CAR-T cell therapy have rarely been conducted 18 . In addition, until recently, off-tumor antigen projections could only leverage bulk sequencing data, missing detailed information about cell-type-specific target antigen expression patterns. Conveniently, the revolution in single-cell technologies in the last decade has generated massive single-cell expression datasets that provide precise information about the transcriptomic anatomy of healthy and malignant cells 19 , a mostly untapped resource for therapeutic development, at least in the context of de novo antigen predictions and CAR-T cell development. These advancements allow in-depth on- and off-tumor antigen prediction 20 , offering unique insights into healthy and malignant cells at an unmatched resolution. We thus developed a single-cell RNA-sequencing (scRNA-seq)-based approach specifically tailored to identify promising antigens for CAR-T cell therapy on a discovery AML cohort of 15 individuals 21 . We generated a transcriptomic atlas from publicly available datasets, consisting of over 28,000 healthy and malignant bone marrow cells from these individuals and over 500,000 healthy cells from nine of the most vital human tissues. We screened these data for cell surface antigens expressed on malignant cells with minimal coexpression on healthy cells, including T cells. 
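Although the screening code itself is not reproduced in this section, the four criteria translate naturally into a short scanpy pipeline. The sketch below is schematic: the AnnData layouts, cell-type label names and the 0.1 off-target expression cutoff are our assumptions, while the log2 fold change > 2, FDR-adjusted P < 0.01 thresholds and the t-test with overestimated variance follow the figure legends.

```python
import scanpy as sc

def screen_car_targets(adata, surface_genes, coota, critical_clusters):
    """Schematic four-step CAR target screen (names illustrative).

    adata : AnnData of AML bone marrow with obs['cell_type'] separating
        malignant HSPC-like cells from healthy HSPCs and T cells.
    coota : AnnData of pooled healthy-tissue cells (off-target atlas).
    """
    # 1) overexpression on malignant HSPC-like cells vs healthy HSPCs
    sc.tl.rank_genes_groups(adata, "cell_type",
                            groups=["HSC-like", "Prog-like"],
                            reference="HSC_healthy",
                            method="t-test_overestim_var")
    df = sc.get.rank_genes_groups_df(adata, group=None)
    hits = set(df.query("logfoldchanges > 2 and pvals_adj < 0.01")["names"])

    # 2) keep only genes encoding cell surface proteins
    hits &= set(surface_genes)

    # 3) exclude genes expressed on T cells
    t_mean = adata[adata.obs["cell_type"] == "T"].to_df().mean()
    hits -= set(t_mean[t_mean > 0.1].index)

    # 4) exclude genes expressed on critical healthy clusters in the atlas
    for cluster in critical_clusters:
        c_mean = coota[coota.obs["cell_type"] == cluster].to_df().mean()
        hits -= set(c_mean[c_mean > 0.1].index)
    return hits
```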
With rigorous cutoffs, we identified two unrecognized targets for CAR-T cells in AML: colony-stimulating factor 1 receptor (CSF1R) and CD86. We developed CAR-T cells against both targets and tested their efficacy in vitro and in vivo in cell lines and human-derived models, including primary AML blasts. We assessed the safety of these CAR-T cells in vitro using advanced primary cell cultures for target-expressing cell types, demonstrating a better discriminatory capacity than established anti-CD33 CAR-T cells. In addition, we used several in vivo models to mitigate safety concerns. Our results illustrate the translational potential of an unbiased scRNA-seq-based screening approach and lay the basis for clinical development of our CAR candidates. Results Development of scRNA-seq-based screening algorithm We created an unbiased scRNA-seq-based discovery approach for identification of CAR targets. To ensure CAR efficacy, a suitable candidate is (1) overexpressed in malignant cells and (2) located on the cell surface. In terms of CAR safety, the candidate should (3) not be expressed on T cells and (4) show minimal expression across vital, healthy tissues (Fig. 1a ). Applying our approach to AML, we used publicly available scRNA-seq data from 15 individuals with AML 21 . From these, a total of 28,404 sequenced healthy and malignant bone marrow cells passed quality control (Fig. 1b,c ; see Methods for a detailed description of quality control steps). For maximal CAR efficacy, we sought to identify candidates with higher expression on malignant HSPC-like cells (herein termed hematopoietic stem cell (HSC)-like and progenitor (Prog)-like) than on healthy cells. Differential gene expression analyses between malignant and healthy HSPCs revealed 96 genes that were strongly overexpressed in HSPC-like cells and were used for further downstream analyses (Extended Data Fig. 1a ). Fig. 1: A scRNA-seq-based screening approach identifies CSF1R and CD86 as potential CAR targets in AML. a , Workflow of computational CAR target antigen identification by stepwise evaluation against a set of criteria for an ideal and effective CAR target antigen. The decreasing numbers of screened AML target genes are shown on the bottom. b , c , UMAP showing 28,404 healthy and malignant cells from data of 15 previously published individuals with AML harboring 15 different mutations 21 . Normalized gene expression values were log transformed. Colors highlight the different cell types ( b ) and condition ( c ). Cell annotations are provided; NK cells, natural killer cells; GMP, granulocyte–monocyte progenitors; ProMono, promonocytes; EarlyEry, early erythrocytes; ProB cells, pro-B cells; Mono, monocytes; cDC, conventional dendritic cells; pDC, plasmacytoid dendritic cells; LateEry, late erythrocytes. d , Summary of databases used to identify cell surface coding genes. e , Quantification of T cell expression of newly identified targets. Red crosses indicate targets with high expression on T cells, which were excluded from further analyses. Green check marks indicate no significant expression on T cells. f , Harmonization of 11 scRNA-seq datasets from nine healthy human tissues into a COOTA consisting of 544,764 cells. A detailed summary of all used datasets is provided in Extended Data Fig. 1b . Targets highly expressed in non-immune cell lineages or on cell types in direct proximity to infused T cells (critical cell clusters: arterial, capillary, venous, endothelial and smooth muscle cells) were excluded from further analysis. 
g , Volcano plot showing the remaining two target antigens with their respective FDR-adjusted log10(P value) and log2(fold change) values from differential expression analysis between malignant HSPC-like cells and healthy HSPCs using a t-test with overestimated variance. Dashed lines indicate applied thresholds at a log2(fold change) of 2 and P value of 0.01.

To identify candidates accessible for CAR-T cells on the target cell surface, we used OmniPath 22, a large-scale molecular database, to integrate data from multiple resources 23, 24, 25, 26 into a comprehensive human surface gene library of 4,924 genes (Fig. 1d). Of the 96 genes overexpressed in HSPC-like cells, 36 were present in this library. Genes that passed all previous filters but showed high expression on T cells (for example, CD52 and CRIP1) were excluded from further analysis (Fig. 1e). To minimize on-target off-tumor effects, we processed and harmonized 11 scRNA-seq datasets from nine healthy human tissues (brain, lung, lymph nodes, heart, skin, liver, kidney, colon and esophagus) into a massive cross-organ off-target transcriptomic atlas (COOTA) consisting of over 500,000 single healthy cells (Fig. 1f) 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37. A detailed summary of all datasets used for COOTA is provided in Extended Data Fig. 1b,c. Targets highly expressed in vital non-immune cell lineages or on cell types of tissues in direct proximity to infused T cells (that is, endothelium, arteries, veins, bronchial vessels, capillary and smooth muscle cells) were excluded from further analyses (Fig. 1f). Using this stringent and rigorous approach, 12 potential candidates for CAR development remained. Interestingly, most of the described CAR targets for AML (n = 20) failed the thresholds of our analyses at different levels (Extended Data Fig. 1d). For example, prototypic AML antigens CD33 and CD123 did not fulfill our strict criteria of overexpression in malignant HSPCs (see Methods for applied thresholds), most likely due to expression of both antigens on healthy HSPCs. In addition, CD123 had high expression levels across endothelial and various lung cell types (see Fig. 2d for detailed analysis). Fig. 2: CSF1R and CD86 are preferentially expressed on malignant HSPC-like cells compared to healthy HSPCs, and off-tumor expression is restricted to infiltrating or tissue-resident immune cells. a , Expression of target and reference genes (CD123 and CD33) in single healthy and malignant cell types. Normalized expression values were log transformed and scaled to unit variance; Cand., candidates; Ref., references. b , Expression of CSF1R and CD86 target genes in malignant (HSC-like and Prog-like; left) and healthy (HSC and Prog; right) stem cells. For visualization purposes, normalized expression values of healthy HSPCs and a random subsample of malignant HSPCs were log transformed and scaled to unit variance. Each peak corresponds to a cell, and peak height indicates expression intensity. c , Expression of CSF1R and CD86 target genes in healthy and malignant cells from 15 individuals with AML. Normalized gene expression values were log transformed and visualized in a UMAP embedding. d , Single-cell COOTA screening for target (CSF1R and CD86) and reference (CD123 and CD33) genes. The single-cell transcriptomic atlas consists of a total of 544,764 sequenced cells from nine different organs. Each field represents the mean expression value per cluster.
Blank fields indicate cell types not present in a study. e , Representative flow cytometry images of target gene expression on a panel of six different AML cell lines or NALM-6 control cells. Staining for target antigens was performed at least twice; MFI, mean fluorescence intensity. f , Expression of target antigens on human immune cell populations quantified by flow cytometry. Data are shown as mean ± s.e.m. from four different donors; CM, classical monocytes; IM, intermediate monocytes; NM, non-classical monocytes.

To further optimize the safety profile of newly developed CAR-T cells, we reasoned that, if targeted therapies for any of the 12 identified candidates have already been approved by the Food and Drug Administration (FDA), the risk for unexpected, severe on-target off-tumor toxicities of newly developed CAR-T cells will be minimized. In addition, this could shorten the length of time and decrease regulatory hurdles for translation of newly developed CAR-T cells into clinical routines, as the safety of target-directed therapies was previously demonstrated. Thus, we used an accessible database of all monitored FDA-approved drugs that contains information on the interactions, pharmacology and chemical structures of drugs and drug targets 38. We identified two targets, CD86 and CSF1R, which have already undergone clinical investigation (Fig. 1g). To the best of our knowledge, neither anti-CD86 nor anti-CSF1R CAR-T cells have previously been explored for CAR-T cell therapy in AML. We thus decided to further investigate their potential. Both antigens were highly expressed across malignant cells in 100% of the individuals with AML in whom malignant blasts were captured (11 of 15; Extended Data Fig. 2a,b), despite the heterogeneous molecular profile of the participant collective (see van Galen et al. 21 for participant characteristics). To ensure the validity of our analyses and to better reflect the cytogenetic diversity of AML as a disease, we next sought to further increase the size of our cohort. Thus, we obtained a second publicly available scRNA-seq dataset of five additional individuals with AML 39 (Extended Data Fig. 2c). For cross-validation of our computational target identification approach, we used scANVI, a semisupervised variational autoencoder 40, to map the data from Petti et al. 39 onto a newly generated reference map of van Galen et al. 21 (Extended Data Fig. 2e). In line with the results above, CSF1R and CD86 were preferentially expressed in malignant cells compared to healthy hematopoietic cells (Extended Data Fig. 2d). Next, after extending our target identification approach to these five additional individuals with AML (Fig. 1a), both CSF1R and CD86 were again identified as suitable target antigens for CAR therapy in this second AML cohort (Extended Data Fig. 2f,g). In summary, using two independent single-cell AML cohorts consisting of a total of 20 individuals, we identified CSF1R and CD86 as potential CAR targets for AML therapy.

On- and off-tumor expression analysis of CSF1R and CD86

Next, we benchmarked the two target antigens CSF1R and CD86 against the reference genes CD123 and CD33 to ease interpretation of receptor expression on a transcriptomic level (Fig. 2a–c). CSF1R was expressed in all six malignant cell clusters, but was most highly expressed in monocyte-like and conventional dendritic cell-like clusters. CD86 was most strongly expressed in monocyte-like, promonocyte-like and conventional dendritic cell-like clusters (Fig. 2a).
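The cross-validation step described above follows the standard scvi-tools reference-mapping recipe; a sketch assuming ref_adata (annotated van Galen et al. cells) and query_adata (Petti et al. cells) are preprocessed AnnData objects, with illustrative key names (the study's exact training configuration is not reproduced here):

```python
import scvi

# Train an unsupervised SCVI model on the annotated reference
scvi.model.SCVI.setup_anndata(ref_adata, batch_key="donor")
vae = scvi.model.SCVI(ref_adata)
vae.train()

# Upgrade to semisupervised scANVI using the reference cell-type labels
scanvi = scvi.model.SCANVI.from_scvi_model(
    vae, labels_key="cell_type", unlabeled_category="Unknown")
scanvi.train()

# Map query cells onto the reference and transfer cell-type labels
query = scvi.model.SCANVI.load_query_data(query_adata, scanvi)
query.train(max_epochs=100, plan_kwargs={"weight_decay": 0.0})
query_adata.obs["predicted_cell_type"] = query.predict()
```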
In terms of expression in malignant HSPC clusters, CSF1R expression was higher than that of CD86, albeit lower than that of the CD123 and CD33 reference genes (Fig. 2a,b). In contrast, whereas CD123 and CD33 were detected in healthy HSCs and progenitors, both CSF1R and CD86 were only minimally expressed in these cells (Fig. 2b). Visualized in a uniform manifold approximation and projection (UMAP) embedding, the expression profiles of CSF1R and CD86 were very comparable to those of the CD123 and CD33 reference genes (Fig. 2c).

COOTA analysis revealed target antigen expression mainly in immune cells of myeloid origin (monocytes, macrophages and dendritic cells), similar to the peripheral expression profile of CD33 (Fig. 2d). CSF1R and CD86 were not highly expressed on epithelial or stromal cells (Fig. 2d, top). In organ-specific cell clusters (Fig. 2d, bottom), expression was restricted to microglia in the brain, as described in the literature (ref. 41).

We next sought to assess expression of the target antigens at the protein level. We performed primary screening using a panel of six different human AML cell lines (THP-1, Mv4-11, OCI-AML-3, PL-21, MOLM-13 and U937) and B cell malignant NALM-6 cells as negative-staining control cells (Fig. 2e). CSF1R and CD86 were detected on all screened AML cell lines. CD123 and CD33 expression was measured as a reference (Fig. 2e). Given that the expression profiles of our targets on mature, healthy immune cells were similar to that of CD33, we decided to use CD33 as the main control for all subsequent experiments. To validate the transcriptomic profiles predicted by COOTA, we assessed receptor expression of each candidate antigen on peripheral blood immune cells from healthy donors using multicolor flow cytometry (Fig. 2f). In accordance with our transcriptomic prediction, expression of CSF1R and CD86 was mainly restricted to monocytic cell populations, with no expression on granulocytes or T cells (Fig. 2f).

Anti-mouse CSF1R CAR-T cells (mCSF1R CART) do not cause toxicity in mice

Despite the stringent thresholds set by our approach and our in-depth off-tumor antigen projection, the expression patterns of CSF1R and CD86 were still broader than those of candidates in clinical use (CD19 and BCMA), which are almost entirely confined to B cells or B cell subsets (ref. 11). Therefore, we first tested the safety of the developed anti-target CAR-T cells in fully immunocompetent syngeneic mouse models. To ensure similar target expression in mice and humans, we compared expression of the candidates in different organs using available bulk sequencing data (Fig. 3a). CSF1R showed high expression across organs in both mice and humans, while CD86 was only detected in the spleen. Also, in line with our COOTA prediction, CSF1R is known to be expressed on microglia (ref. 42), raising additional safety concerns. scRNA-seq analysis of archived mouse brain tissue (ref. 27) confirmed expression of Csf1r in microglia and similar expression patterns in tissue-resident myeloid cells (Fig. 3b).

Fig. 3: mCSF1R CART do not cause toxicity in mice.
a, Target expression (transcripts per million) across organs in humans (top) or mice (bottom) quantified using bulk RNA-seq. b, Target expression in single mouse brain cells. A UMAP embedding of sequenced brain cells is shown on the left. Each peak corresponds to a cell, and peak height indicates expression intensity. Normalized, log-transformed antigen expression per cell type is shown on the right. c, Construct expression on transduced primary mouse T cells.
d, Activation of mCSF1R or mEpCAM CART after incubation with plate-bound mCSF1R measured by flow cytometry. e, mCSF1R or mEpCAM CART cocultured with J774A.1-Luc+ cells for 48 h. Cell lysis was quantified by BLI (left). Secretion of mouse IFNγ (mIFNγ) was quantified by enzyme-linked immunosorbent assay (ELISA; right). Data in d and e are shown as mean ± s.e.m. of three independent experiments. For data in e (right), statistical significance was calculated by unpaired t-test. f, Treatment schedule for in vivo toxicity assessment of mCSF1R CART. g, Weight curves of mice treated with 3 × 10⁶ mCSF1R CART (n = 9) or mCherry T cells (n = 11); 6 × 10⁶ mEpCAM CART (n = 5) were transferred as a toxicity control. Error bars indicate s.e.m.; NS, not significant. h, Quantification of mCherry+ T cells of the parent population (top; parent population: CD3+CD8+ cells) or CD11b+ cells (bottom) by flow cytometry. Data are shown as mean ± s.e.m. of n = 6 mice. The shown statistical significance applies to day 7. i, Serum cytokine levels 1 d (d1) or 7 d (d7) after ACT. Cytokine levels were measured with LEGENDplex; n = 3 mice. Statistically significant increases in serum cytokine levels (mEpCAM versus mCSF1R CART or mCherry T cell-treated mice) occurred at day 7: IFNγ (P = 0.0371), CXCL9 (P = 0.0096) and CXCL10 (P < 0.0001); TNFα, tumor necrosis factor-α; VEGF, vascular endothelial growth factor; GM-CSF, granulocyte–macrophage colony-stimulating factor. j, Treatment regimen to assess neurological toxicity in CX3CR1–GFP reporter mice. k, Weight curves of mice after i.c. (2 × 10⁵) or i.v. (3 × 10⁶) injection of mCSF1R CART or mCherry T cells. l,m, Quantification of transferred T cells (l) or microglia (m) by TPLSM. The indicated P values in m apply to comparisons between all groups. n, Mean body volume of microglia. The indicated P values apply to comparisons between mCSF1R CART i.c. and mCherry T cells i.c. o, Representative maximum intensity projection of microglia or macrophages (green) in CX3CR1–GFP mice after i.c. injection of mCSF1R CART (red, top) or mCherry T cells (red, bottom). White arrowheads indicate microglia and macrophages with higher mean density and mean body volume; depth from brain surface, 0–100 µm. Data in k, m and n are shown as mean ± s.e.m.; mCSF1R CART: i.c. n = 5 and i.v. n = 4; mCherry T cells: i.c./i.v. n = 2. Data in l are shown as mean ± s.e.m.; mCSF1R CART: i.c. n = 3 and i.v. n = 4; mCherry T cells: i.c. one control mouse. For all data, if not otherwise indicated, statistical significance was calculated by two-way analysis of variance (ANOVA) with a Sidak multiple-comparison correction.

Given the above, we decided to use CSF1R to model potential off-target toxicity in mice. We sequenced an mCSF1R antibody-producing hybridoma and designed second-generation mCSF1R CART (Extended Data Fig. 3a). Mouse anti-EpCAM CAR-T cells (mEpCAM CART) or mCherry-transduced T cells were used as negative controls for all experiments (Extended Data Fig. 3a). The mCSF1R CAR construct could be efficiently transduced into primary mouse T cells (Fig. 3c). mCSF1R CART were dose-dependently activated by Fc-immobilized recombinant mouse CSF1R protein, as seen by upregulation of the activation marker CD69 (Fig. 3d, left) and cell surface exposure of the degranulation marker CD107a (Fig. 3d, right) compared to mEpCAM CART.
To further validate the functionality of the developed mCSF1R CART, we investigated their killing capacity toward mCSF1R-expressing cell lines. For this purpose, we selected the mouse reticulum cell sarcoma cell line J774A.1, which expresses mCSF1R (ref. 43). Using flow cytometry, we verified expression of mCSF1R on J774A.1 cells, while mEpCAM was not detected (Extended Data Fig. 3b). Coculturing mCSF1R or mEpCAM CART with J774A.1 tumor cells demonstrated efficient lysis of J774A.1 tumor cells by mCSF1R CART (Fig. 3e, left). As a marker of selective activation, high amounts of interferon-γ (IFNγ) were secreted by mCSF1R CART (Fig. 3e, right).

Next, we used in vivo experiments to assess the risk of on-target toxicities. Initially, mCSF1R CART or control cells were injected intravenously (i.v.) into healthy C57BL/6 mice, resulting in only limited engraftment (Extended Data Fig. 3c–e). To enhance persistence of the T cells, mice were next preconditioned using whole-body irradiation (WBI; 5 Gy) 5 d before adoptive cell transfer (ACT) of mCSF1R CART (Fig. 3f). High counts of mEpCAM CART were used as positive controls, while mCherry T cells were used as negative controls. Following transfer of T cells, we did not detect a measurable change in weight, a sensitive surrogate for toxicity in mice, in mCSF1R CART-treated animals (Fig. 3g). In comparison, as described in the literature (ref. 44), mEpCAM CART-treated mice rapidly lost weight 1 week after ACT (Fig. 3g). On day 7, when mEpCAM CART-treated mice reached the predefined experimental endpoint criteria, organs were collected for subsequent analyses. The remaining mCSF1R CART- or mCherry T cell-treated mice were killed 2 weeks after ACT, and organ-derived cell suspensions were analyzed by flow cytometry. We detected higher percentages of mCSF1R CART than of mCherry-transduced T cells in all organs, indicative of better persistence (or antigen-dependent proliferation) of mCSF1R CART (Fig. 3h, top). We observed lower numbers of tissue-resident CD11b+ cells in the kidney, liver and lung but not in the other analyzed organs (Fig. 3h, bottom), most likely due to on-target effects of mCSF1R CART. Multiplex serum cytokine measurements on day 1 or day 7 after ACT revealed no differences in cytokine levels on day 7 between mCSF1R CART- and mCherry T cell- or PBS-treated mice (Fig. 3i). By contrast, high levels of proinflammatory cytokines, such as IFNγ, CXCL9 or CXCL10, were detected in the sera of mice that received mEpCAM CART (Fig. 3i). Similarly, serum levels of clinically used markers of organ damage (for example, urea, bilirubin and liver enzymes) were elevated in mice treated with mEpCAM CART but not in mice that received mCSF1R CART or mCherry T cells (Extended Data Fig. 3g). Finally, we performed histopathological analysis of organs with known high expression of CSF1R. mCSF1R CART-treated mice did not exhibit any signs of organ damage in hematoxylin and eosin-stained lungs, livers or spleens (Extended Data Fig. 3h). Notably, as previously reported (ref. 44), the lungs of mEpCAM CART-treated mice showed thickening of the alveolar epithelium, indicative of on-target off-tumor toxicities of the transferred mEpCAM CART (Extended Data Fig. 3h).

To investigate the homing and killing potential of CAR-T cells in the brain, we made use of CX3CR1–GFP reporter mice, which enable direct visualization of CAR-T cell–microglia interactions by two-photon laser scanning microscopy (TPLSM). After implantation of cranial windows, mCSF1R CART or mCherry-transduced T cells were administered either i.v. or intracranially (i.c.)
into the CX3CR1–GFP reporter mice (Fig. 3j–o). T cell–microglia interactions, changes in microglia morphology and reduction of overall microglia counts were monitored using TPLSM for a total of 28 d (Fig. 3l–o). Again, we observed no changes in weight or behavior across all treatment groups (Fig. 3k). We detected high T cell numbers following i.c. implantation of mCSF1R CART or mCherry T cells (Fig. 3l,o). These numbers gradually declined over the course of 28 d, regardless of whether mice were implanted with mCSF1R CART or mCherry T cells. At day 28, no transferred T cells could be detected in any group (Fig. 3l,o). Furthermore, microglia numbers did not substantially differ between any of the groups (Fig. 3m,o). Following i.c. implantation of T cells, the mean body volume of microglia increased, most likely due to activation of the cells (ref. 45) (Fig. 3n,o). This activation was most pronounced in mice injected with mCSF1R CART (Fig. 3n,o). However, by day 28, signs of microglia activation had diminished in all groups (Fig. 3n,o). After i.v. injection of T cells, we detected neither mCSF1R CART nor mCherry control T cells in the brain and observed no signs of microglia activation or depletion (Fig. 3k–o and Extended Data Fig. 3i). Our results suggest that, despite expression of the target antigens on tissue-resident immune cells in different organs and on microglia, there were no relevant safety signals that would prevent further therapeutic development.

Anti-human CAR-T cells exhibit high potency in AML xenograft models

After demonstrating the safety of CSF1R targeting in various syngeneic mouse models, we next aimed to validate the targets in human models. We cloned an anti-human CSF1R-binding single-chain variable fragment (scFv) into the preexisting anti-mCSF1R CAR backbone, which allows direct cross-comparison of the activation thresholds of anti-mouse and anti-human CAR-T cells in mice and humans. In addition, we created two fully human anti-CSF1R CAR constructs harboring either a CD8 or CD28 hinge domain (hCSF1R CART 1–3; Extended Data Fig. 4a, left). First, we extensively cross-compared the functionality of the different anti-hCSF1R CAR constructs. All constructs could be efficiently introduced into primary human T cells (Extended Data Fig. 4b) and were dose-dependently activated by recombinant plate-bound hCSF1R protein (Extended Data Fig. 4c). The CAR products efficiently lysed all six human AML cell lines tested but not antigen-negative NALM-6 cells (Extended Data Fig. 4d). Constructs harboring CD8 hinge domains showed a tendency toward higher lytic potency at lower effector-to-target (E:T) cell ratios (Extended Data Fig. 4d). To evaluate antigen-specific proliferation, we cocultured CSF1R CAR-T cells with AML cell lines for 4 or 7 d. All CSF1R CAR-T cells showed antigen-specific, time-dependent proliferation (Extended Data Fig. 4e). Absolute quantification of T cell numbers revealed a more robust expansion of the CD8 hinge-based anti-CSF1R CAR constructs (Extended Data Fig. 4f). All CSF1R CAR-T cells secreted high amounts of IFNγ after coculture with THP-1, Mv4-11 or OCI-AML-3 AML cell lines but not when cocultured with NALM-6 control cells (Extended Data Fig. 4g). Building on these results, we decided to proceed with the CSF1R CAR-T cells harboring a CD8 hinge domain (hCSF1R CART 1, herein named hCSF1R CART). Constructs for human CD86 CAR-T cells (CD86 CART) and human CD33 CAR-T control cells (CD33 CART) were designed analogously (Extended Data Fig. 4a, right).
All CAR-T cell products could be efficiently introduced into primary human T cells (Fig. 4a). To validate the functionality of CD86 CART and to compare the sensitivity thresholds of both newly developed therapeutics, both CAR-T cell products were incubated with their respective plate-bound antigens (Fig. 4b). Activation of CD86 CART was already observed at very low concentrations of target protein (0.01 µg ml–1). In comparison, hCSF1R CART required concentrations of 1 µg ml–1 or higher (Fig. 4b). We cocultured all CAR-T cells with AML cell lines and assessed both specific lysis of AML cells and antigen-dependent proliferation (Fig. 4c,d). hCSF1R and CD86 CART efficiently lysed all six AML cell lines, comparable to CD33 CART (Fig. 4c), and proliferated to a similar extent (Fig. 4d). CD19 CAR-T cells (CD19 CART) were used as control-transduced cells.

Fig. 4: Anti-target CAR-T cells are functional and efficiently lyse AML cell lines in vitro and in vivo.
a, Representative flow cytometric images of construct expression on primary human T cells. b, Activation of hCSF1R or CD86 CART after incubation with plate-bound hCSF1R or hCD86 protein was quantified by flow cytometry. Data are shown as mean ± s.e.m. of three different donors. c, hCSF1R or CD86 CART were cocultured with luciferase-positive target antigen-expressing AML tumor cell lines or antigen-negative NALM-6 control cells expressing luciferase for 48 h at the indicated E:T ratios. CD33 and CD19 CART (CTRL-transduced) were used as positive or negative controls, respectively. Cell lysis was quantified by BLI. Data are shown as mean ± s.e.m. of three different donors. d, Dye-labeled CAR-T cells were cocultured with the above indicated cell lines for 7 d at an E:T ratio of 0.5:1 to assess proliferation. One representative image of three different donors is shown. e, Diagram of the treatment scheme used for in vivo experiments. f–h, BLI images (f), survival curves (g) and quantification of tumor burden (h) in Mv4-11 tumor-bearing mice after treatment with different CAR-T cells; n = 5 mice per group. i–k, BLI images (i), survival curves (j) and quantification of tumor burden (k) of THP-1-bearing mice treated with hCSF1R CART or control-transduced T cells; n = 10 mice per group. Shown are pooled data ± s.e.m. from two independent experiments. Red crosses in f and i indicate mice that succumbed to disease. For all experiments, statistical significance was calculated by two-way ANOVA with a Sidak multiple-comparison correction. For Kaplan–Meier curves, statistical significance was calculated with a log-rank test.

To prove the in vivo efficacy of the newly developed CAR-T cells, we injected NOD-scid Il2rgnull (NSG) mice with a lethal dose of Mv4-11 AML cells and treated them with hCSF1R, CD86, CD33 or CD19 (control) CART (Fig. 4e). We monitored tumor progression using bioluminescence imaging (BLI). Both hCSF1R and CD86 CART eliminated the Mv4-11 tumor burden in vivo (Fig. 4f–h). To provide in vivo proof of efficacy in another AML model, we injected a lethal dose of THP-1 cells i.v. into NSG mice and treated them with CSF1R, CD33 or control CART (Fig. 4e). Again, hCSF1R CART efficiently controlled the experimental leukemia, with complete remission (CR) rates similar to those of CD33 CART (hCSF1R CART: CR in seven of ten; CD33 CART: CR in eight of ten) and overall survival of up to 80 d after tumor cell injection (Fig. 4i–k).
In summary, we were able to demonstrate both in vitro and in vivo efficacy of the newly developed hCSF1R and CD86 CART toward a large panel of human AML cell lines.

hCSF1R and CD86 CART are effective in primary human models

We next assessed receptor expression in primary AML samples. Until now, CSF1R expression on primary AML blasts was thought to be restricted to 'AML-supportive cells' or only to mature leukemic cells (ref. 46). Indeed, when analyzing surface CSF1R expression on frozen bone marrow samples immediately after thawing, we could not detect any measurable receptor expression by flow cytometry (Fig. 5a,b). However, when primary AML cells were cocultured on MS-5 mouse bone marrow stromal cells (Extended Data Fig. 5b), we observed a strong, time-dependent increase of CSF1R expression (Fig. 5a,b). We hypothesized that these discrepancies in measurable surface CSF1R expression were most likely due to receptor downmodulation during the freezing and thawing process. To probe this, we analyzed receptor expression on AML cell lines after freeze–thaw cycles. Similar to the results seen on primary AML blasts, CSF1R was undetectable directly after thawing but regained high expression after 24 to 48 h of culture (Extended Data Fig. 5a). To further exclude any cell culture artifacts, we analyzed surface receptor expression on primary AML blasts after culturing in cytokine-rich medium (ref. 47) (Extended Data Fig. 5c). Again, CSF1R was highly expressed on malignant primary AML blasts after culture (Extended Data Fig. 5d). We also confirmed expression of CD86 on primary AML blasts (Fig. 5c).

Fig. 5: CSF1R and CD86 are readily detected on primary AML samples, and hCSF1R CART show efficient lysis of primary AML samples in vitro and in vivo.
a, Expression of CSF1R following thawing of primary AML samples over 72 h. Each line represents one individual. b, Representative histograms of CSF1R (colored) expression on primary AML samples over time in comparison to isotype control (gray). c, Expression of CD86 on primary AML samples. Each dot represents one individual. Left: percentage of CD86+ cells gated to isotype. Right: representative histograms of four different individuals. Data are shown as mean ± s.e.m. of 11 different primary AML samples. d, hCSF1R, CD86 or CD33 CART or untransduced T cells (UT) were cocultured with primary AML samples for 72 h. Specific lysis was assessed by flow cytometry. Data are shown as mean ± s.e.m. of seven different primary AML samples. Indicated P values apply to an E:T ratio of 0.5:1. e, hCSF1R CAR construct transduced into T cells of individuals with AML. Left: transduction efficiency of human AML-derived CAR-T cells. Right: representative flow cytometry image. f, Human-derived CAR-T cells or untransduced T cells were cocultured with primary AML samples from the same donor. Experiments were performed as outlined in d. Data in e and f are shown as mean ± s.e.m. of three different autologous donors. g, Summary of the treatment scheme used for in vivo experiments. h–j, BLI images (h), survival curves (i) and BLI quantification of tumor burden (j) of PDX-573 tumor-bearing mice injected with 6 × 10⁶ hCSF1R, CD33 CART or control-transduced T cells (n = 5 mice per group). P values in j were calculated at week 8. White crosses in h indicate censored mice, while red crosses indicate mice that succumbed to disease.
k–m, BLI images (k), survival curves (l) and BLI quantification of tumor burden (m) of PDX-388 tumor-bearing mice injected with 6 × 10⁶ hCSF1R, CD86 or CD33 CART, control-transduced T cells or PBS (n = 3–10 mice per group). CD86 CART treatment was performed separately. For all experiments, statistical significance was calculated by two-way ANOVA with a Sidak multiple-comparison correction. For Kaplan–Meier curves, statistical significance was calculated with a log-rank test.

Our single-cell gene expression analysis revealed lower expression of CSF1R and CD86 in malignant HSPCs than of the CD123 and CD33 reference genes. Thus, we analyzed protein expression of CSF1R and CD86 on malignant HSPC-like cells (Extended Data Fig. 5e–g). Both CSF1R and CD86 were expressed on malignant HSPC-like cells, with no differences in expression between these cell types (Extended Data Fig. 5f,g), illustrating the conserved expression of the target antigens on these cells.

Next, we cocultured primary AML samples with CAR-T cells and determined specific lysis by flow cytometry. hCSF1R and CD86 CART specifically lysed primary AML samples, comparably to CD33 CART, at low E:T ratios (Fig. 5d). To reflect the genetic heterogeneity of AML, seven different primary AML specimens with differing cytogenetics were used for the in vitro assays. To probe whether the new anti-target CARs can be introduced into T cells derived from individuals with AML, we transduced anti-hCSF1R CAR constructs into T cells of individuals suffering from AML (Fig. 5e). These patient-derived hCSF1R CART were then cocultured with autologous primary AML blasts, resulting in potent lysis of the primary samples (Fig. 5f).

To prove the efficacy of hCSF1R and CD86 CART in more relevant in vivo models, we transplanted cytogenetically distinct patient-derived xenograft (PDX) models (ref. 48) into mice and treated them with the respective CAR-T cells. First, we selected PDX-573, a model derived from an individual with relapsed AML with high-risk cytogenetics (European LeukemiaNet 2017, adverse prognosis; see Extended Data Table 1 for detailed characteristics). Three weeks later, we injected hCSF1R, CD86, CD33 or CD19 CART (Fig. 5g–j). All CAR-T cells were highly effective, inhibiting tumor outgrowth in all treated mice (five of five; Fig. 5h–j). Next, we tested the efficacy of hCSF1R, CD86 and CD33 CART in PDX-388, derived from an individual with AML at initial diagnosis with KMT2A rearrangement (European LeukemiaNet 2017, adverse prognosis; Fig. 5k–m). Notably, expression of CSF1R on PDX-388 samples mimicked the above-described pattern; following thawing of the cells, CSF1R was not expressed on PDX-388 cells but was detectable after at least 24 h of in vitro culture (Extended Data Fig. 5h) and also in vivo in bone marrow sections of control-treated PDX-388 mice (Extended Data Fig. 5i). hCSF1R and CD86 CART induced sustained remission in all treated mice over a period of 85 d (CR in ten of ten for CSF1R CART and three of three for CD86 CART; Fig. 5k–m). Interestingly, in this model, CD33 CART completely failed to control tumor growth in all mice (CR in zero of ten). We excluded a manufacturing failure of CD33 CART in vitro (Extended Data Fig. 5j). Furthermore, in a separate cohort, we verified that CD33 CART were present in the circulation of treated mice (Extended Data Fig. 5k, left) and expressed the CAR on the cell surface (Extended Data Fig. 5k, right).
Ex vivo flow cytometric measurement of CD33 on PDX-388 blasts revealed a strong decrease of CD33 surface expression on PDX cells from mice treated with CD33 CART compared to CD19 CART (Extended Data Fig. 5l,m). The failure of CD33 CART to control tumor burden was thus most likely due to downregulation of surface CD33 expression on PDX-388 blasts. However, the detailed biological mechanism remains elusive and requires further characterization.

To unambiguously validate the potential of hCSF1R CART in vivo, we used a third PDX model (PDX-372; Extended Data Fig. 6a–e) and a third cell line xenograft model (OCI-AML3; Extended Data Fig. 6f–h). PDX-372 samples were again derived from an individual with relapsed AML with high-risk cytogenetics and a TP53 mutation (Extended Data Table 1). In addition, to create a more challenging model, we transferred reduced numbers of CAR-T cells into PDX-372-bearing mice (Extended Data Fig. 6a). hCSF1R CART stunted AML growth in three of five mice. The detected BLI signal did not vary between hCSF1R and CD33 CART (Extended Data Fig. 6b,c). As previously described for PDX-388, immunohistochemical analysis revealed high expression of CSF1R on PDX cells in vivo (Extended Data Fig. 6e). hCSF1R CART transferred into OCI-AML3 tumor-bearing mice were similarly effective (Extended Data Fig. 6f–h).

To gain a better understanding of the expression patterns of CSF1R and CD86 in the complex molecular landscape of AML and of potentially differing expression patterns in different AML subtypes, we used a published large-scale dataset (the Leukemia MILE study) and analyzed the expression of CSF1R and CD86 compared to the CD123 and CD33 reference genes. Similar to CD33, CSF1R and CD86 were broadly expressed across different subtypes, with the highest expression observed in KMT2A::MLLT3 (MLL::AF9), t(15;17) and inv(16)-mutated AML (Extended Data Fig. 6i). Given the comprehensive panel of different in vitro and in vivo models used throughout our studies, we next sought to investigate whether we could determine an antigen threshold for effective CAR-T cell therapy in AML. However, we did not observe correlations between the antigen site density measured by flow cytometry and the lysis capacity of CAR-T cells for any of the tested antigens (Extended Data Fig. 6j).

In summary, using three different, cytogenetically distinct PDX models and three cell line xenograft models, we were able to provide strong evidence of the functionality of the newly developed anti-target CAR-T cells in vitro and in vivo.

Toxicity analyses of hCSF1R and CD86 CART

After verifying their expression on malignant AML cells, we next evaluated target antigen expression on CD34+ HSPCs. Using flow cytometry, we demonstrated lower expression of CSF1R and CD86 than of CD33 on healthy HSPCs (Fig. 6a,b). To directly assess toxicity toward HSPCs, we cocultured enriched bone marrow-derived CD34+ cells with hCSF1R, CD86 and CD33 CART or untransduced T cells for 24 h (Fig. 6c). CD34+ HSPCs were exclusively lysed by CD33 CART (Fig. 6c, left). Also, CD33 CART secreted more IFNγ into the coculture supernatant than hCSF1R or CD86 CART (Fig. 6c, right). To further validate these results, we performed conventional colony-forming unit (c.f.u.) assays. Colony counts of c.f.u.-E and burst-forming unit (BFU)-E were higher when HSPCs were cocultured with hCSF1R CART than when they were cocultured with CD33 CART, indicative of better survival of stem cells in the presence of hCSF1R CART (Fig. 6d).
Importantly, colony counts of HSPCs cocultured with either hCSF1R CART or untransduced T cells did not differ (Fig. 6d).

Fig. 6: hCSF1R CART show better discriminatory capacity toward healthy human hematopoietic cells than CD33 CART.
a, Target expression on magnetic-activated cell sorting-enriched, bone marrow-derived CD34+ HSPCs. Data are shown as mean ± s.e.m. of two to three independent, pooled HSPC donors. b, Representative flow cytometric image of target expression on HSPCs. c, CSF1R, CD86 or CD33 CART or untransduced T cells were cocultured with HSPCs for 24 h at an E:T ratio of 2:1. Left: lysis of HSPCs was quantified by flow cytometry. Right: IFNγ secretion was measured by ELISA; hIFNγ, human IFNγ. d, CSF1R and CD33 CART or untransduced T cells were cocultured with HSPCs for 24 h at an E:T ratio of 2:1, and a c.f.u. assay was performed. Colony counts were quantified after 14 d. Data in c and d are shown as mean ± s.e.m. from three (c) or four (d) different donors. e, CSF1R expression on HD samples. Left: percentage of CSF1R+ cells gated to isotype. Right: representative histograms of CSF1R expression on HD samples. f, Quantified target expression on HD samples. Left: percentage of positive cells gated to isotype. Right: representative flow cytometric image. Data in e and f are shown as mean ± s.e.m. from three different donors. g, hCSF1R and CD33 CART or untransduced T cells were cocultured with HD samples for 72 h at the indicated E:T ratios. Left: off-tumor lysis by CAR-T cells assessed by flow cytometry. Right: activation of T cells quantified by IFNγ secretion. Data are shown as mean ± s.e.m. from 11 different samples. h,i, Quantification of log-transformed normalized target expression in 13,067 single human brain cells (h). Each peak corresponds to a cell, and peak height indicates expression intensity. A UMAP plot illustrating the expression patterns of CSF1R, CD86 and CD33 in human brain cells is shown (i). j, Phenotype of human iMGLs. k, Representative histograms of CSF1R and CD33 expression on iMGLs. l, hCSF1R CART, CD33 CART or untransduced T cells were cocultured with iMGLs for 24 h at the indicated E:T ratios. Left: lysis of iMGLs was quantified by flow cytometry. Right: T cell activation was quantified by ELISA. Data are shown as mean ± s.e.m. from five T cell donors. For all experiments, statistical significance was calculated by two-way ANOVA with a Sidak multiple-comparison correction.

Next, we analyzed expression of the target antigens on samples from healthy human bone marrow donors (HD samples; Fig. 6e,f). Again, surface CSF1R expression could only be detected after at least 24 h of culture (Fig. 6e), and its expression remained lower than that of CD86 or CD33 (Fig. 6f). Cocultures of hCSF1R and CD33 CART or untransduced T cells with HD samples revealed higher lysis of HD samples (Fig. 6g, left) and increased IFNγ secretion (Fig. 6g, right) by CD33 CART. Neither lysis of HD samples nor IFNγ secretion differed between hCSF1R CART and untransduced T cells (Fig. 6g). scRNA-seq analysis of single human brain cells confirmed expression of CSF1R in microglia (Fig. 6h,i). At the single-cell level, CSF1R showed higher expression in microglia than CD86 or CD33 (Fig. 6h,i). To model toxicity of CAR-T cells toward human microglia, we generated induced pluripotent stem cell (iPSC)-derived human microglia-like cells (iMGLs) (refs. 49,50) and verified their phenotype (Fig. 6j). Both CSF1R and CD33 were highly expressed on iMGLs (Fig. 6k).
Cocultures of human iMGLs with CSF1R CART, CD33 CART or untransduced T cells demonstrated lysis of human iMGLs by both CARs at a high E:T ratio of 1:1 (Fig. 6l, left). At a more physiological E:T ratio (0.2:1), neither CSF1R nor CD33 CART were able to lyse human iMGLs, consistent with our in vivo data (Fig. 6l, left). IFNγ release mirrored the results obtained from the flow cytometric analyses (Fig. 6l, right). In summary, our data suggest that, compared to CD33 CART, our newly developed CAR-T cells have a superior capacity to discriminate malignant cells from healthy hematopoiesis, and indicate that microglia might not be a relevant off-tumor target of anti-CSF1R CAR-T cells.

Discussion

We developed an unbiased scRNA-seq approach for de novo target identification and in-depth, high-resolution off-tumor mapping across multiple tissues that is specifically tailored to predict potential candidates for CAR-T cell therapy. Applying our approach to AML, we identified two target antigens: CSF1R and CD86. Extensive in vitro and in vivo validation revealed broad expression on AML blasts, strong and durable treatment responses of the newly developed CAR-T cells in vitro and in vivo and minimal toxicities toward relevant healthy cells and tissues.

For primary target screening, we leveraged single-cell sequencing data from 15 primary AML specimens with differing cytogenetic properties (ref. 21). In addition, we validated the obtained results in an independent cohort of five additional individuals with AML (ref. 39). The top hits of the present study were reliably found to be overexpressed in large bulk sequencing AML cohorts (n = 615). Given the highly complex molecular landscape of AML, rare AML subtypes might still not be fully represented in our analyses. Despite this limitation, our study clearly demonstrates the translational potential of unbiased, scRNA-seq-based screening approaches and provides proof of principle for the whole spectrum of scRNA-seq-guided drug development, spanning from computational target identification to preclinical investigation of newly developed CAR-T cells.

CSF1R has been previously implicated as a target for small-molecule inhibition in AML (ref. 46). However, its expression was thought to be restricted to a small subset of AML-supportive cells in certain individuals, while the majority of human blasts were regarded as antigen negative (ref. 51). Using various techniques, including transcriptomic analysis, flow cytometry, immunohistochemistry and comprehensive functional investigation of CSF1R-directed CAR therapy, we were able to confirm high CSF1R expression on AML blasts. These reported ambiguities of CSF1R expression on malignant AML blasts encourage the use of unbiased, RNA-based screening algorithms for target identification and prioritization, as methodological or biological confounders can easily mask protein expression analysis. Nevertheless, it is crucial to bear in mind that scRNA-seq-centered strategies come with their own limitations (for example, the zero or dropout problem of single-cell gene expression (ref. 52)) and in any case require protein-level validation.

CD86 is expressed on malignant AML blasts, and high receptor expression is associated with shortened overall survival of individuals with AML (refs. 53,54), but, to the best of our knowledge, CD86 has never been explored as a target for (immuno)therapy of cancer. The expression of CD86 is not limited to AML and has also been reported in numerous B cell malignancies (ref. 55).
As such, the use of CD86 CART promises not only treatment options for AML but also applications in a variety of other hematological diseases, such as multiple myeloma (ref. 56) and childhood B cell precursor acute lymphoblastic leukemia (ref. 57). Nevertheless, CD86 is also expressed on healthy macrophages and dendritic cells (refs. 58,59,60) and might increase the risk of immunosuppression and ensuing severe infection. However, CTLA-4 fusion proteins, such as abatacept (targeting both CD80 and CD86), have received approval by the FDA and are clinically used for the treatment of autoinflammatory disorders (ref. 61). In clinical studies, abatacept was generally well tolerated (ref. 61).

For both CSF1R and CD86, the measured antigen site densities were rather low, especially compared to CD33 expression, which was high. Yet, despite our extensive functional validation, we did not observe marked differences between CSF1R or CD86 CART and established CD33 CART. Along these lines, we did not observe a correlation between the lysis capacity of CAR-T cells and the site density of the respective target antigen. To a certain extent, these findings are in line with recent reports for anti-mesothelin CAR-T cells in solid tumors (ref. 62). Several factors, such as the affinity and binding properties of the used scFv and the conformation of the target antigen, can positively or negatively influence these CAR–tumor cell interactions. Ultimately, while high target antigen expression undoubtedly increases killing efficacy, our data suggest that, in some cases, functional cross-comparison might help to identify promising target antigens despite, at first glance, rather low antigen expression.

Similar to previous results in AML (ref. 18), we were not able to identify target antigens with expression limited to a single immune cell lineage, as is the case for CD19 or BCMA in B cell malignancies. However, expression of our prime candidates is limited to immune cells of myeloid origin (monocytes, tissue-resident macrophages and dendritic cells), with minimal detection on stem or progenitor cells. Thus, our candidates could bear the advantage of clinical application without the risk of severe bone marrow toxicity, which is a current concern of AML-targeted treatments (ref. 10). It should be noted, however, that, to date, the clinical consequences of off-tumor gene expression on HSPCs remain elusive. Along these lines, precise projection of off-tumor antigen expression is one of the central objectives of our single-cell approach, because unwanted toxicity may be inferred from high transcriptomic off-target antigen expression (refs. 20,63). Yet, as outlined above, the risk of severe adverse effects caused by off-tumor activity of CAR-T cells is not fully understood, and different outcomes have been reported (ref. 64). As such, the latest trials evaluating the safety of CD123 CAR-T cells did not show sustained cytopenia (ref. 64). However, in most anti-CD123 CAR-T cell trials currently being conducted, participants eventually received allo-HSCT, which presumably eradicated the CAR-T cells. Of note, the development of fatal cytokine release syndrome and capillary leak syndrome following CD123 CAR-T cell infusion, potentially due to off-target expression of CD123 on small vessels, has been reported (ref. 7). Altogether, current clinical evidence does not support a clear definition of the critical cell types and expression thresholds that would preclude the development of CAR-T cells against a certain target to avoid unmanageable toxicities.
In any case, in the long run, detailed knowledge of off-tumor expression will allow vigilant monitoring of 'high-risk off-tumor organs' in clinical trials and might enable rapid side effect-mitigating treatments. Similarly, clinical lessons from anti-CD19 or anti-BCMA CAR-T cell therapy deem lineage-restricted expression patterns highly desirable, providing further strong evidence for the use of single-cell technologies for de novo target identification, as these technologies might aid the search for unrecognized target antigens with minimal off-tumor expression in healthy tissues.

Many of the currently investigated CAR targets in AML failed our thresholds of overexpression on malignant HSPCs compared to their healthy counterparts. Herein, to a certain extent, our data contradict published data from our colleagues (refs. 13,65). Sauer et al., for example, illustrated higher expression of CD70 in bone marrow biopsies of individuals with AML than in bone marrow samples of healthy donors using immunohistochemistry (ref. 13). This discrepancy is most likely due to our restrictive analyses, in which we chose rather high cutoff criteria to ensure maximal safety of the identified target antigens. Dynamic adjustment of these thresholds might yield different results, and many of the previously identified target antigens (for example, CD123, CD33, CD70, FLT3, C-type lectin-like molecule-1 and CD44v6) will most likely help improve the clinical care of individuals with refractory or relapsed AML. Nonetheless, our data clearly demonstrate the value of CSF1R and CD86 as targets for CAR-T cell therapy in AML, and, especially considering the complex molecular landscape of AML and its highly diverse subsets, these targets are expected to be valuable additions to the immunotherapeutic repertoire in AML.

Unsurprisingly, CSF1R was expressed on microglia, which share a common monocytic precursor, as is also known for CD33 (ref. 66). Clinical investigation of the so far only CSF1R-directed monoclonal antibody did not reveal neurotoxicity as a concern when depleting CSF1R+ cells from the periphery (ref. 67). However, given the different modes of action of cellular versus antibody-based therapies, these results might not be directly transferable to anti-CSF1R CAR-T cell therapy. In addition, CAR-T cells are known to be able to cross the blood–brain barrier already at steady state (refs. 68,69), and peak levels of proinflammatory cytokines further increase the permeability of this tightly regulated barrier (refs. 70,71). Because of these considerations, we rigorously tested the possibility of neurotoxicity in numerous models. These included fully syngeneic mouse models in which we implanted large quantities of CAR-T cells directly into mouse brains. Yet, we did not observe any signs of neurotoxicity. Nevertheless, future clinical validations will need to include well-designed protocols to vigilantly detect any signs of neurotoxicity.

Our results highlight the potential of using unbiased, high-resolution, single-cell transcriptomic data for target selection and drug development. Leveraging these data and the appropriate high-dimensional analyses as standard operating procedures promises to improve the safety and efficacy of newly engineered CAR-T cells and enables the identification of new target structures for targeted immunotherapy in malignant disorders.
Methods

Single-cell transcriptome analysis

All preprocessing and analysis steps of scRNA-seq data were run in Python 3 using Scanpy (ref. 72) v.1.4.6 to 1.9.1 and anndata (ref. 73) v.0.7.1 to 0.8.0 unless otherwise stated. All scRNA-seq figures were plotted using matplotlib and seaborn.

Preprocessing publicly available scRNA-seq data of healthy and AML cells

For healthy donors and individuals with AML, we obtained raw, annotated count data of healthy and malignant bone marrow cells from two resources. (1) The data from van Galen et al. (ref. 21) were downloaded from Gene Expression Omnibus (GSE116256). Here, we excluded individual AML916, as it had a mixed AML phenotype expressing markers of stem cells and of the myeloid, T and B lineages. (2) For the scRNA-seq data of Petti et al. (ref. 39), we obtained raw count data from .

For the data from GSE116256, barcodes were filtered for each sample for high-quality cells based on the total distributions of unique molecular identifier (UMI) counts and genes. For exact threshold values, see the code documentation provided on GitHub. Cells with a fraction of mitochondria-encoded genes over 20% were excluded. Barcodes that could not be confidently assigned to either healthy or tumor cells were discarded. Genes detected in fewer than 20 cells were excluded from further analyses. The resulting count matrix was used for normalization. UMI counts of each cell were normalized using the scran algorithm as implemented in the R-based scran package (refs. 74,75). Briefly, size factors were estimated by preliminary clustering of the data using the Louvain algorithm implemented in Scanpy (tl.louvain) with a resolution of 0.5 before running computeSumFactors (min.mean = 0.1). The estimated size factors were then used for cell normalization. Finally, the data were log transformed (log(count + 1)).

Feature selection and visualization in a low-dimensional embedding

The top 4,000 variable genes were identified based on normalized dispersion, as described previously (ref. 76), using Scanpy's pp.highly_variable_genes with flavor = cell_ranger. Briefly, genes were ordered along their mean expression in several bins and selected according to their highest variance-to-mean ratio. To efficiently capture the underlying data structure in two dimensions, principal-component analysis dimension reduction was performed by computing 15 principal components on highly variable genes using Scanpy's pp.pca. To account for technical batches, harmony (ref. 77) was used to integrate data from the respective individuals. Next, a neighborhood graph was computed on the first 50 harmony-adjusted principal components using Scanpy's pp.neighbors with 15 neighbors. For two-dimensional visualization, the neighborhood graph was embedded via UMAP (ref. 78) by running Scanpy's tl.umap with an effective minimum distance between embedded points of 0.5.

Differential gene expression of AML HSPCs and T cell marker gene identification

Enriched gene expression in T cells was identified by comparing the mean expression of healthy T cells to the mean expression of all other healthy cell types using a t-test with overestimated variance, as implemented in Scanpy's tl.rank_genes_groups function. Testing was performed on the log-transformed normalized data to account for differences in sequencing depth between samples. Upregulated genes with false-discovery rate (FDR)-adjusted P values of ≤0.01 and a log(fold change) of >0 were considered for target antigen filtering.
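For orientation, the embedding and differential expression workflow described above can be condensed into a short Scanpy sketch. The file name, batch key and cluster labels below are illustrative assumptions rather than values from the study; the exact parameters and thresholds are given in the code repository referenced in this section.

```python
import scanpy as sc
import scanpy.external as sce  # harmony integration requires harmonypy

# Load scran-normalized, log-transformed counts (normalization is run in R).
adata = sc.read_h5ad("aml_bone_marrow_normalized.h5ad")  # hypothetical path

# Select the top 4,000 highly variable genes (cell_ranger flavor).
sc.pp.highly_variable_genes(adata, n_top_genes=4000, flavor="cell_ranger")

# PCA on highly variable genes, then batch integration across individuals;
# harmony writes the adjusted components to adata.obsm["X_pca_harmony"].
sc.pp.pca(adata, n_comps=15, use_highly_variable=True)
sce.pp.harmony_integrate(adata, key="individual")

# Neighborhood graph on the harmony-adjusted components and UMAP embedding.
sc.pp.neighbors(adata, n_neighbors=15, use_rep="X_pca_harmony")
sc.tl.umap(adata, min_dist=0.5)

# T cell-enriched genes: t-test with overestimated variance, comparing
# T cells against all other healthy cell types.
sc.tl.rank_genes_groups(
    adata, groupby="cell_type", groups=["T"], reference="rest",
    method="t-test_overestim_var",
)
```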
Characteristic gene signatures of AML HSPCs were identified by performing separate differential expression analyses of AML HSC-like and HPC-like cells against their healthy equivalents, respectively. Genes that were expressed in at least 2% of all cells, with log(fold change) values of >2 and FDR-adjusted P values of ≤0.01, were defined as enriched marker genes.

Harmonizing public databases to obtain surface protein-coding genes

To obtain genes encoding proteins on the cell surface, we used OmniPath (ref. 22), a large-scale molecular database, to access data from (1) the mass spectrometry-based cell surface protein atlas (CSPA) (ref. 23), (2) CellPhoneDB (ref. 25), a repository of curated receptors, ligands and their interactions, (3) the machine learning-based in silico human surfaceome (ref. 24) and (4) the human protein atlas (HPA) v20.1 (ref. 26). Permissive integration of all datasets was critical, as cell surface expression showed strong variability between databases. Consequently, the union of these databases was used for all subsequent analyses.

Targets of FDA-approved drugs

To identify genes that encode druggable proteins, we used DrugBank (ref. 38), a database containing information on the interactions, pharmacology and chemical structures of drugs and drug targets. We defined druggable genes as targets with known pharmacological action of FDA-approved drugs.

COOTA

To quantify off-target effects, we analyzed and combined a total of 11 scRNA-seq datasets across nine healthy tissues (refs. 27–37). Raw annotated scRNA-seq data from the respective studies (refs. 27,29,31,33–37) were obtained using the Python-based data repository sfaira (ref. 79). To quantitatively analyze the expression of possible CAR-T cell therapy targets across healthy tissues, comparable preprocessing steps were performed for each dataset separately, which involved removing low-quality cells and lowly expressed genes (see the code documentation provided on GitHub for exact thresholds), normalizing cell counts using scran, selecting highly variable genes based on normalized dispersion and visualizing the cells in a two-dimensional UMAP embedding as described above. For the lung datasets of Travaglini, Madissoon and Reyfman (refs. 30,31,32), we used publicly available data with cell annotations derived from a study integrating multiple scRNA-seq datasets (ref. 80). To account for technical batches along the respective samples, batch-balanced k-nearest neighbors (ref. 81) were calculated for the datasets of Travaglini, Madissoon, Reyfman, Ramachandran, James, Cheng and Han (refs. 30,31,32,34,35,36,37). Finally, the processed and annotated count matrices were concatenated on union variables using Scanpy's concatenate with join = outer, and the resulting matrix was used for target antigen filtering. Genes were excluded if they were expressed in over 2% of cells of a critical cell cluster (endothelial, arterial, bronchial, capillary, venous and smooth muscle cells). A single-gene-based count matrix with barcodes × datasets was created and used for plotting mean expression values across cell types.

Reference mapping and label transfer

Raw count data from Petti et al. (ref. 39) were filtered to obtain high-quality cells. Cells with fewer than ten expressed genes and genes detected in fewer than three cells were excluded from further analyses. Barcodes with a fraction of mitochondria-encoded genes of over 10% and a fraction of ribosomal genes of over 50% were excluded.
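The candidate filter described in the preceding subsections amounts to a sequence of set operations. The sketch below is a minimal illustration of that logic; the input variables (the differential expression table and the four gene sets) are assumed to be produced by the steps above, and the column names are placeholders.

```python
import pandas as pd

# Assumed inputs, produced by the steps described above:
# de_hspc: per-gene statistics for malignant HSPC-like versus healthy HSPCs,
#          with columns 'gene', 'log2fc', 'padj' and 'frac_cells'
# surface_genes:  union of CSPA, CellPhoneDB, surfaceome and HPA gene sets
# t_cell_genes:   genes enriched in healthy T cells
# coota_critical: genes expressed in >2% of cells of a critical COOTA cluster
# fda_targets:    DrugBank targets with known pharmacological action

overexpressed = set(
    de_hspc.loc[
        (de_hspc["log2fc"] > 2)
        & (de_hspc["padj"] <= 0.01)
        & (de_hspc["frac_cells"] >= 0.02),
        "gene",
    ]
)

candidates = overexpressed & surface_genes   # accessible on the cell surface
candidates -= t_cell_genes                   # avoid targets on T cells
candidates -= coota_critical                 # avoid vital off-tumor lineages

druggable = candidates & fda_targets         # in this study: CSF1R and CD86
```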
For reference mapping and label transfer, we used scANVI (ref. 40), a semisupervised variational autoencoder model, to leverage the cell-type knowledge from the GSE116256 data (ref. 21) to infer the states of the cells from Petti et al. (ref. 39) (an illustrative code sketch is shown below). Briefly, we trained the scVI model on the reference data with two hidden layers and a dropout rate of 0.2 (for the exact parameters, see the code documentation provided on GitHub). Next, we initialized the scANVI model from the pretrained scVI model before training the scANVI model for 20 epochs with 100 samples per label. Afterward, we created a new query model instance before training the query data with a weight_decay of 0 for 100 epochs. The latent representation and the label predictions were obtained using the get_latent_representation() and predict() functions, respectively. Finally, we computed a neighborhood graph with 15 neighbors using the scANVI representation before embedding the graph using UMAP as described earlier.

Bulk expression data

The data used for the bulk analyses of CSF1R and CD86 expression in human and mouse tissues were obtained from the GTEx portal (ref. 82) (E-MTAB-5214) and Merkin et al. (ref. 83) (E-MTAB-2801) via the Expression Atlas (ref. 84).

Cell lines

The human AML cell lines THP-1, MV4-11, OCI-AML-3, PL-21, MOLM-13 and U937 and the B cell line NALM-6 were purchased from ATCC. All cell lines were cultured in RPMI containing 20% fetal bovine serum (FBS), 2 mM l-glutamine, 100 U ml–1 penicillin and 100 µg ml–1 streptomycin. The mouse J774A.1 cell line was provided by P. Düwell (Institute of Innate Immunity, University Hospital, Bonn). These cells were cultured in DMEM containing 10% FBS, 2 mM l-glutamine, 100 U ml–1 penicillin and 100 µg ml–1 streptomycin. All cells were grown at 37 °C in a humidified incubator with 5% CO2. Short tandem repeat profiling was used to verify the identity of the human cell lines. Cells tested negative for Mycoplasma contamination by PCR. All cell lines were lentivirally transduced with a pCDH-EF1a-eFly-eGFP plasmid (ref. 85). After transduction, enhanced green fluorescent protein-positive cells were single-cell sorted using a BD FACSAria III cell sorter, and expression of firefly luciferase (fLuc) was verified using a Bio-Glo luciferase assay system. Cells were frozen in medium containing 90% fetal calf serum and 10% DMSO and stored at −80 °C or in liquid nitrogen for long-term storage. Generation of PDX cells was previously described (ref. 48). The anti-mouse c-FMS (CD115) antibody-producing hybridoma RCB4486 was acquired from the RIKEN BioResource Research Center (ref. 86).

AML blast isolation and culture

Primary AML blasts or healthy donor control samples (HD samples) were obtained from the bone marrow or peripheral blood of individuals suffering from AML or of healthy donors after written informed consent was acquired in accordance with the Declaration of Helsinki and approval by the Institutional Review Board of the Ludwig-Maximilians-Universität (LMU) or the Ethics Committee of the Medical Association of Hamburg. Bone marrow aspirates were enriched for AML blasts either through density centrifugation or through lysis of red blood cells using osmotic gradient solutions and were frozen in liquid nitrogen. Before T cell-based assays, bone marrow aspirates were thawed, and T cells were depleted using a CD3+ selection kit (StemCell Technologies).
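To illustrate the reference-mapping step described under 'Reference mapping and label transfer' above, the following is a minimal scvi-tools sketch. The variable names and the unlabeled-category string are assumptions, the exact hyperparameters are given in the code repository, and the API shown corresponds to recent scvi-tools releases and may differ slightly between versions.

```python
import scvi

# ref_adata: annotated reference (van Galen et al.); query_adata: Petti et al.
scvi.model.SCVI.setup_anndata(ref_adata, labels_key="cell_type")

# Train scVI on the reference: two hidden layers, dropout rate of 0.2.
vae = scvi.model.SCVI(ref_adata, n_layers=2, dropout_rate=0.2)
vae.train()

# Initialize scANVI from the pretrained scVI model and train it
# for 20 epochs with 100 samples per label.
scanvi = scvi.model.SCANVI.from_scvi_model(vae, unlabeled_category="Unknown")
scanvi.train(max_epochs=20, n_samples_per_label=100)

# Create a query model instance and train on the query data
# with a weight decay of 0 for 100 epochs.
query = scvi.model.SCANVI.load_query_data(query_adata, scanvi)
query.train(max_epochs=100, plan_kwargs={"weight_decay": 0.0})

# Latent representation for the joint embedding and transferred labels.
latent = query.get_latent_representation()
predicted_labels = query.predict()
```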
Primary AML samples were cultured either in IMDM basal medium supplemented with 15% BIT 9500 serum substitute and β-mercaptoethanol (10⁻⁴ M), 100 ng ml–1 stem cell factor, 50 ng ml–1 FLT3 ligand, 20 ng ml–1 granulocyte colony-stimulating factor, 20 ng ml–1 IL-3, 1 µM UM729 and 500 nM SR1 (ref. 29) or, alternatively, in α-MEM supplemented with 12.5% horse serum, 12.5% fetal calf serum, 1% penicillin/streptomycin, 1% l-glutamine, granulocyte colony-stimulating factor, IL-3, thrombopoietin and β-mercaptoethanol on irradiated MS-5 mouse bone marrow stromal cells for coculture experiments (refs. 87,88,89). Cocultures with primary AML samples were performed after a 3-d preculture of the thawed AML blasts (ref. 89).

Flow cytometry

Flow cytometric analysis was performed using a BD FACSCanto, a BD LSRFortessa II or a Beckman Coulter CytoFLEX. All staining steps for the identified AML target antigens were conducted on ice. Cells were centrifuged at 200–400g for 5 min at 4 °C in a precooled centrifuge. For staining of primary AML blasts and AML cell lines, a maximum of 10⁶ cells was counted and transferred to a U-bottom 96-well plate. Cells were washed twice with ice-cold PBS containing 2% FBS and incubated for 15 min with 5 µl of human TrueStain FcX. CSF1R was stained for 30 min in the dark either using unconjugated anti-human M-CSF-R/CD115 (R&D Systems, clone 61701) or mouse IgG1 isotype control (R&D Systems, clone 11711), followed by secondary staining with AlexaFluor 647 rat anti-mouse IgG (H + L; Jackson ImmunoResearch), or, alternatively, after incubation with biotinylated recombinant CSF-1 protein (Sino Biological) followed by secondary staining with streptavidin–APC. Staining for CD86 was conducted with anti-human CD86 (clone IT2.2). Dead cells were excluded after staining with a fixable viability dye (eFluor 780, eBioscience) in all experiments. Quantification of absolute cell counts was performed using CountBright Absolute Counting Beads (Thermo Fisher Scientific). Primary AML blasts were identified using anti-human CD45 (clone HI30) and anti-human CD33 (clones P67.6 and WM53, Invitrogen/eBioscience). Staining with anti-human CD34 (clone 561) and anti-CD38 (clone HB-7) was included for gating of leukemia-initiating cells. CAR expression was quantified using anti-c-Myc-FITC (clone SH1-26E7.1.6; Miltenyi Biotec). CAR activation was measured using anti-mouse or anti-human CD69 (mouse clone H1.2F3, human clone FN50), CD107a (mouse clone 1D4B, BD Biosciences; human clone H4A3) and PD-1 (human clone EH12.2H7). Anti-CD2 (human clone RPA-2.10), anti-CD3 (mouse clone 145-2C11, human clones UCHT1 and HIT3a), anti-CD4 (mouse clone GK1.5, human clone OKT4) and anti-CD8 (mouse clone 53-6.7, human clones SK1 and HIT8a) were used to gate on T cells. Gating on tissue-resident immune cells was performed using anti-mouse CD45 (clone 30-F11) and anti-mouse/human CD11b (clone M1/70). Preparation of organs for flow cytometric analysis was performed as recently described (ref. 90). All antibodies and reagents were purchased from BioLegend unless otherwise specified. Absolute quantification of antigen densities was performed using BD Biosciences Quantibrite phycoerythrin (PE) beads according to the manufacturer's instructions. All flow cytometric stainings were performed at an antibody dilution of 1:50; for absolute quantification of molecule counts per cell, a dilution of 1:20 was used.
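As an illustration of the Quantibrite-based antigen quantification mentioned above: the manufacturer's protocol fits a linear regression in log-log space between the measured fluorescence of the four bead populations and their known PE content, and the fit is then used to convert a stained population's mean fluorescence intensity into antibodies (and thus antigens) bound per cell. The bead values and MFIs below are placeholders, and a 1:1 PE:antibody:antigen stoichiometry is assumed.

```python
import numpy as np

# Lot-specific PE molecules per bead for the four Quantibrite populations
# (placeholder values) and their measured geometric mean fluorescence.
pe_per_bead = np.array([474.0, 5_359.0, 23_843.0, 62_336.0])
bead_mfi = np.array([2.1e2, 2.4e3, 1.1e4, 2.9e4])

# Linear fit in log-log space: log10(PE) = slope * log10(MFI) + intercept.
slope, intercept = np.polyfit(np.log10(bead_mfi), np.log10(pe_per_bead), 1)

def antigens_per_cell(mfi: float) -> float:
    """Convert a PE MFI into bound antibodies per cell, assuming one PE
    molecule per antibody and one antibody per antigen molecule."""
    return 10 ** (slope * np.log10(mfi) + intercept)

# Hypothetical example: site density of a target antigen on AML blasts.
print(round(antigens_per_cell(5.0e3)))
```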
The anti-CSF1R scFv was designed based on the patented sequence of the anti-CSF1R heavy and light chain variable domains of anti-CSF1R clone 2F11-e7 (ref. 91). The anti-human CD86 scFv was derived from clone 3D1 (ref. 92). Anti-CD19 CAR-T cells were designed based on anti-CD19-CAR-FMC63-28Z CAR-T cells (ref. 93). The h-P67.7 scFv was used for anti-CD33 CAR-T cells. The mouse anti-CSF1R scFv (clone AFS98) was derived from the anti-mouse c-FMS-producing hybridoma described earlier. Mouse CAR constructs contained an mCherry fluorescent tag separated from the CAR construct via a 2A self-cleaving peptide sequence. Constructs were either created using conventional cloning techniques or were codon optimized and cloned into pMP71 retroviral vectors using commercial cloning services (Twist Bioscience).

Retroviral mouse and human T cell transduction

For virus production, retroviral pMP71 vectors carrying the sequence of the relevant receptor were stably expressed in the packaging cell lines 293Vec-Galv, 293Vec-Eco and 293Vec-RD114 (refs. 90,94,95). Human T cells were isolated from healthy donor peripheral blood mononuclear cells using density gradient centrifugation, enriched for T cells using anti-CD3 microbeads (Miltenyi Biotec) and stimulated with human T-Activator CD3/CD28 Dynabeads (Life Technologies) for 48 h before retroviral transduction. Human T cells were expanded in human T cell medium (hTCM) containing 2.5% human serum, 2 mM l-glutamine, 100 U ml–1 penicillin, 100 µg ml–1 streptomycin, 1% sodium pyruvate and 1% non-essential amino acids supplemented with recombinant human IL-2 (PeproTech and Novartis) and IL-15 (PeproTech and Miltenyi Biotec). Mouse T cells were derived from splenocytes, activated using anti-CD3 and anti-CD28 and transduced in mouse T cell medium (10% FBS, 2 mM l-glutamine, 100 U ml–1 penicillin, 100 µg ml–1 streptomycin, 1% sodium pyruvate and 0.5% HEPES) supplemented with IL-2 and mouse T-Activator CD3/CD28 Dynabeads (Life Technologies), as described. Following retroviral transduction, mouse T cells were expanded in medium containing human IL-15 (PeproTech). For all experiments comparing different CAR-T cells, transduction efficiency was adjusted to the lowest measured efficiency of the respective constructs.

Animal experiments

Animal experiments were approved by the local regulatory agency (Regierung von Oberbayern) and were performed in accordance with the guidelines and regulations implemented by the Regierung von Oberbayern. Animals were housed in specific pathogen-free facilities. C57BL/6, BALB/c and NOD.Cg-Prkdcscid Il2rgtm1Wjl/SzJ mice were purchased from Janvier (St. Berthevin) or Charles River Laboratories or were bred at local facilities. CX3CR1–GFP reporter mice were bred at local facilities. Mice were held in facilities with a 12-h dark/12-h light cycle including a 30-min twilight phase at noise levels below 50 dBA. Air velocity was held below 0.2 m s–1. Air humidity in the facilities was between 45 and 60%, and the average temperature was held between 20 and 22 °C. The PDX models AML-573, AML-388 and AML-372 were genetically modified to express fLuc (ref. 48). For BLI, mice were anesthetized using an isoflurane–oxygen mixture (1.5–2.5%) following intraperitoneal injection of the BLI substrate (XenoLight d-luciferin potassium salt, PerkinElmer) into each mouse, according to the manufacturer's protocol. An in vivo imaging system platform Lumina X5 (IVIS, PerkinElmer) was used to measure the BLI signal.
Animal experiments

Animal experiments were approved by the local regulatory agency (Regierung von Oberbayern) and were performed in accordance with the guidelines and regulations implemented by the Regierung von Oberbayern. Animals were housed in specific pathogen-free facilities. C57BL/6, BALB/c and NOD.Cg-Prkdcscid Il2rgtm1Wjl/SzJ mice were purchased from Janvier (Saint-Berthevin) or Charles River Laboratories, or were bred at local facilities. CX3CR1-GFP reporter mice were bred at local facilities. Mice were held in facilities with a 12-h dark/12-h light cycle, including a 30-min twilight phase, at noise levels below 50 dBA. Air velocity was held below 0.2 m s⁻¹. Air humidity in the facilities was between 45 and 60%, and the average temperature was held between 20 and 22 °C. The PDX models AML-573, AML-388 and AML-372 were genetically modified to express fLuc (ref. 48). For BLI, mice were anesthetized using an isoflurane-oxygen mixture (1.5-2.5%) following intraperitoneal injection of the BLI substrate (XenoLight D-luciferin potassium salt, PerkinElmer) into each mouse, according to the manufacturer's protocol. An IVIS Lumina X5 in vivo imaging platform (PerkinElmer) was used to measure the BLI signal.

Xenograft models using the THP-1, MV4-11 or OCI-AML3 cell lines or PDX models were established by i.v. injection. T cells were transferred at the indicated times and numbers. Mice that had to be removed from animal experiments due to non-tumor-related toxicities (and that, for example, had no measurable BLI signal at the exclusion timepoint) were censored. Censored mice are indicated in the respective Kaplan-Meier curves and are marked with a white cross in the BLI images.
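Censored animals enter the survival analysis through the Kaplan-Meier product-limit estimator, in which a censored mouse leaves the risk set without counting as a death. A minimal implementation with invented survival data is sketched below; published toolkits (for example GraphPad Prism, as used here) implement the same calculation.

```python
import numpy as np

def kaplan_meier(days: np.ndarray, event: np.ndarray):
    """Kaplan-Meier survival estimate. `event` is 1 for a death and
    0 for a censored mouse (e.g. removed for non-tumor toxicity)."""
    order = np.argsort(days)
    days, event = days[order], event[order]
    times, surv, s = [], [], 1.0
    for t in np.unique(days[event == 1]):
        at_risk = np.sum(days >= t)                 # mice still on study at t
        deaths = np.sum((days == t) & (event == 1))
        s *= 1.0 - deaths / at_risk                 # product-limit step
        times.append(int(t))
        surv.append(s)
    return np.array(times), np.array(surv)

# Hypothetical data: day of death or censoring, and event indicator
days = np.array([21, 25, 25, 30, 34, 34, 40, 40])
event = np.array([1, 1, 0, 1, 1, 1, 0, 0])          # 0 = censored
print(kaplan_meier(days, event))
```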
Surgical procedures and stereotactic implantation

Preparation of chronic cranial windows by microsurgical implantation and stereotactic CAR-T cell injection was performed as previously described (ref. 95). After the mouse was deeply anesthetized by intraperitoneal injection of midazolam (5 mg kg⁻¹), medetomidine (0.05 mg kg⁻¹) and fentanyl (0.5 mg kg⁻¹), the skin was cut and the periosteum was removed. After marking the cortical area of interest, a 5.5-mm circular part of the cranium was removed using a sterile carbon steel microdrill. The dura mater was separated from the leptomeninges using forceps and removed to prevent dural fibrosis. Sterile round cover glasses and tailored rings were attached to the cranial bone with acrylic dental glue. To prevent postsurgical astroglial or microglial activation from affecting tumor growth, stereotactic implantation of CAR-T cells was performed at least 2 weeks after cranial window implantation. For stereotactic implantation, 2 × 10⁵ transduced T cells were resuspended in 1-2 µl of PBS and injected at predefined coordinates (1 mm lateral and 2 mm posterior to the bregma, at an intraparenchymal depth of 1.5 mm). Perioperative care included daily recording of weight and neurological scores.

TPLSM

TPLSM was performed using a multiphoton TrimScope I system (LaVision BioTec) connected to an upright Olympus microscope equipped with a MaiTai laser (690 to 1,040 nm, Spectra Physics) and a ×20/0.95-NA water immersion objective (Olympus). Single images were acquired from different depths depending on the region, with a z interval of 2 or 5 μm. An excitation wavelength of 920 nm was used, with a resolution of 1,024 × 1,024 pixels; signal was detected by photomultiplier tubes (G6780-20, Hamamatsu). Mice were anesthetized by isoflurane, maintained at a constant flow of 0.8 to 2.0% (as low as possible according to the physical condition of the mouse). After the original images were acquired using Imspector Pro, Bitplane Imaris software was used for further analysis. To obtain high-quality images, brightness, contrast or color balance was adjusted manually for whole images.

Immunohistochemistry

hCSF1R staining in mouse xenograft bone marrow samples was performed using a primary anti-human CSF1R antibody (Cell Signaling, rabbit monoclonal, clone E4T8Z, 28917). Samples were formalin fixed, decalcified in EDTA and paraffin embedded. Heat-mediated epitope retrieval was accomplished using epitope retrieval solution, pH 8 (Novocastra, RE7116). Slides were incubated in primary antibody for 60 min at room temperature at a dilution of 1:180. Biotinylated secondary anti-rabbit IgG (Vector, BA-1000) and streptavidin-HRP reagent (Novocastra, RE7104) were used for antibody detection. Finally, slides were stained with DAB+ (Agilent Technologies, K3468) and counterstained with hematoxylin, Gill's formula (Vector, H-3401).

T cell stimulation assay using plate-bound recombinant protein

Ninety-six-well, half-area, flat-bottom polystyrene plates (Corning) were coated overnight at 4 °C with Fc-tagged recombinant protein diluted in 100 µl of PBS at the indicated concentrations. The next day, plates were washed with PBS and blocked for 30 min with 2% bovine serum albumin (BSA) dissolved in PBS. After another washing step, 50,000 T cells resuspended in hTCM without cytokines were added. T cell activation was assessed by flow cytometry after 24 h of incubation.

Cytotoxicity assays

For coculture experiments, 30,000 to 50,000 human AML cells were plated in a flat-bottom 96-well plate. Tumor cells were cocultured with T cells at the indicated E:T ratio for 48 h in hTCM without supplements unless otherwise specified. Killing was assessed using either a Bio-Glo luciferase assay system (Promega Corporation) according to the manufacturer's protocol or flow cytometry. Specific lysis was calculated after normalization to control conditions. Cocultures of primary AML blasts or healthy bone marrow cells and CAR-T cells were performed under the conditions outlined above. Killing was quantified by flow cytometry.
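The text does not spell out its exact normalization formula, but a typical calculation for luciferase-based killing assays treats the luminescence lost relative to tumor-only control wells as specific lysis: surviving luciferase-positive tumor cells emit light, so less light means more killing. The sketch below shows that common version with hypothetical readings.

```python
import numpy as np

def specific_lysis(rlu_coculture: np.ndarray, rlu_target_only: np.ndarray) -> np.ndarray:
    """Percent specific lysis from Bio-Glo-style luminescence (RLU),
    normalized to the mean signal of tumor-only control wells."""
    return (1.0 - rlu_coculture / np.mean(rlu_target_only)) * 100.0

# Hypothetical triplicate readings
target_only = np.array([98000.0, 102000.0, 100500.0])   # tumor cells alone
with_car_t = np.array([41000.0, 38500.0, 43900.0])      # tumor + CAR-T at one E:T ratio
print(specific_lysis(with_car_t, target_only))           # ~55-62% lysis per well
```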
Proliferation assays

Before coculture (E:T ratio of 0.5:1), T cells were stained using a CellTrace Far Red cell proliferation kit (Thermo Fisher Scientific) according to the manufacturer's instructions. Trace dilution was measured by flow cytometry at day 7.

Cytokine measurements

Cytokine levels in coculture supernatants were analyzed using IFNγ and IL-2 ELISAs (BD Biosciences) according to the manufacturer's protocol. A LEGENDplex mouse cytokine release syndrome panel (BioLegend) was used to analyze serum cytokine levels in mice. Procedures were performed as described by the manufacturer.

HSC cocultures

Human CD34+ bone marrow- or cord blood-derived HSCs were acquired from Stemcell Technologies. Healthy human bone marrow samples (HD) were obtained from individuals undergoing hip replacement surgery at the University Hospital of the LMU, Munich. All cells were collected after informed consent was obtained, in accordance with the Declaration of Helsinki. HSCs were thawed in a prewarmed water bath at 37 °C. Directly after thawing, cells were expanded using StemSpan II medium (Stemcell Technologies) supplemented with serum-free nutrient supply and the small-molecule inhibitor UM729. Flow cytometric analyses of HSCs, or of cocultures of HSCs and CAR-T cells, were conducted after a total expansion phase of 7 d; fresh expansion medium was added on day 3. HSCs (30,000) were cocultured with T cells at the indicated E:T ratios for 24 h before flow cytometric analysis. Cocultures of HD cells and T cells were performed similarly to the cocultures of primary human AML blasts (see earlier).

Generation of iMGLs

Human iMGLs were generated as previously described (refs. 49, 50). In brief, human iPSC lines were differentiated to hematopoietic progenitors using a STEMdiff hematopoietic kit (Stemcell Technologies). Following successful development to hematopoietic progenitors, cells were grown in serum-free iMGL differentiation medium containing CSF-1, IL-34 and transforming growth factor-β for at least 12 d. Cells were then collected and used for in vitro cocultures with CAR-T cells. Cocultures of iMGLs and CAR-T cells were performed as described above.

Software and statistical analysis

Flow cytometric data were obtained with BD FACSDiva or Beckman Coulter software. G*Power software v3.1 was used to calculate the group sizes of animal experiments. Luminescence and absorbance were measured with a Mithras reader using MicroWin 2000 software. Flow cytometric data were analyzed using FlowJo software (V10.3 to V10.8.1). ImageJ/Fiji and Imaris (Bitplane AG) were used for the analysis of TPLSM images. Radiance calculation of BLI images was performed using Living Image 4.4 (PerkinElmer). All statistical analyses were performed using GraphPad Prism software (V9.2.0 to V9.5.0).

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Data availability

Data from publicly available scRNA-seq studies can be found via the following accession numbers or the references cited: GSE116256 (ref. 21), the datasets of refs. 39 and 27, GSE134355 (ref. 37), the dataset of ref. 36, GSE131907 (ref. 28), GSE115469 (ref. 33), the dataset of ref. 31, GSE136103 (ref. 34), the datasets of refs. 32, 29 and 30, and EGAS00001002927 (ref. 35). Data from publicly available bulk sequencing studies can be found via the accession numbers E-MTAB-5214 (ref. 82) and E-MTAB-2801 (ref. 83) in the Expression Atlas. The surface gene library was obtained by integrating publicly available data (refs. 23-26) using OmniPath (ref. 22). Targets of FDA-approved drugs were obtained using DrugBank. All reagents and biological material will be made available upon reasonable request to the authors, given the agreement of the providing institution.

Code availability

Python scripts for replicating the figures from the scRNA-seq data, as Jupyter notebooks, can be found in the GitHub repository cited at ref. 96. Count matrices of processed scRNA-seq data will be made available upon reasonable request.

Citation: Gottschlich, A. et al. Single-cell transcriptomic atlas-guided development of CAR-T cells for the treatment of acute myeloid leukemia. Nature Biotechnology (2023). DOI: 10.1038/s41587-023-01684-0 (www.nature.com/articles/s41587-023-01684-0). News coverage: https://medicalxpress.com/news/2023-03-car-t-cell-therapy-acute-myeloid.html
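As one illustration of how the accessions listed above can be pulled programmatically, the sketch below uses the GEOparse package to fetch series metadata for a single accession. This tooling choice is an assumption (any GEO client works), and the processed count matrices typically ship as supplementary files attached to the GEO record rather than inside the SOFT metadata.

```python
import GEOparse

# Fetch series metadata for one of the accessions listed above.
# GEOparse downloads and parses the SOFT file for the series.
gse = GEOparse.get_GEO(geo="GSE116256", destdir="./geo_cache")

print(gse.metadata["title"])
# Peek at a few sample records and their annotated characteristics.
for gsm_name, gsm in list(gse.gsms.items())[:3]:
    print(gsm_name, gsm.metadata.get("characteristics_ch1", []))
```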
Researchers from the University of Munich and Helmholtz Munich have discovered new molecular targets that could enable CAR-T cell immunotherapy for acute myeloid leukemia (AML), a form of blood cancer that currently cannot be treated with this approach. The team, led by Professor Sebastian Kobold and Dr. Carsten Marr, used bioinformatic analyses and single-cell data to identify two potential targets, CSF1R and CD86, which are found essentially exclusively on the surface of AML cells. They then produced CAR-T cells that target these molecules and tested them on AML models, including patient-derived cells, with promising results. The therapy showed effectiveness against AML while sparing healthy cells. The researchers aim to develop good manufacturing practice-capable processes to produce CAR-T cells for clinical trials with AML patients, with the first tests expected in two to three years.

Unlike other forms of blood cancer, acute myeloid leukemia (AML) cannot currently be treated with CAR-T cell immunotherapy. The reason is that AML cells lack known specific molecular targets through which engineered immune cells could recognize them and so permit the immune system to attack the cancer. Two research teams, that of Professor Dr. Sebastian Kobold with Dr. Adrian Gottschlich from the Division of Clinical Pharmacology at LMU University Hospital Munich and that of Dr. Carsten Marr with Moritz Thomas from the Institute of AI for Health at Helmholtz Munich, have now succeeded in discovering such targets. The results have now been published in the journal Nature Biotechnology. AML, one of several forms of leukemia ("blood cancer"), is a treacherous disease. Five years after the initial diagnosis, only one-third of patients are still alive. Up to 85 percent of patients appear to be cured after intensive chemotherapy. However, in more than half of them, the disease returns within one to two years because the chemotherapy has not destroyed all leukemia cells. In the event of a relapse, a stem cell transplant is the patient's only hope of a cure. But even then, the long-term probability of survival is less than 20 percent. New treatment options are therefore urgently needed. CAR-T cell therapy is an innovative therapy. CAR-T stands for "chimeric antigen receptor in T cells." T cells are cells of the immune system. Cancer cells evade their "normal" attempts to attack them by using various molecular tricks, so T cells no longer recognize their opponents, the cancer cells. During CAR-T cell therapy, T cells are first removed from the patient and then genetically engineered to produce a specific protein (CAR) on their surface. When these CAR-T cells are injected back into the patient's body, they engage only their target, CD19, which ensures that they recognize the patient's cancer cells and bind to them in a targeted manner. The cancer cells consequently die.

New targets

However, the approved CAR-T cells against CD19 are not suitable for AML, because CD19 is (usually) not present on the surface of AML cells. Clinical results with CAR-T cells directed against other surface molecules of AML cells have been sobering so far, according to scientists, because those CAR-T cells were unable to distinguish between healthy and malignant cells, with correspondingly significant side effects. The physician Sebastian Kobold and the physicist Carsten Marr, together with colleagues from the LMU University Hospital Munich and the Institute of AI for Health at Helmholtz Munich, set out to find alternative molecules that would ideally be found exclusively on the surface of AML cells. With the help of extensive bioinformatic analyses and the integration of expression data from more than half a million individual cells, two candidates ultimately emerged from 25,000 potential cell-surface molecules: CSF1R and CD86. "Such an analysis would not have been possible a few years ago, since the required single-cell data has been generated only very recently," says Marr, who led the AI-assisted analysis in the study at Helmholtz Munich. The researchers produced CAR-T cells in the laboratory of the LMU University Hospital Munich that precisely target these molecules. The cells were then tested on different AML models, including AML cells from patients.
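The screening idea described above, keeping molecules that are common on AML cells but essentially absent from healthy tissues, can be pictured with a toy filter. The sketch below is not the authors' pipeline; the thresholds and all expression values are invented, and "GENE_X" is a placeholder for an arbitrary rejected candidate.

```python
import pandas as pd

# Toy expression summary: fraction of cells expressing each candidate
# gene in AML blasts versus the maximum fraction observed in any healthy
# tissue of a single-cell atlas. All values are invented.
df = pd.DataFrame({
    "gene": ["CSF1R", "CD86", "CD33", "GENE_X"],
    "frac_aml": [0.71, 0.64, 0.80, 0.55],
    "max_frac_healthy": [0.04, 0.06, 0.35, 0.60],
})

# Keep candidates that are common on AML cells but rare everywhere else.
hits = df[(df.frac_aml >= 0.5) & (df.max_frac_healthy <= 0.1)]
print(hits)   # -> only CSF1R and CD86 pass this toy filter
```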
The results, according to Kobold, are promising: "On the one hand, these CAR-T cells are effective against AML, but on the other hand, they hardly destroy healthy cells." The study impressively demonstrates how the synergy of interdisciplinary research groups can lead to breakthroughs in health research that treat patients in the best possible way. The researchers' next goal is to develop GMP (good manufacturing practice)-capable processes to produce CAR-T cells that can then also be used in clinical trials with AML patients. This will take place within the framework of the "Bavarian Cell Therapy Catalyst," which is supported by the Bavarian Research Foundation. Kobold expects the first tests with patients in two to three years.
DOI: 10.1007/s00114-018-1539-z

White cheeks are more titillating

Male blue tits with white cheeks are healthier and more likely to mate with higher quality partners than their counterparts with duller cheek feathers. Having purer white cheeks also indicates that a blue tit was better able to overcome an infection with parasites during the previous year. This is according to Elisa Pérez Badás of the Museo Nacional de Ciencias Naturales in Spain, lead author of a study published in Springer's journal The Science of Nature. Previous research has shown that the food consumed by a bird, as well as its general well-being, can influence the colour of its feathers. Scientists also know that hardships suffered by birds in one season can be carried over into the next. In this study, Badás and her research team wanted to test whether difficulties encountered by the blue tit (Cyanistes caeruleus) during the breeding season might influence the precise intensity of the new blue, white and yellow feathers that grow once these birds have moulted. In the life cycle of this small bird, which is widespread in forests in Europe and Western Asia, moulting only happens once the breeding season is completed. The birds therefore show off their new plumage until the end of the next breeding season. To test their assumptions, the research team monitored a population of blue tits living in a forest in central Spain over the course of two breeding seasons. In the first season, the researchers caught the birds and took blood samples to detect whether the blue tits suffered from parasitic infections. The team also used a spectrophotometer to measure the reflectance spectrum of the birds' feathers. These measurements were converted into the hues, levels of saturation and luminance that blue tits are known to see. In the following season, the researchers noted the birds' mating patterns and how these were influenced by changes in particular birds' feather colours. Overall, the researchers found that males in better physical condition (males that weighed more) during the highly demanding nestling provisioning stage sported brighter, whiter cheeks. Those that were not heavily infected by the malaria parasite Plasmodium while breeding also showed purer white cheek feathers in winter. According to Pérez Badás, this indicates that their feathers were of better quality, and that intense parasitic infections can have an effect on a bird's life cycle. "In the following season, those males with brighter cheeks paired with females that had noticeably brighter cheek patches compared to the male's previous mate," adds Badás. The results therefore suggest that the conditions that male blue tits experienced during reproduction are likely to affect moult and thus feather colouration, at least in their white facial feathers. This, in turn, enables the stronger males to find brighter females than the partners they paired with in the previous spring. "Members of the same species were quite able to pick up such colour differences," notes Badás.

A study published in Springer's journal The Science of Nature found that male blue tits with brighter, whiter cheeks are healthier and more likely to mate with higher quality partners. The researchers, led by Elisa Pérez Badás, monitored a population of blue tits in central Spain over two breeding seasons and found that males in better physical condition during the breeding season had brighter, whiter cheeks, while those infected with parasites had duller cheek feathers.
The study suggests that the conditions a male blue tit experiences during reproduction can affect the quality of its feathers, particularly its white facial feathers, which can then influence its mating success. The results show that stronger males are able to find brighter females than their previous partners, indicating that the color differences are noticeable to other members of the same species.

Abstract

Carry-over effects refer to processes that occur in one season and influence fitness in the following. In birds, two costly activities, namely reproduction and moult, are restricted to a small time window, and sometimes overlap. Thus, colour in newly moulted feathers is likely to be affected by the costs of reproduction. Using models of bird vision, we investigated male colour change in a free-living population of blue tits (Cyanistes caeruleus) on three sampling occasions: spring 1, winter and spring 2. We related crown, tail, breast and cheek feather colouration after the moult (winter) to the intensity of infections by blood parasites during reproduction (spring 1). In the following spring (spring 2), we explored mating patterns with respect to changes in feather colour (springs 1 vs. 2). Males that were less intensely infected by the malaria parasite Plasmodium while breeding showed purer white cheek feathers in winter, which may indicate higher feather quality. Increased brightness in the white cheek was associated with better body condition during reproduction. In the following season, males with brighter cheeks paired with females that had noticeably brighter cheek patches compared to the male's previous mate. These results suggest that the conditions experienced during reproduction are likely to affect moult and thus feather colouration, at least in the white patch. High-quality individuals may allocate resources efficiently during reproduction, increasing future reproductive success through variation in mating patterns. Carry-over effects from reproduction might extend not only to the non-breeding phase, but also to the following breeding season.

Introduction

A central tenet of life-history theory is that resources allocated to current reproduction are traded off against self-maintenance and future reproductive output (Stearns 1992; Metcalfe and Monaghan 2001). In birds, a growing body of research has investigated the mechanisms behind the costs of reproduction (reviewed in Harshman and Zera 2007; Blount et al. 2016), and the costs per se, which can come as a reduction in survival (Santos and Nakagawa 2012), for example, through accelerated ageing (Bize et al. 2009; Badás et al. 2015). Others have focused on the downstream effects of non-breeding season processes on reproduction, commonly known as 'carry-over effects' (Gunnarsson et al. 2006; Robb et al. 2008; Sorensen et al. 2009). However, few studies have explored the effects of reproduction on the subsequent non-reproductive season, when reproductive activities could, in fact, influence the outcome of the following breeding event (Dreiss and Roulin 2010; Harrison et al. 2011). Reproduction can exert changes in the individual's post-breeding activities such as the moult. Moulting is an energetically demanding process (Griggio et al. 2009): it encompasses physiological costs
(i.e. altering multiple stress response pathways; Merino and Barbosa 1997) and metabolic costs (a roughly 30% increase in metabolic rate; Cyr et al. 2008). Many birds initiate the post-nuptial moult while still raising young (Jenni and Winkler 1994), but because these activities are highly demanding, they should be separated in time. Indeed, passerines that were already moulting while breeding had reduced fledgling success (Sanz 1999; Hemborg et al. 2001; Morales et al. 2007). Delayed reproduction can compromise the time allocated for moulting, thus reducing feather quality, as has been reported in starlings (Sturnus vulgaris) (Dawson et al. 2000). Furthermore, Nilsson and Svensson (1996) showed that blue tits (Cyanistes caeruleus) that delayed moult had higher thermoregulatory costs in the following winter, and this resulted in reduced over-winter survival and breeding success the following year. Thus, the effects on feather synthesis become evident when reproductive effort exceeds what individuals were prepared to sustain. Additional information is needed on whether the individual's status during reproduction has an important bearing on feather quality. Data on recently moulted birds in free-living populations are scarce because re-trapping the same individuals repeatedly is difficult (Dawson et al. 2000). The costs of reproduction on feather quality could be assessed through colouration, because colours are incorporated into new feathers during moult (Hill and McGraw 2006). In fact, because plumage colours are produced through different metabolic pathways depending on their nature (structural or pigmentary), they are subject to different constraints that may convey information to prospecting mates (Hill 2006a). For example, in eiders (Somateria mollissima), it has been suggested that reproductive females with reduced lymphocyte levels may suffer from infections in their following moult, which could reduce the reflectance of the white plumage bands (Hanssen et al. 2006). In blue tits, experimentally increasing reproductive effort produced changes in feather colouration in two ornaments in the year following manipulation: the yellow breast and the blue crown (Doutrelant et al. 2012). Other mechanisms, such as soiling (Fitzpatrick 1998) or feather-degrading bacteria (Shawkey et al. 2007), can also explain changes in colour during the season, but to date, no study has evaluated these changes with respect to parasitic infections. Because reproduction can affect immunocompetence (Hanssen et al. 2003), it is common for bird populations in temperate regions to suffer from chronic blood parasite infections with relapses during the breeding season (Valkiūnas 2005). The negative effects that these parasites exert on the host are well known (Merino et al. 2000; Martínez-de la Puente et al. 2010; Asghar et al. 2015); however, more studies are needed to explore the effects of parasitic infections during the breeding season on subsequent feather colouration. Strong immune responses (i.e. against parasitic infections) may have negative effects on the moult, decreasing the amount of resources available and resulting in a delayed onset of post-nuptial moult (Sanz et al. 2004, but see Moreno et al. 2001). Indeed, experimental studies have shown that certain aspects of structural colouration can signal food stress (Siefferman and Hill 2005) or acute parasite infection during the moult (Doucet and Montgomerie 2003).
Similarly, many studies have related dull carotenoid-based ornaments to high parasite loads (reviewed in Hill 2006b), but others have failed to find such negative relationships (Seutin 1994; Fitze and Richner 2002). In this study, we investigated colour change in plumage patches that differ in their main mechanism of colour production (structural, pigmentary, or both) and related these changes to parasitic infections during reproduction and to other breeding parameters. In the blue tit, coherent scattering leads to the production of the structural blue plumage colours seen in the crown, while incoherent scattering is responsible for the achromatic white feathers found in the cheek (Prum 2006). Carotenoid pigments are obtained from food and deposited in feathers (McGraw et al. 2002), producing, among others, the yellow colouration seen in the blue tit's breast feathers. Aside from the crown, cheek and breast, we also measured feather colouration in another patch for which less information is available in the literature, the blue base of the tail. This plumage patch has been described as sexually dichromatic in nestlings (Johnsen et al. 2003), and it is likely to be relevant for sexual selection in the blue tit. Studies investigating several ornaments simultaneously are increasing (Doucet and Montgomerie 2003; Hegyi et al. 2007; Galván 2010), but changes in feather colour in multiple ornaments are understudied. Furthermore, adult blue tits undergo a complete moult once a year; hence, the plumage achieved during moult will be carried until the end of the next breeding season (Nilsson and Svensson 1996). For this reason, the carry-over effects from reproduction on moult and feather colouration may have an effect on mating patterns in the following reproductive event. First, we aimed at relating individual quality during reproduction to change in male feather colour (spring 1 vs. winter). Individual quality was evaluated by measuring body condition and the intensity of infections by several blood parasites (avian malaria and malaria-like parasites) during the highly demanding nestling provisioning phase. In poor-quality individuals, we expect reproductive costs to negatively affect the feather colouration obtained after the moult; therefore, these birds should show duller or less pure colour depending on the feather patch. High-quality individuals may be able to cope with the costs of reproduction and either (i) increase brightness/saturation or (ii) maintain brighter/more-saturated feather colours. Moreover, if performance in the previous reproductive event was high, an individual may gain higher quality partners in the following season (see Griggio et al. 2009), although this has been unexplored so far. On this basis, the second aim of this study was to investigate the effects of male colour change and breeding parameters on mating patterns in the consecutive breeding season (spring 2). Using discrimination models that take into account the blue tit's photoreceptor spectral sensitivities (Endler and Mielke 2005; Stevens 2011), we evaluated the change in the mate's feather colouration between seasons (spring 1 vs. spring 2).
Methods

Study site and sampling

Data were collected during the 2013 (spring 1 and winter) and 2014 (spring 2) seasons on a free-ranging population of blue tits breeding in a deciduous forest of Pyrenean oak (Quercus pyrenaica) in the vicinity of Valsaín (Segovia), central Spain (40° 53′ N, 4° 01′ W, 1200 m.a.s.l.), where 300 wooden nestboxes have been in place since 1994 (Fargallo and Merino 1999). Breeding birds in springs 1 and 2 were caught at the nestbox during chick provisioning (when nestlings were 3 days old; hatching date = day 0), while birds caught in winter (2013) were attracted to mist nets using blue tit-specific playback calls. On every sampling occasion, 2-3 nets (24-36 m each) were set up for 1-2 h and then moved to a different location within the vicinity of the deciduous forest. Bird captures took place on 6 days (1, 2, 3, 9, 10 and 24 November 2013) for 4-5 h each day, depending on climatic conditions. Unringed birds were individually marked with a numbered aluminium leg-ring. First-years were identified (if age was not known from ringing records) by possession of distinctive, non-adult greater wing coverts (Svensson 1992). We also recorded tarsus length to the nearest 0.01 mm using callipers and weight to the nearest 0.1 g using an electronic balance. These measurements were used to calculate individual body mass, corrected by regression for body size (tarsus length) and time of day using the equation from Senar (2002). In spring 1, we took a blood sample via the brachial vein. One drop of blood was stored on an FTA card (Whatman, UK) for molecular (parasitological) analyses, see below. We also measured feather colour reflectance on four different patches in males and females: breast, cheek, crown and base of the tail. First, we evaluated the change in colour for each patch before and after the moult (spring 1 vs. winter). Second, we explored the change in colour between winter (2013) and the following season (spring 2, 2014). Finally, for a subsample of males (N = 13, see below), we described colour and luminance differences between seasons (spring 1 vs. spring 2) and compared these to colour change between their female partners (female pair from spring 1 vs. female pair from spring 2). We used these values in subsequent analyses (see below). Moult stage was recorded both at the breeding season (springs 1 and 2) and winter captures. One individual had already started moult in spring 1 (as of 28 June); however, this male was not recaptured in winter. By the time they were recaptured in November of season 1, all individuals had already finished moulting. None of the birds used in this study had started moulting when captured at nestling age 3 in springs 1 or 2.

Parasite quantification (spring, season 1)

For all samples, DNA was extracted from blood using a standard ammonium-acetate protocol and stored at −20 °C. This DNA solution was then purified using silica filters to obtain higher quality DNA (NZYGel pure, NZYTech, Lda. - Genes and Enzymes). DNA samples were quantified by spectrophotometry and adjusted to the same concentration (10 ng/µl). We detected and quantified the following parasites using quantitative PCR (qPCR) with SYBR green (SYBR Select Master Mix, Applied Biosystems) to amplify a fragment of the cytochrome b or 18S rRNA genes, using a pair of species-specific primers for each parasite: Haemoproteus majoris haplotype cyan2, Plasmodium spp. haplotype cyan1, Lankesterella valsainensis and Leucocytozoon spp.
haplotypes leuA, leuA1 and leuB. The variable Leucocytozoon A includes haplotypes A and A1 (see Badás et al. 2015 for more information on the primers used).

Models of bird vision (seasons 1 and 2)

Colour spectra were collected with a spectrophotometer (Ocean Optics Inc., Dunedin, FL, USA) connected to an Ocean Optics fibre-optic reflection probe. The probe was made up of seven optical fibres illuminated by a pulsed xenon light source (Jaz-PX lamp), and it was inserted in a miniature black chamber that acted as a holder and excluded ambient light. The equipment was calibrated with a flat white standard (Ocean Optics) prior to each patch measured. The probe was lifted between repeated measurements within a body region. Reflectance measurements from 300 to 700 nm were taken at 90° incidence and 3 mm from the feather surface, over an illuminated circular area approximately 1 mm in diameter. Each spectrum was an average of three scans and was calculated relative to the reflectance produced by the white standard and a dark current. To model the UV-sensitive (UVS) blue tit visual system, we used their known photoreceptor spectral sensitivities (Hart et al. 2000) and calculated the relative quantum (photon) catch values for the four single cones, used in colour vision, and the double cones, used in luminance vision (Endler and Mielke 2005; Stevens et al. 2009). From this, we extracted hue, saturation and luminance variables for each colour patch (Endler and Mielke 2005; Stevens et al. 2009). Although hue and saturation colour variables may not necessarily relate to colour perception in birds, avian visual models that incorporate the cone sensitivities of the bird's retina and light conditions have proved to be the most widely used approach to model avian colour vision and colouration (Stoddard and Prum 2008; Kemp et al. 2015). Luminance refers to the perceived lightness of a patch (brightness), so we simply used the double cone photon catch values. Saturation refers to the amount of colour compared with white light, and it was obtained by plotting the standardised single cone catch data for each individual in avian tetrahedral colour space (Stevens et al. 2009) and calculating the distance from the centre of the colour space (following Endler and Mielke 2005). Values were generated using 'd65' irradiance levels (Badás et al. 2017). To calculate hue or colour type, we derived colour channels based on ratios of the photon catch outputs for each patch. The four single cone types in bird vision are categorised according to the wavelengths that stimulate them most: ultra-short (UV), short (SW), medium (MW) or long (LW) (Cuthill 2006). Hue was then calculated as the ratio of cone catch values: '(LW + MW + UV) versus SW' for the yellow breast feathers (Badás et al. 2017), '(SW + UV) versus (MW + LW)' for the crown, and '(MW + UV) versus (SW + LW)' for the tail. This approach is broadly inspired by the way that opponent colour channels work in vision in encoding antagonistic colour types (Osorio et al. 1999) and is based on recent work following the same methods (Evans et al. 2010; Komdeur et al. 2005; Spottiswoode and Stevens 2011; Stevens et al. 2014). Note that we are not suggesting that the ratio used here is actually present in avian vision, but that it offers a logical and intuitive way to describe variation in hue. Hue was not calculated for the white cheek because this is an achromatic plumage patch.
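Saturation as 'distance from the centre of tetrahedral colour space' can be computed by mapping the four relative cone catches onto the vertices of a regular tetrahedron, whose centroid is the achromatic point. The sketch below shows one standard construction of this kind, together with the breast hue ratio defined above; the vertex coordinates (and hence the overall scale of the saturation values) are a conventional choice rather than the paper's exact code, and the catch values are hypothetical.

```python
import numpy as np

# Regular tetrahedron vertices (one per cone class), centred on the
# origin; equal stimulation of all cones maps to (0, 0, 0).
VERTICES = {
    "uv": np.array([ 1.0,  1.0,  1.0]),
    "sw": np.array([ 1.0, -1.0, -1.0]),
    "mw": np.array([-1.0,  1.0, -1.0]),
    "lw": np.array([-1.0, -1.0,  1.0]),
}

def colour_point(catches: dict[str, float]) -> np.ndarray:
    """Map relative quantum catches to a barycentric point in colour space."""
    total = sum(catches.values())
    return sum((q / total) * VERTICES[c] for c, q in catches.items())

def saturation(catches: dict[str, float]) -> float:
    """Distance from the achromatic centre (Endler & Mielke-style)."""
    return float(np.linalg.norm(colour_point(catches)))

def hue_ratio_breast(catches: dict[str, float]) -> float:
    """Hue channel used here for the yellow breast: (LW + MW + UV) vs SW."""
    return (catches["lw"] + catches["mw"] + catches["uv"]) / catches["sw"]

q = {"uv": 0.12, "sw": 0.18, "mw": 0.33, "lw": 0.37}   # hypothetical catches
print(saturation(q), hue_ratio_breast(q))
```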
Following calculation of photon catches, we determined colour contrasts using a model of visual discrimination that accurately predicts discrimination behaviour in observers (Vorobyev et al. 1998). Using the single cones, we extracted colour differences (Vorobyev et al. 1998), and using the double cones we obtained luminance (achromatic) differences (Siddiqi et al. 2004). When modelling, we used the retinal single cone proportions of the blue tit available in the literature (long wave = 1.00, medium wave = 0.99, short wave = 0.71 and UVS = 0.37; Hart et al. 2000), and Weber fractions were set to 0.05 for all cones in both chromatic and achromatic contrasts. Some authors have suggested that the appropriate Weber fraction for the long wave cones may be 0.1 (Lind 2016), and thus we also computed chromatic and achromatic scores with this value. These models gave qualitatively the same results (shown in the Online Resource). Colour contrasts are expressed in 'just noticeable differences' (JND scores), where, generally, a JND of less than 1.00 indicates that two stimuli are indistinguishable; values between 1.00 and 3.00 should be difficult to discriminate except under optimal viewing conditions, and larger values allow increasingly easy discrimination (Siddiqi et al. 2004). Then, for each individual and colour patch, by calculating chromatic and achromatic colour contrasts and reporting JND scores, we evaluated the change of colour between different periods.
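The chromatic JND scores come from the Vorobyev-Osorio receptor-noise model. A sketch for a tetrachromat is given below, using the cone proportions and the 0.05 Weber fraction quoted above; the convention of fixing the Weber fraction for the most abundant cone and scaling the others by the square root of relative density is an assumption (implementations differ on this point), and the quantum catches are hypothetical.

```python
import numpy as np
from itertools import combinations

DENSITY = {"uv": 0.37, "sw": 0.71, "mw": 0.99, "lw": 1.00}  # blue tit (Hart et al. 2000)
OMEGA = 0.05  # Weber fraction used in the text

def noise(cone: str) -> float:
    # Assumed convention: omega applies to the most abundant cone; the
    # others get proportionally more noise as density decreases.
    return OMEGA / np.sqrt(DENSITY[cone] / max(DENSITY.values()))

def chromatic_jnd(qa: dict, qb: dict) -> float:
    """Vorobyev-Osorio (1998) receptor-noise distance for four cones."""
    cones = ["uv", "sw", "mw", "lw"]
    df = {c: np.log(qa[c] / qb[c]) for c in cones}   # Fechner (log) contrasts
    e = {c: noise(c) for c in cones}
    num = 0.0
    for i, j in combinations(cones, 2):
        k, l = [c for c in cones if c not in (i, j)]
        num += (e[i] * e[j]) ** 2 * (df[k] - df[l]) ** 2
    den = sum(np.prod([e[c] for c in trio]) ** 2 for trio in combinations(cones, 3))
    return float(np.sqrt(num / den))

def achromatic_jnd(qd_a: float, qd_b: float) -> float:
    """Luminance contrast from double-cone catches (Siddiqi et al. 2004)."""
    return abs(np.log(qd_a / qd_b)) / OMEGA

qa = {"uv": 0.10, "sw": 0.20, "mw": 0.32, "lw": 0.38}  # hypothetical catches, spring
qb = {"uv": 0.13, "sw": 0.18, "mw": 0.31, "lw": 0.38}  # same patch, winter
print(chromatic_jnd(qa, qb))   # > 1 suggests the change is discriminable
```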
Statistical analyses

All analyses were performed in R v.3.1.3 (R Foundation for Statistical Computing, Vienna). We used saturation and luminance (not hue) in the analyses for each colour patch. Yellow hue and saturation were highly correlated (spring: r = 0.87, p < 0.001; winter: r = 0.81, p < 0.001), and we chose to use saturation rather than hue because it most consistently reflects feather carotenoid content across species (Saks et al. 2003; McGraw and Gregory 2004). Hue and saturation were also correlated in the blue crown (spring: r = 0.99, p < 0.001; winter: r = 0.98, p < 0.001) and blue-green tail (spring: r = 0.91, p < 0.001; winter: r = 0.90, p < 0.001) plumage, and again we used saturation rather than hue. In winter, females had a lower recapture probability than males (χ²(1) = 12.63, p < 0.001, N = 40), probably because the capture method using species-specific playback calls might preferentially attract males. Thus, females were not included in the analyses (data on breeding parameters and parasite infection were available for only one female). From the 78 breeding pairs of breeding season 1, we were able to recapture 21 males in winter (annual survival rates of adult blue tits are similar in other European populations; see Dhondt et al. 1998). Owing to limiting blood volumes for the molecular analyses of four individuals, and two individuals for which colour data could not be obtained because of measurement error, data on breeding parameters and parasitic infections during reproduction were available for the 15 individuals included in the analyses. To explore the differences in colour between spring 1 and winter, we fitted a linear mixed model for each feather patch with one of the colour variables as the response variable. For each dependent variable, a set of biologically meaningful models was designed to explore variation in feather colour before and after the moult. All models included sampling occasion as a fixed factor and individual identity as a random factor to control for repeated measures on the same individual. Alternative models could also include a maximum of three of the following predictors (maximum number of parameters to be estimated k = 5, including the intercept and one interaction at a time, due to the reduced sample size): age, date of winter sampling, hatching date, body mass, and the intensity of infection by one of the blood parasite species (see the parasitological analyses above). Parasite intensity variables were cube-root transformed and classified by quantiles to categorise the data into two meaningful groups: low and high intensity of parasitic infection. Models included the interaction between sample (spring or winter of season 1) and one parasite species at a time due to the limited sample size (we specifically tested for the interaction, as in Knowles et al. 2010). Different parasite species were analysed as competing hypotheses in which we explored whether the load of one malaria or malaria-like parasite may affect the moult of different feather patches. We chose to explore the effects of infection on feather colour change in separate analyses for each species because these species may have different virulence levels and thus different parasitaemia. Because we were interested in the change in feather colour with respect to individual status in the breeding season, winter body mass was not included in the analyses; moreover, spring body mass was correlated with winter body mass (t = 2.84, df = 17, correlation coefficient r = 0.57, p = 0.01). Date of sampling was incorporated into the set of models to account for its effect on feather colouration and infection probability. Age was obtained from previous ringing records and coded into a three-level score due to the reduced sample size of older individuals: 1 = first-years (N = 9), 2 = second-years (N = 6), 3 = third-years and older (N = 6). The final, most parsimonious model was selected based on the Akaike Information Criterion (AIC) via its corrected version for small sample sizes (AICc; Sugiura 1978). When the difference in AICc between two or more models is less than 10 units (ΔAIC < 10), they are considered reasonably well-fitted models (Bolker et al. 2009). When this was the case, results from models with similar support are presented in tables and ordered according to their AIC weight (AICw; see Table S1 in the Online Resource). To quantify the relative importance of individual variables within the selected model, we calculated model weights (Johnson and Omland 2004). Due to the reduced sample size, and in order to be conservative, we further confirmed model support using estimates of significance for the selected versus the null model. These were obtained by parametric bootstrap procedures (the 'PBmodcomp' command from the R package pbkrtest, following Halekoh and Højsgaard 2014; not shown). In addition, model parameters and 95% confidence intervals for the main effects in the selected models are shown in Tables S1 and S2; they were calculated from 1,000 bootstrapped iterations derived with 'bootMer' (from the R package lme4; Bates et al. 2014).
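The AICc ranking described above reduces to two short formulas: the small-sample correction added to AIC, and the Akaike weights that express each model's relative support. The sketch below shows both with invented log-likelihoods; k is the number of estimated parameters (at most 5 here) and n the sample size.

```python
import numpy as np

def aicc(loglik: float, k: int, n: int) -> float:
    """AIC with the small-sample correction (Sugiura 1978):
    AICc = AIC + 2k(k + 1) / (n - k - 1)."""
    aic = -2.0 * loglik + 2.0 * k
    return aic + (2.0 * k * (k + 1)) / (n - k - 1)

def akaike_weights(scores: np.ndarray) -> np.ndarray:
    """Relative support for each candidate model from its AICc."""
    delta = scores - scores.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical candidate models: (log-likelihood, number of parameters)
models = [(-41.2, 4), (-40.6, 5), (-44.9, 3)]
scores = np.array([aicc(ll, k, n=30) for ll, k in models])
print(scores, akaike_weights(scores))
```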
Finally, we explored whether changes in male colour variables between seasons 1 and 2 explained better performance in spring 2. Of the 21 individuals captured between spring 1 and winter, 13 males were recaptured again in spring 2 (colour data were only available for 11 males due to measurement error). We calculated the rate of change in saturation and brightness between breeding seasons for the patch where we had previously found significant change (the white cheek) as (C₂ − C₁)/C₁, where C₁ refers to colour in spring 1 and C₂ to colour in spring 2. This change was then related to (i) the change in clutch size between seasons 1 and 2, (ii) the change in hatching date between seasons 1 and 2, and (iii) the JND scores describing perceptible differences in colour between the female partner in season 1 and the female partner in season 2. Four males paired with the same female in season 2, but we believe that excluding these males from the analysis was not necessary because the JND contrast for the same female could still provide information about changes in female feather colouration between seasons (i.e. higher quality females may change more between seasons if their feather colouration was duller before the moult). Thus, JND scores could still reflect individual quality given our premises. The relationship between male colour change and other breeding parameters (i.e. number of fledglings) was not checked because a post-hatching experiment conducted in season 2 could potentially affect these parameters. We aimed to look at trends shared between colour variables using Pearson correlations rather than more complex modelling, owing to the reduced sample size. In relatively small samples like the present one, assumptions of normality are likely to be violated, whereas non-parametric bootstrapping allows us to compute new parameter estimates without making assumptions about the form of the population (Fox 2002). Thus, we further complemented the significant results obtained by Pearson correlations with robust regression models or with estimates calculated from 1,000 bootstrapped iterations derived from resampling with the function 'bootCase' (from the R package car; Fox and Weisberg 2011). Effect sizes for all analyses are reported as Cohen's D (Cohen 1988), generally interpreted as small (0.2), medium (0.5) and large (0.8).
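The pairs bootstrap used above (resampling individuals with replacement and recomputing the statistic) is easy to state in a few lines. The sketch below applies it to a Pearson correlation with simulated data; the analyses in the paper used R's 'bootCase', and this is the generic recipe rather than a reproduction of those analyses.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_pearson_ci(x, y, n_boot=1000, alpha=0.05):
    """Non-parametric bootstrap: resample (x, y) pairs with replacement,
    recompute Pearson's r, and take percentile confidence limits."""
    x, y = np.asarray(x), np.asarray(y)
    idx = rng.integers(0, len(x), size=(n_boot, len(x)))
    rs = np.array([np.corrcoef(x[i], y[i])[0, 1] for i in idx])
    lo, hi = np.quantile(rs, [alpha / 2, 1 - alpha / 2])
    return np.corrcoef(x, y)[0, 1], (lo, hi)

# Simulated data for 11 males: change in cheek brightness vs female JND scores
dx = rng.normal(0.1, 0.05, 11)
dy = 0.9 * dx + rng.normal(0, 0.03, 11)
print(bootstrap_pearson_ci(dx, dy))
```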
Results

Colour change from spring to winter in season 1

Plumage saturation in the white cheek changed significantly before and after the moult depending on the infection by Plasmodium parasites (t = −2.34, p = 0.036, effect size ES = 0.57; Table 1, cheek saturation model 1). The overall trend was that individuals decreased saturation in their white cheek feathers after the moult. Note that, in contrast to saturation changes in other ornaments (i.e. blue crown or yellow breast feathers), more saturation in white feathers may indicate a less pure feather patch (Badás et al. 2017). Indeed, in this study, males that were more intensely parasitized by Plasmodium during the breeding season decreased saturation significantly less (Fig. 1; for graphical purposes, we represent mean changes between sampling occasions). Saturation in yellow-breast blue tit feathers increased after the moult, although this increase was marginally non-significant (t = −1.87, p = 0.08, ES = 0.48; Table S1). This marginal increase was unaffected by parasite loads or other parameters during the breeding season. No significant change was detected for the blue crown and blue-green tail feathers before and after the moult (Table S1), but there was a trend for males that were more parasitized by Leucocytozoon A to grow more saturated blue crown feathers after moulting (t = −1.9, p = 0.08, ES = 0.56; Table S1; Fig. S1a). Finally, males with more saturated blue-green tail feathers had higher body mass, irrespective of sampling occasion (t = 2.31, p = 0.03, ES = 0.54; Table S1).

Table 1. Best-fitting models explaining colour change in blue tits using Akaike's second-order information criterion (AICc). When ΔAICc < 10 units, all competing models are shown (see main text). Variables included in each model are marked with an X. AICw refers to the weight of each new variable included in the models showing similar support. Only models that were supported after a significant parametric bootstrap against the null model are shown (explaining why in some cases ΣAICw ≠ 1). A total of 30 males were included in this analysis.

Fig. 1. Chromatic change between spring and winter in the white cheek in relation to Plasmodium infection during spring. Bars indicate standard errors.

Patterns of achromatic change from spring 1 to winter were related to individual quality during spring 1. The increase in white cheek brightness after the moult was positively correlated with higher body mass during the breeding season (t = −2.74, p = 0.017, ES = 0.72; Table 1, cheek brightness model 1; Fig. 2). We also detected that males that were more parasitized by Haemoproteus tended to grow marginally brighter yellow breast feathers (t = −1.95, p = 0.08, ES = 0.69; Table S2; Fig. S1b) and marginally brighter blue-green feathers at the base of the tail (t = −1.89, p = 0.08, ES = 0.7; Table S2; Fig. S1c). Intense infections by the parasite Leucocytozoon B during spring 1 had a marginally non-significant negative effect on the brightness of the blue-green base of the tail in winter (t = 1.96, p = 0.07, ES = 0.71; Table S2; Fig. S1d). Finally, no significant change was detected for blue crown brightness (Table S2). Mean JND contrasts for each individual before and after the moult are shown in Table S3.

Fig. 2. Achromatic change in the white cheek in relation to spring body mass (F(2,10) = 4.67, p = 0.0382, R² = 38.54%, N = 14). Body mass (g) is expressed as corrected body mass following Senar (2002). The regression line and ±95% confidence intervals (shaded area) are shown. Note that confidence intervals were calculated from 1,000 bootstrapped iterations that control for the reduced sample size and the presence of outliers (see main text).

Male colour, female partner and breeding parameters in season 2

Because we found significant differences in cheek colour variables before and after the moult (spring vs. winter of season 1; see Table 1), we used these variables in the subsequent analyses for season 2. There were no significant associations between (i) the change in cheek saturation between seasons and breeding parameters in season 2 (only hatching date and clutch size; see the Methods section; all p values > 0.5), (ii) the change in cheek saturation between seasons and the JND scores for female differences between seasons (all p values > 0.5) or (iii) the change in cheek brightness between seasons and breeding parameters in season 2 (only hatching date and clutch size; see the Methods section; all p values > 0.5). Non-significant results are shown in Table S3 (Online Resource). However, the change in cheek brightness was significantly related to the JND scores describing female differences between seasons: the more pronounced the increase in male cheek brightness after the moult, the higher the JND scores describing female differences between seasons.
That is, in season 2, males with brighter cheeks paired with females that differed more in cheek luminance from the female the male had paired with in the previous season (bootstrapped estimate = 0.089, upper 95% CI = 0.18, lower 95% CI = 0.003, R² = 0.425, F(1,9) = 8.39, p = 0.018, N = 11, ES = 1.93; Fig. 3). Because JND scores only state the magnitude of the absolute difference between two colour spectra, we confirmed that the female partner in season 2 did indeed have brighter cheek feathers than the same male's partner in season 1 (robust regression model: R² = 0.39, χ² = 6.17, df = 1, p = 0.013, N = 13, ES = 1.9). Mean JND contrasts comparing each individual's feather colour in season 1 versus season 2 are shown in Table S3.

Fig. 3. Change in male white cheek luminance between seasons in relation to female partners' JND scores (F(1,9) = 8.39, p = 0.018, R² = 42.5%, N = 11). Female JND scores were obtained as achromatic colour differences between the male's partner in season 1 and the same male's partner in season 2. The regression line and ±95% confidence intervals (shaded area) are shown.

Discussion

This study investigated feather colour change between seasons in relation to breeding characteristics in the previous reproductive season. We offer correlational evidence that, in blue tits, colour change in the white cheek was related to body mass and to the intensity of Plasmodium infections while breeding. Additionally, after controlling for individual differences prior to moult, we found that males with a more pronounced increase in white cheek brightness paired with brighter females in season 2 compared with the females they paired with in season 1. The change in structural white colouration in the blue tit was thus related to pair formation in the consecutive breeding season. However, no differences were found between seasons in plumage patches such as the blue crown or the blue-green tail. This is unexpected, especially because previous studies have shown that newly moulted feathers, for example in the blue crown, were brighter (Örnborg et al. 2002). Blue tit crown coloration may change within a given year (Örnborg et al. 2002) and also between years with individual age (Delhey et al. 2006). The apparent lack of colour change between seasons in structurally blue feathers in the present blue tit population could be explained by the time of year at which the spectral samples were taken. Here, the post-breeding reflectance spectra were not sampled soon after breeding (i.e. in summer) but after moult had been completed (in winter). It is possible that the observed feather colouration had already faded to be similar to what the colours were in the previous spring. For example, the differences may have been greater if the samples had been collected immediately after the moult and long before the next breeding season (Delhey et al. 2010). In this study, we also found a relationship between the change in brightness in unpigmented feathers and body mass during the breeding season. Brightness in white feathers is produced by large, randomly organised air vacuoles in the barbules (Prum 2006), and these vacuoles are absent in less bright white feathers, as seen in the rock ptarmigan Lagopus mutus (Dyck 1976).
In the blue tit, the particular arrangement of barbules in the white cheek could be related to individual status during reproduction, because feathers are moulted immediately after the breeding season (Svensson and Nilsson 1995). Male blue tits that were heavier during the breeding season (spring 1) might have started the moult with more resources to allocate to plumage maintenance, which has been shown to increase feather brightness (Griggio et al. 2010). Our study adds to a growing body of evidence that white plumage reflectance may signal individual quality (Griggio et al. 2011; Zanollo et al. 2012; Ruiz-De-Castañeda et al. 2015). In addition, we offer, for the first time, correlational evidence that the conditions experienced during the breeding season may have an effect on mating patterns in the following season (but see Doucet et al. 2005). We found that male blue tits that developed brighter cheeks in spring 2 paired with brighter females when these were compared with the females they mated with in spring 1. This suggests that there may be assortative mating with respect to white plumage colouration in the blue tit. In the same species, this has been found for the ultraviolet colouration of the crown (Mahr et al. 2012). Brighter male blue tits in the present population may have been able to attract better quality females in the following spring if, for example, they benefited from better body condition during winter. Accordingly, we found that brighter males were in better body condition during winter (spring and winter body mass were correlated). Passerines like the blue tit might signal body condition during winter because this may have great relevance for the subsequent breeding season by providing better access to food resources and a higher probability of establishing a territory in spring (Smith and Nilsson 1987). Indeed, brighter achromatic patches and larger white ornaments have been related to higher female quality in pied flycatchers breeding in the same area (Cantarero et al. 2017; López-Arrabé et al. 2014). And in the barn owl (Tyto alba), adults that became whiter performed better than in the previous year (Dreiss and Roulin 2010). Unfortunately, in this study, we were unable to explore breeding success as a result of mating with more highly ornamented females in spring 2 because a post-hatching experiment was taking place in the 2014 season. Still, we present, for the first time, data on feather colour change after the moult that is associated with mating patterns in the consecutive season and is further supported by a discrimination model that takes into account the birds' visual system (but see Griffith 2000; Gustafsson et al. 1995). Another noteworthy point is that a single ornament may provide different information on the overall quality of an individual via different colour characteristics, in accordance with the 'multiple messages hypothesis' (Møller and Pomiankowski 1993). In this study, we found this pattern in the white cheek (via brightness and saturation). Male blue tits that grew more saturated white cheeks were more intensely infected by Plasmodium in the spring of season 1, while cheek brightness was related to body mass (see above). It is possible that more saturation in white patches signals poorer individual quality, because white colours should be less saturated (Badás et al. 2017).
Surprisingly, the relationships between feather colouration and parasite loads of several malaria or malaria-like parasites offered conflicting results. As opposed to what we found in white cheek feathers, other haemosporidian species were marginally related to an increase in feather colouration (see Tables S1 and S2 and Fig. S1: higher intensity of Haemoproteus tended to be associated with increased breast and tail brightness, and higher Leucocytozoon A parasite loads tended to be related to increased crown saturation after the moult). On the contrary, male blue tits that were more intensely infected by Leucocytozoon B parasites developed marginally duller tail feathers. It seems that the effects of parasite species on feather colour change could vary between seasons, because in a previous study during the 2012 breeding season in the same population (Badás et al. 2017), males that were more intensely infected with Haemoproteus (as opposed to Plasmodium in this study) had more saturated white cheeks. Two hypotheses can be proposed to explain why several parasite species were found to affect colouration differently in different reproductive seasons: (i) certain parasites could increase their level of virulence depending on environmental conditions (Møller et al. 2013), or (ii) infections by a certain parasite could be positively correlated with infections by another, undetected parasite that disrupts feather structure in the observed patch. For example, wild turkeys (Meleagris gallopavo) suffering from coccidiosis had reduced UV reflectance in a structural plumage patch (Hill et al. 2005). Although speculative at the moment, individuals suffering from avian malaria could be infected by other parasites such as coccidians (Isospora sp.), which have been found to infect blue tits in our population (del Cerro, S., unpublished data). In fact, multiple infections with parasites other than haemosporidians are common in this blue tit population (Merino et al. 1997; del Cerro et al. 2010). Facilitation of secondary infections when individuals are already infected has been reported in humans (Nacher et al. 2002), and in birds these could be driven by MHC alleles that alter the competitive interactions between malaria parasites (Loiseau et al. 2008). However, we cannot exclude the possibility that the observed associations reflect complex interactions between immune system responses and feather synthesis during the moult (Sanz et al. 2004; Serra et al. 2007; Orledge et al. 2012), so the marginal results should be interpreted with caution. The challenge in future studies will be to distinguish empirically whether different ornaments are redundant or non-redundant by exploring the behaviour they elicit from a recipient (Partan and Marler 1999). Our results, although correlational, suggest that better performance during the reproductive season (i.e. higher body mass and/or less intense infections by blood parasites) may have important implications for the following breeding event. Blue tit males that were in better body condition at the highly demanding nestling provisioning stage were able to develop brighter white cheek feathers after the moult. This might have enabled them to find brighter females than those they paired with in the previous spring. Although limited to one between-year shift, we also offer the first correlational evidence that intense infections by Plasmodium during a costly reproductive stage might have consequences after the moult.
A visual discrimination model confirmed that these differences in colour could be perceived by conspecifics. This study lays the groundwork for further experimental work on the carry-over effects of reproduction on ornamentation (but see Doutrelant et al. 2012) and on mating patterns. Allocating resources efficiently during reproduction between immune defence and self-maintenance may increase the resources available for the moult and thus affect mating patterns in the following reproductive period.
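Discrimination of this kind is usually quantified with the receptor-noise-limited model of Vorobyev and Osorio (1998), which expresses the distance between two reflectance spectra in just-noticeable differences (JNDs); values above roughly 1 JND are considered discriminable. Below is a minimal Python sketch of that model for a tetrachromatic (UVS-type) avian eye. The Gaussian cone sensitivities, flat illuminant, cone-abundance ratios, Weber fraction, and example spectra are all illustrative assumptions, not the calibrated inputs used in this study.

```python
import numpy as np

wl = np.arange(300, 701, 1)  # wavelength grid (nm)

def quantum_catch(refl, sens, illum):
    """Q_i: photon catch of one cone class. With a 1 nm grid, a plain sum
    approximates the integral of reflectance * sensitivity * illuminant."""
    return np.sum(refl * sens * illum)

def jnd_tetrachromat(spec_a, spec_b, sens, illum, ratios, weber=0.05):
    """Chromatic distance in just-noticeable differences (JNDs) between two
    spectra for a four-cone eye, following the receptor-noise-limited model
    (Vorobyev and Osorio 1998). Values above ~1 JND are discriminable."""
    # Receptor contrasts: log ratio of quantum catches, one per cone class
    f1, f2, f3, f4 = (np.log(quantum_catch(spec_a, s, illum) /
                             quantum_catch(spec_b, s, illum)) for s in sens)
    # Channel noise: the Weber fraction applies to the most abundant cone
    # class and is scaled up for rarer classes
    r = np.asarray(ratios, dtype=float)
    e1, e2, e3, e4 = weber / np.sqrt(r / r.max())
    num = ((e1 * e2) ** 2 * (f4 - f3) ** 2 + (e1 * e3) ** 2 * (f4 - f2) ** 2 +
           (e1 * e4) ** 2 * (f3 - f2) ** 2 + (e2 * e3) ** 2 * (f4 - f1) ** 2 +
           (e2 * e4) ** 2 * (f3 - f1) ** 2 + (e3 * e4) ** 2 * (f2 - f1) ** 2)
    den = ((e1 * e2 * e3) ** 2 + (e1 * e2 * e4) ** 2 +
           (e1 * e3 * e4) ** 2 + (e2 * e3 * e4) ** 2)
    return float(np.sqrt(num / den))

# Illustrative inputs only: Gaussian cone sensitivities (UVS, SWS, MWS, LWS
# peaks in nm), a flat illuminant, and assumed cone-abundance ratios.
sens = [np.exp(-0.5 * ((wl - p) / 40.0) ** 2) for p in (372, 456, 544, 609)]
illum = np.ones_like(wl, dtype=float)
ratios = (1.0, 1.9, 2.2, 2.1)
cheek_year1 = 0.50 + 0.25 * (wl - 300) / 400.0  # two hypothetical cheek spectra
cheek_year2 = 0.60 + 0.15 * (wl - 300) / 400.0
print(f"{jnd_tetrachromat(cheek_year1, cheek_year2, sens, illum, ratios):.2f} JND")
```

In practice, measured cone sensitivities, a field-measured irradiance spectrum, and species-specific cone ratios would replace the placeholders above; the model's output then indicates whether a given between-year colour shift exceeds the perceptual threshold of a conspecific observer.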
Male blue tits with white cheeks are healthier and more likely to mate with higher-quality partners than their counterparts with duller cheek feathers. Having purer white cheeks also indicates that a blue tit was better able to overcome parasitic infections during the previous year. This is according to Elisa Pérez Badás of the Museo Nacional de Ciencias Naturales in Spain, lead author of a study published in Springer's journal The Science of Nature.

Previous research has shown that the food consumed by a bird, as well as its general well-being, can influence the colour of its feathers. Scientists also know that hardships suffered by birds in one season can be carried over into the next. In this study, Badás and her research team wanted to test whether difficulties encountered by the blue tit (Cyanistes caeruleus) during the breeding season might influence the precise intensity of the new blue, white and yellow feathers that grow once these birds have moulted. In the life cycle of this small bird, which is widespread in forests in Europe and Western Asia, moulting happens only once the breeding season is completed. The birds therefore show off their new plumage until the end of the next breeding season.

To test these assumptions, the research team monitored a population of blue tits living in a forest in central Spain over the course of two breeding seasons. In the first season, the researchers caught the birds and took blood samples to detect whether the blue tits suffered from parasitic infections. The team also used a spectrophotometer to measure the colour spectra of the birds' feathers. These results were compared with the hues, levels of saturation and luminance that blue tits are known to see. In the following season, the researchers noted the birds' mating patterns, and how these were influenced by changes that might have occurred in particular birds' feather colours.

Overall, the researchers found that males in better physical condition (males that weighed more) during the highly demanding nestling-provisioning stage sported brighter, whiter cheeks. Those that were less intensely infected by the malaria parasite Plasmodium while breeding also showed purer white cheek feathers in winter. According to Pérez Badás, this indicates that their feathers were of better quality, and that intense parasitic infections can have an effect on a bird's life cycle. "In the following season, those males with brighter cheeks paired with females that had noticeably brighter cheek patches compared to the male's previous mate," adds Badás.

The results therefore suggest that the conditions that male blue tits experience during reproduction are likely to affect the moult and thus feather colouration, at least in their white facial feathers. This, in turn, enables the stronger males to find brighter females than the partners they paired with in the previous spring. "Members of the same species were quite able to pick up such colour differences," notes Badás.